Abstract

Data center infrastructures are highly underutilized on average. Typically, a data center manager computes
the number of servers the facility can host by dividing the total power capacity of each rack by an assigned “peak
power rating” for each server. However, this scheme suffers from the weakness of all static provisioning schemes
– it does not account for the variability of load on the servers. We propose an algorithm that studies the power
consumption behavior of the servers over time and suggests optimal ways to combine them in racks to maximize
rack power utilization. The server placement problem is a version of vector bin packing, and our solution –
RackPacker – finds a near-optimal packing efficiently using a number of domain-specific optimizations.
One of the central insights we use is that the different servers hosting a single application typically show strongly
correlated, but often somewhat time-shifted, power consumption behavior. Hence, we find servers that exhibit
anti-correlated or strongly time-shifted behavior and pack them together to maximize rack utilization. Our initial
experiments with RackPacker show substantially better results than static packing.
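The core idea of exploiting anti-correlated power traces can be illustrated with a small sketch. The following Python snippet is a simplified, hypothetical illustration only (the function names and the greedy pairing strategy are our own, not the actual RackPacker algorithm): it computes pairwise Pearson correlations between server power traces and greedily pairs each server with its most anti-correlated peer.

```python
import statistics

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length power traces."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def pair_anticorrelated(traces):
    """Greedily pair servers whose power traces are most anti-correlated.

    `traces` maps server name -> list of power samples over time.
    Returns a list of (server, partner) pairs. Illustrative sketch only;
    the real placement problem is a multi-dimensional bin packing.
    """
    names = list(traces)
    pairs = []
    while len(names) >= 2:
        s = names.pop(0)
        # Lowest correlation = most anti-correlated partner for s.
        best = min(names, key=lambda t: correlation(traces[s], traces[t]))
        names.remove(best)
        pairs.append((s, best))
    return pairs
```

Pairing anti-correlated (or strongly time-shifted) servers means their power peaks tend not to coincide, so the sum of their traces stays closer to the rack's capacity limit than a worst-case sum of peak ratings would suggest.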