Building sustainable data centers is hard, especially if you’re trying to do it in office space in Houston. And the idea of operating some kind of power-generation plant to supply renewable energy such as solar or biogas is a scary prospect for data center operators. These were among the key takeaways (along with a few less-obvious lessons) from a panel on sustainable data centers at the Open Compute Summit held today in San Antonio, Texas.
Bill Weihl, manager of energy efficiency and sustainability at Facebook and the former energy czar at Google, moderated the panel, which also featured Melissa Gray, the head of sustainability for Rackspace; Stefan Garrard, who is building a high-performance computing (HPC) cluster for oil company BP; Winston Saunders from Intel; and Jonathan Koomey, a consultant and energy-efficiency expert. While we are entering the age of 100-megawatt data centers the size of football fields, we’re also dealing with higher energy costs and concerns about how to keep our webscale infrastructure running. As part of its focus on lowering costs, the Open Compute Project spends a lot of time on sustainability.
But lowering the energy used inside a data center can only go so far. Saunders explained that chips, for example, have reached their lowest possible power utilization without new breakthroughs. Even when idle, the chips still consume 20 percent of their maximum energy draw because they can’t fully turn themselves off. The inability to power all the way down is a function of latency (once something is turned off, it takes time to turn it back on) and of the fact that powering down a chip requires the data center to stop sending it work. Data centers rarely hit that point, which means chips are always “awake” and consuming energy.
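To give a rough sense of what that 20 percent idle floor means at scale, here is an illustrative back-of-the-envelope calculation; the per-chip wattage and fleet size below are hypothetical assumptions for the sketch, not figures from the panel.

```python
# Illustrative only: rough math on the "20 percent of max draw at idle" floor
# cited on the panel. Wattage and server count are assumed, not reported.

MAX_CHIP_WATTS = 130        # assumed maximum power draw per chip
IDLE_FRACTION = 0.20        # chips still draw ~20% of max when idle
SERVERS = 10_000            # hypothetical fleet size, one chip per server

idle_watts_per_chip = MAX_CHIP_WATTS * IDLE_FRACTION
fleet_idle_kw = idle_watts_per_chip * SERVERS / 1_000

print(f"Idle draw per chip: {idle_watts_per_chip:.0f} W")
print(f"Fleet-wide idle draw: {fleet_idle_kw:.0f} kW")
# -> roughly 26 W per chip, 260 kW across the fleet, consumed even with no work queued
```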
But it’s not just the hardware. Garrard said his current high-performance computing cluster is running in office space that holds both humans and servers. He’s done a little to make things more efficient, but because of the office location and Houston’s hot and humid climate, his servers run at a power usage effectiveness (PUE) of more than 2 (Facebook, which has heavily optimized its facilities, is at about 1.07; 1.0 is ideal). So he is building out a new facility and hopes to get closer to a PUE of 1.5.
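For readers unfamiliar with the metric, PUE is simply total facility power divided by the power that actually reaches the IT equipment. The sketch below uses made-up inputs, not the panelists’ actual numbers, just to show how ratios like 2 and 1.07 fall out.

```python
# Illustrative only: PUE = total facility power / IT equipment power.
# The kilowatt figures here are invented for the example.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: 1.0 means every watt goes to the servers."""
    return total_facility_kw / it_equipment_kw

office_cluster = pue(total_facility_kw=1_000, it_equipment_kw=480)   # ~2.08
optimized_site = pue(total_facility_kw=1_000, it_equipment_kw=935)   # ~1.07

print(f"Office-space cluster PUE: {office_cluster:.2f}")
print(f"Heavily optimized PUE:    {optimized_site:.2f}")
```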
But where will the power for his and other new data centers come from? Renewables aren’t really on the list yet. When asked about using biogas systems such as those from Bloom Energy or solar, Gray said the idea of running a generation plant along with a data center was so far outside her core competency that it wasn’t really something she thought about.
Koomey, however, called the idea that data center operators have to follow in Apple’s footsteps and run their own generation (Apple is using Bloom’s boxes to power part of its new data center) a “canard,” and said operators should get renewable power from their utilities instead. Weihl, who helped Google buy wind power from providers for its data centers, agreed.
The panel essentially outlined several areas where data center infrastructure consumes energy. In an ideal world, operators could site their data centers in places that are cool and dry, and build out the ideal facility and hardware to reduce the power draw. As Koomey said, they could think “holistically.”
Unfortunately, most data centers are built in the real world, where and when they are needed with the equipment available at the time. The standards and designs offered by the Open Compute Project will help, but the real world will take its toll.