Monday, March 26, 2012

Google: Our PUE is Lower, and It's Scrupulous » Data Center Knowledge

Modern data center borrows medieval security tactics | ITworld

March 26, 2012, 3:45 PM. Its location cannot be disclosed (it is somewhere on the eastern seaboard), it has a drainage pond that acts as a moat, and it has seven interconnected 20,000-square-foot areas, or pods.

It is Visa’s data center, featured in this article on USA Today. A good read, for sure.

Fast Company wrote about the data center – known as OCE, or Operations Center East – last year. That article is equally interesting, and equally a good read.

Visits to the data center are reportedly rare. Visa, after all, has to keep safe a gazillion credit card transactions. Security there is impressive, and so is the redundancy. There’s plenty of high-tech protection, but there’s also some old-fashioned security that harks back to the Middle Ages.

There are lots of interesting features in this data center, which Visa officially opened in 2009. At the time, the company said the new facility became its second North American data center, giving it two synchronized centers that are each capable of supporting Visa's entire global payments volume in the event of a natural disaster or systems outage. There is instant fail-over technology between all four of Visa’s data centers on three continents. The data center can process hundreds of millions of transactions each day, and more than 10,000 transaction messages per second. It has more than 140,000 square feet of raised floor space.
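As a quick sanity check on those throughput figures (our arithmetic, not Visa’s): 10,000 messages per second, if it could be sustained around the clock, works out to roughly 864 million messages a day, which squares with “hundreds of millions of transactions each day.” A minimal sketch:

```python
# Rough sanity check of the throughput figures quoted above.
# The 10,000 messages/second figure is from the article; sustaining it around
# the clock is our assumption, used only to compute a daily ceiling.
PEAK_MESSAGES_PER_SECOND = 10_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

daily_ceiling = PEAK_MESSAGES_PER_SECOND * SECONDS_PER_DAY
print(f"Ceiling at the quoted peak rate: {daily_ceiling:,} messages per day")  # 864,000,000
```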

But back to the moat, which is detailed in both aforementioned articles. Apparently, the moat is there to catch vehicles that might speed up the road leading to the OCE and break through the hydraulic guards, which can be raised quickly to stop unauthorized vehicles. The guards can stop a car going up to 50 miles per hour. If a car does break through, there’s a razor-sharp turn it can’t navigate, and the driver will most likely end up in the moat (posing as the drainage pond). Okay, so the drainage pond is no San Francisco Bay, which as we all know served as Alcatraz’s moat. But the OCE’s drainage pond and unforgiving road are still pretty impressive.

There are other security features that are pretty impressive, according to the articles. The facility is built to withstand earthquakes and hurricane-force winds of up to 170 mph (which means it can hold its own against a Category 5 storm). In the event of an outage, two diesel generators totaling 4 megawatts can keep the center running for nine days. Each of the pods has two rooms with 1,000 heavy-duty batteries each.
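For a sense of scale on that backup claim, here is a back-of-the-envelope sketch. The 4 MW capacity and nine-day run time come from the articles; the diesel consumption rate is an assumed illustrative figure, not a published Visa number.

```python
# Back-of-the-envelope estimate of the nine-day backup run described above.
# Generator capacity and run time are from the article; the fuel-burn rate
# is an assumed round number used for illustration only.
GENERATOR_CAPACITY_KW = 4_000        # two generators totaling 4 MW
RUN_TIME_HOURS = 9 * 24              # nine days
ASSUMED_GAL_PER_KWH = 0.07           # illustrative diesel consumption at full load

energy_kwh = GENERATOR_CAPACITY_KW * RUN_TIME_HOURS
fuel_gallons = energy_kwh * ASSUMED_GAL_PER_KWH
print(f"Energy delivered: {energy_kwh:,.0f} kWh (~{energy_kwh / 1000:,.0f} MWh)")
print(f"On-site fuel implied by our assumption: ~{fuel_gallons:,.0f} gallons")
```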

Getting in requires clearance from a guard station and from a mobile security guard who apparently rides around in a golf cart (according to the USA Today article). It also requires a photo and fingerprints, which are needed so the visitor can move through a security portal that checks a badge and a biometric image of his or her fingerprint.

The Fast Company article reports that the OCE was built to meet the Uptime Institute’s standard for a Tier 4 center, the highest ranking in the institute’s classification system for evaluating data center infrastructure against a business’s requirements for system availability.


Thursday, March 22, 2012

VCE and VMware: Virtualizing Your Infrastructure Just Got Easier datacenter dcim http://bit.ly/GNCAUe
Uptime Institute 7th Symposium, Featuring Digital Infrastructure Convergence and a Modular DataCenter Campus http://bit.ly/GG69pg DCIM

Wednesday, March 21, 2012

How data center managers can be prepared for the 10 most common surprises

1. Those high-density predictions are finally coming true: After rapid growth early in the century, projections of double-digit rack densities have been slow to come to fruition. Average densities hovered between 6.0 and 7.4 kW per rack from 2006 to 2009, but the most recent Data Center Users’ Group (DCUG) survey predicted average rack densities will reach 12.0 kW within three years. That puts a premium on adequate UPS capacity and power distribution, as well as cooling to handle the corresponding heat output.

2. Data center managers will replace servers three times before they replace UPS or cooling systems: Server refreshes happen approximately every three years. Cooling and UPS systems are expected to last much longer, sometimes decades. That means the infrastructure organizations invest in today must be able to support, or more accurately scale to support, servers that may be two, three or even four generations removed from today’s models. What does that mean for today’s data center manager? It makes it imperative that today’s infrastructure technologies have the ability to scale to support future needs. Modular solutions can scale to meet both short- and long-term requirements.

3. Downtime is expensive: Everyone understands downtime is bad, but the actual costs associated with an unplanned outage are stunning. According to a Ponemon Institute study, an outage can cost an organization an average of about $5,000 per minute. That’s $300,000 in just an hour (see the cost sketch after this list). The same study indicates the most common causes of downtime are UPS battery failure and exceeding UPS capacity. Avoid those problems by investing in the right UPS, adequately sized to support the load, and by proactively monitoring and maintaining batteries.

4. Water and the data center do not mix – but we keep trying: The first part of this probably isn’t a surprise. Sensitive IT equipment does not respond well to water. However, the aforementioned Ponemon study indicates 35 percent of all unplanned outages are the result of some type of water incursion. These aren’t just leaky valves; in fact, many water-related outages are the result of a spilled coffee or diet soda. The fix: check those valves, but more importantly, check the drinks at the door.

5. New servers use more power than old servers: Server consolidation and virtualization can shrink server inventory by huge numbers, but that doesn’t exactly equate to huge energy savings. New virtualized servers, especially powerful blade servers, can consume four or five times as much energy as those from the preceding generation (although they usually do so more efficiently). The relatively modest savings at the end of that consolidation project may come as a surprise. There is no fix for this, but prepare for it by making sure the infrastructure is adequate to support the power and cooling needs of these new servers.

6. Monitoring is a mess: IT managers have more visibility into their data centers than ever before, but accessing and making sense of the data that comes with that visibility can be a daunting task. According to a survey of data center professionals, data center managers use, on average, at least four different software platforms to manage their physical infrastructure. Forty-one percent of those surveyed say they produce three or more reports for their supervisors every month, and 34 percent say it takes three hours or more to prepare those reports. The solution? Move toward a single monitoring and management platform. Today’s DCIM solutions can consolidate that information and proactively manage the infrastructure to improve energy and operational efficiency, and even availability.

7. The IT guy is in charge of the building’s HVAC system: The gap between IT and Facilities is shrinking, and the lion’s share of the responsibility for both pieces is falling on IT professionals. Traditionally, IT and data center managers have had to work through Facilities when they need more power or cooling to support increasing IT needs. That process is being streamlined, thanks in large part to those aforementioned DCIM solutions that increase visibility and control over all aspects of a building’s infrastructure. Forward-thinking data center managers are developing a DCIM strategy to help them understand this expansion of their roles and responsibilities.

8. That patchwork data center needs to be a quilt: In the past, data center managers freely mixed and matched components from various vendors because those systems worked together only tangentially. That is changing. The advent of increasingly intelligent, dynamic infrastructure technologies and monitoring and management systems has increased the amount of actionable data across the data center, delivering real-time modeling capabilities that enable significant operational efficiencies. IT and infrastructure systems still can work independently, but to truly leverage the full extent of their capabilities, integration is imperative.

9. Data center on demand is a reality: The days of lengthy design, order and deployment delays are over. Today there are modular, integrated, rapidly deployable data center solutions for any space. Integrated, virtually plug-and-play solutions that include rack, server, power and cooling can be installed easily in a closet or conference room. On the larger end, containerized data centers can be used to quickly establish a network or to add capacity to an existing data center. The solution to most problems is a phone call away.

10. IT loads vary – a lot: Many industries see extreme peaks and valleys in their network usage. Financial institutions, for example, may see heavy use during traditional business hours and virtually nothing overnight. Holiday shopping and tax seasons also can create unusual spikes in IT activity. Businesses that depend on their IT systems during these times need to have the capacity to handle the peaks, but often operate inefficiently during the valleys. A scalable infrastructure with intelligent controls can adjust to those highs and lows to ensure efficient operation.

http://www.datacenterworld.com/preparing-for-10-common-data-center-surprise
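The downtime math in item 3 is easy to reproduce. A minimal sketch using the Ponemon per-minute average cited above; the sample outage durations are hypothetical:

```python
# Outage cost estimate using the Ponemon average cited in item 3.
# The per-minute figure comes from the article; the sample durations are hypothetical.
COST_PER_MINUTE = 5_000  # USD, average cost of an unplanned outage per the cited study

def outage_cost(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Return the estimated cost of an unplanned outage of the given length."""
    return minutes * cost_per_minute

for minutes in (10, 60, 4 * 60):  # hypothetical outage lengths
    print(f"{minutes:4d} min outage -> ~${outage_cost(minutes):,.0f}")
# The 60-minute case reproduces the ~$300,000-per-hour figure in item 3.
```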

Tuesday, March 13, 2012

Liquid Cooling: The Wave of the Present

Cooling in most data centers uses air, and it does so for many practical reasons. Air is ubiquitous, it generally poses no danger to humans or equipment, it doesn’t conduct electricity, it’s fairly easy to move and it’s free. But air also falls short on a number of counts: for instance, it’s not very thermally efficient (i.e., it doesn’t hold much heat relative to liquids), so it’s impractical for cooling high-density implementations. Naturally, then, some companies are turning to water (or other liquids) as a means to increase efficiency and provide more-targeted cooling for high-power computing and similar situations. But will water wash away air as competition for cooling in the data center?

Why Water?

One of the main draws of liquid cooling is the greater ability of liquids (relative to air) to capture and hold heat—it takes more heat to warm a certain amount of water to a given temperature than to warm the same amount of air to the same temperature. Thus, a smaller amount of water can accomplish the same heat capture and removal as a relatively large amount of air, enabling a targeted cooling strategy (the entire data center need not be flooded to keep it sufficiently cool). And with the rising cost of energy and the growing power appetite of data centers, the greater energy efficiency of water is a temptation to companies.
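The physics behind that claim checks out with textbook numbers. The sketch below compares the coolant flow needed to carry away the same heat load with water versus air, using the basic relation Q = ṁ × cp × ΔT; the fluid properties are standard values, and the 10 kW load and 10 °C temperature rise are hypothetical inputs.

```python
# Compare the coolant flow needed to remove the same heat load with water vs. air,
# using Q = m_dot * c_p * delta_T. Fluid properties are textbook values; the rack
# load and allowed temperature rise are hypothetical inputs chosen for illustration.
CP_WATER = 4186.0    # J/(kg*K), specific heat of liquid water
CP_AIR = 1006.0      # J/(kg*K), specific heat of air
RHO_WATER = 1000.0   # kg/m^3
RHO_AIR = 1.2        # kg/m^3, near sea level and room temperature

heat_load_w = 10_000.0   # hypothetical 10 kW rack
delta_t_k = 10.0         # hypothetical 10 K coolant temperature rise

water_kg_s = heat_load_w / (CP_WATER * delta_t_k)
air_kg_s = heat_load_w / (CP_AIR * delta_t_k)

water_l_s = water_kg_s / RHO_WATER * 1000
air_m3_s = air_kg_s / RHO_AIR

print(f"Water: {water_kg_s:.2f} kg/s (~{water_l_s:.2f} L/s)")
print(f"Air:   {air_kg_s:.2f} kg/s (~{air_m3_s:.2f} m^3/s)")
print(f"Volumetric ratio: roughly {air_m3_s / (water_l_s / 1000):,.0f}x more air by volume")
```

Under these assumptions, about a quarter of a liter of water per second does the work of several thousand times that volume of air, which is the efficiency argument in a nutshell.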

For high-density implementations, air cooling is often simply insufficient—particularly when a whole-room cooling approach is used. In such cases, water (or, more generally, liquid) cooling offers an alternative in which not only can greater cooling efficiency be applied, it can be applied only where it is needed. That is, why cool an entire data center room when you can just cool, say, a particular high-density cabinet? And when individual cabinets are kept cool, they can also be placed much closer together, since air flow is less of a concern. Thus, liquid cooling can also enable more-efficient use of precious floor space.

Of course, no solution is ideal. Liquid cooling has its drawbacks, just as air does, and they’re worth noting. But it’s helpful to first review some liquid cooling solutions that are now in use in data centers.

Liquid Cooling Solutions

Liquids can serve as a medium for transporting heat in a number of different ways, ranging  from broader cooling of the entire computer room to targeted cooling of particular racks/cabinets or even particular servers. The most basic option is to use water just as the means of moving heat from the computer room to the outside environment. In a whole-room approach to cooling, computer room air handlers (CRAHs) use chilled water to provide the necessary cooling; the water then moves the collected heat through pipes out of the building, where it is released to the environment (in part through evaporation, which is helpful but also can consume large amounts of water). This cooling approach is similar to the use of computer room air conditioners (CRACs).

A more targeted approach involves supplying cool water to the rack or cabinet. In the case of enclosed cabinets, only the space surrounding the servers need be cooled—the remainder of the room is largely unexposed to the heat produced by the IT equipment. This approach is similar to the whole-room case using CRAHs, except that only small spaces (the interiors of the cabinets) are cooled.

Cooling can be targeted even more directly by integrating the liquid system into the servers themselves. For instance, Asetek provides a system in which the CPU is cooled by a self-contained liquid apparatus inside the server. Essentially, it’s a miniature version of the larger data center cooling system, except the interior of the server is the “inside” and the exterior is the “environment.” Of course, the heat must still be removed from the rack or cabinet—which can be handled by any of the above systems.

At the extreme end of liquid cooling are submersion-based systems, where servers are actually immersed in a dielectric (non-conducting) liquid. For example, Green Revolution Cooling offers an immersion system, CarnotJet, that uses refined mineral oil as the coolant. Servers are placed directly in the liquid, yielding the benefits of water cooling without the hassles of containment, since mineral oil is non-conducting and won’t short out electronics. Using this system requires several fairly simple modifications, including use of a special coating on hard drives (since they cannot function in liquid) and removal of fans.

Going a step further, some solutions even use liquid inside the server only, avoiding the need for vats of liquid. In either case, however, the heat must still be removed from the server or liquid vat.

Now for the Downsides

Depending on the particular implementation, liquid cooling obviously poses some risks and other downsides. Air is convenient, partially because of its ubiquity; no one worries about an air leak. A water leak, on the other hand—particularly in a data center—is a potential disaster. Moving water requires its own infrastructure, and although moving air does as well, a leak from an air duct is much less problematic than a leak from a water pipe. Furthermore, water pipes can produce condensation, which can be a problem even in a system with no leaks. And with more-stringent requirements on infrastructure comes greater cost: infrastructure for liquid cooling requires greater capital expenses compared with air-based systems.

Another concern is water consumption. Evaporative cooling converts liquid water to water vapor, meaning the data center must have a steady supply. Furthermore, blowdown (liquid water released to the environment) can be problematic—not because it is polluted (it shouldn’t be), but simply because it’s warm. When warm water drains into a river or stream, for instance, it can damage the existing ecosystem.
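To put the consumption side in perspective, the sketch below gives a rough lower-bound estimate based on water’s latent heat of vaporization (about 2,260 kJ/kg); the 1 MW heat-rejection load is hypothetical, and real cooling towers lose additional water to blowdown and drift.

```python
# Lower-bound estimate of evaporative water use for a given heat-rejection load.
# Uses the latent heat of vaporization of water; the 1 MW load is hypothetical,
# and real cooling towers consume additional water through blowdown and drift.
LATENT_HEAT_J_PER_KG = 2_260_000.0   # approximate latent heat of vaporization of water

heat_rejected_w = 1_000_000.0        # hypothetical 1 MW rejected by evaporation

evap_kg_per_s = heat_rejected_w / LATENT_HEAT_J_PER_KG
evap_liters_per_hour = evap_kg_per_s * 3600  # 1 kg of water is roughly 1 liter

print(f"Evaporation: ~{evap_kg_per_s:.2f} kg/s, or roughly {evap_liters_per_hour:,.0f} L/hour")
```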

Proponents of liquid cooling approaches cite the energy efficiency improvements that their systems can provide—estimates range as high as a 50% cut in total data center energy consumption. These numbers, of course, depend on the situation (where the particular data center starts and what type of system it installs), but the returns only begin after the infrastructure has been paid off. Also, given the greater infrastructure needs of liquid cooling, maintenance may be more of a concern. Furthermore, in cases where water is consumed by the cooling process, some energy efficiency comes at the cost of a high water bill.
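That “returns only begin after the infrastructure has been paid off” point is just a simple-payback calculation. The sketch below uses entirely hypothetical inputs (capital premium, baseline consumption, utility rate and savings fraction); the only figure tied to the article is that savings estimates range up to about 50 percent.

```python
# Simple-payback sketch for a liquid cooling investment. Every input here is a
# hypothetical placeholder; substitute real quotes and utility rates to use it.
capex_premium_usd = 500_000.0               # assumed extra capital cost vs. air cooling
baseline_energy_kwh_per_year = 8_000_000.0  # assumed annual data center consumption
energy_price_usd_per_kwh = 0.10             # assumed utility rate
assumed_savings_fraction = 0.25             # assumed share saved (article cites up to ~50%)

annual_savings = baseline_energy_kwh_per_year * energy_price_usd_per_kwh * assumed_savings_fraction
payback_years = capex_premium_usd / annual_savings

print(f"Annual savings: ~${annual_savings:,.0f}")
print(f"Simple payback: ~{payback_years:.1f} years (ignores maintenance and water costs)")
```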

Liquid Cooling to Stay

In some cases, particularly in lower-density data centers, air cooling may be the best option, if for no other reason than that it is simpler and cheaper to implement. Questions linger about the point at which liquid cooling becomes financially beneficial (“Data Center Myth Disproved—Water Cooling Cost-effective Below 6kW/rack”). For high-density configurations, however, liquid may be the only viable option, and as high-power computing grows, so will the emphasis on liquid cooling. Air cooling simply has too many benefits, many of which center on its simplicity, to expect that liquid cooling will one day be the only cooling approach. Nevertheless, liquid/water cooling has an established position in the data center (particularly the high-density data center) that will grow over time. The only question is how much of the market will implement some form of liquid cooling, and which types of liquid cooling solutions will be the most prevalent.

Liquid Cooling: The Wave of the Present datacenter datacenters datacentre http://bit.ly/y2ek7v

Data Center Pulse IDs Top 10 Industry Challenges

  • Can IT and facilities work together to make data centers more effective? Will modular designs and renewable energy gain a greater foothold? Those are among the key challenges facing the industry, according to Data Center Pulse, which has released its annual “Top 10” list of priorities for data center users.

    The Top 10 was released in conjunction with last week’s Data Center Pulse Summit 2012, which was held along with The Green Grid Technical Forum in San Jose, Calif. The 2012 list features some substantial changes from 2011 and previous years, as end users focus on applying new approaches to data center design and management. Six entries in the Top 10 are new, including the number one challenge. Here’s the list:


    1. Facilities and IT Alignment: One of the oldest problems in the data center sector is the disconnect between facilities and IT, which effectively separates workloads from cost. This issue is focused on end users rather than vendors, who have traditionally been the intended audience for the Top 10. Large organizations like Microsoft, Yahoo, eBay, Google and Facebook have eliminated many of the barriers between facilities and IT, a trend that needs to penetrate the rest of the industry, Data Center Pulse says.

    2. Simple, Top-Level Efficiency Metric: Data Center Pulse has proposed a new metric, Service Efficiency, which was reviewed by 50 DCP members during a two-and-a-half-hour discussion at the Summit. The proposal is a framework intended to show the actual MPG (miles per gallon) for work done in a data center. Details of the metric will be released publicly this summer. For more background, see this video.

    3. Standardized Stack Framework: One of Data Center Pulse’s key initiatives has been the development of its Data Center Stack, similar to the OSI Model that divides network architecture into seven levels. The standardized Data Center Stack segments operations into seven layers to help align facilities and IT. Version 2.1 has been released, and DCP intends to continue to work with other industry groups like The Green Grid to advance the stack.

    4. Move from Availability to Resiliency: With many web applications now spread across multiple facilities or regions, the probability of failure in full systems has become more important than specific site uptime, according to DCP. That means a shift toward focusing on application resiliency, as opposed to the performance of specific buildings. Aligning applications to availability requirements can lower costs, even as it increases the uptime of a service (see the availability sketch at the end of this list).

    5. Renewable Power Options: The lack of cost-effective renewable power is a growing problem for data centers, which use enormous volumes of electricity. Data Center Pulse sees potential for progress in approaches that have worked in Europe, where renewable power is more readily available than in the U.S. These include focusing business development opportunities at the state level, and encouraging alignment between end users, utilities, government and developers.

    6. “Containers” vs. Brick & Mortar: Are containers and modular designs a viable option? As more companies adopt modular approaches, Data Center Pulse notes that “one size does not fit all, but it is fitting quite a bit more.” For many companies, the best strategy is likely to be a hybrid approach that combines modular designs with traditional data center space for different workloads.

    7. Hybrid DC Designs: The hybrid approach applies to modular designs, but also the Tier system and the redundancy of mechanical and electrical systems. A growing number of data centers are saving money by segmenting facilities into zones with different levels of redundancy appropriate to defined workloads. “Modularity, multi-tier, application resiliency, standardization and supply chain are driving different discussions,” notes DCP. “Traditional approach to designing and building data centers may be missing business opportunities.”

    8. Liquid Cooled IT Equipment Options: For many IT operations, the goal is to increase the amount of work done per watt. That is leading to higher power densities, which is testing the limits of air cooling. As more data centers come to resemble high performance computing (HPC) installations, DCP says direct liquid cooling will become a more attractive option.

    9. Free Cooling “Everywhere”: The recent Green Grid case study on the eBay Project Mercury data center demonstrates that 100 percent free cooling is possible in places like Phoenix, where the temperature can exceed 100 degrees in the summer. Data Center Pulse says that as data center designers challenge assumptions, new designs and products will support free cooling everywhere.

    10. Converged Infrastructure Intelligence: Data center operators now need to treat their infrastructure as a single system, and need to be able to measure and control many elements of their facilities. That means converging the infrastructure instrumentation and control systems, and connecting them to IT systems. Data center infrastructure management (DCIM) is part of this trend, but it will also be important to standardize connections and protocols to connect components, according to Data Center Pulse.
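To illustrate the shift from availability to resiliency flagged in item 4: when a service can run from more than one site, the combined availability rises sharply even if each individual site is unremarkable. The sketch below assumes hypothetical per-site availabilities, independent failures and instantaneous failover, which real deployments only approximate.

```python
# Combined availability of a service that stays up as long as at least one of
# its sites is up. Per-site availabilities are hypothetical; the model assumes
# independent failures and instantaneous failover.
from math import prod

def combined_availability(site_availabilities: list[float]) -> float:
    """Probability that at least one independent site is available."""
    return 1.0 - prod(1.0 - a for a in site_availabilities)

single_site = 0.999                      # hypothetical "three nines" site
two_sites = combined_availability([single_site, single_site])

minutes_per_year = 365 * 24 * 60
print(f"One site:  {single_site:.6f} -> ~{(1 - single_site) * minutes_per_year:,.0f} min/yr down")
print(f"Two sites: {two_sites:.6f} -> ~{(1 - two_sites) * minutes_per_year:,.0f} min/yr down")
```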