Monday, March 26, 2012
Modern data center borrows medieval security tactics | ITworld
March 26, 2012, 3:45 PM — Its location cannot be disclosed (it is somewhere on the eastern seaboard), it has a drainage pond that acts as a moat, and it has seven 20,000-square-foot areas (or pods), all connected.
It is Visa’s data center, featured in this article on USA Today. A good read, for sure.
Fast Company wrote about the data center – known as OCE, or Operations Center East – last year. That article is equally interesting, and equally a good read.
Visits to the data center are reportedly rare. Visa, after all, has to keep safe a gazillion credit card transactions. Security there is impressive, and so is redundancy. There's plenty of high-tech protection, but there's also some old-fashioned security that harkens back to the Middle Ages.
There are lots of interesting features in this data center, which Visa officially opened in 2009. At the time, the company said the new data center would become its second North American facility, giving it two synchronized centers that are each capable of supporting Visa's entire global payments volume in the event of a natural disaster or systems outage. There is instant fail-over technology between all four of Visa's data centers on three continents. The data center can process hundreds of millions of transactions each day – and more than 10,000 transaction messages per second. It has more than 140,000 square feet of raised floor space.
But back to the moat, which is detailed in both aforementioned articles. Apparently, the moat is there to catch vehicles that speed up the road leading to the OCE and break through the hydraulic barriers, which can be raised quickly to stop unauthorized vehicles. The barriers can stop a car going up to 50 miles per hour. If a car does break through, there's a razor-sharp turn it can't navigate, and the driver will most likely end up in the moat (posing as the drainage pond). Okay, so the drainage pond is no San Francisco Bay, which as we all know served as Alcatraz's moat. But the OCE's drainage pond and unwieldy road are still pretty impressive.
There are other security features that are pretty impressive, according to the articles. The facility is built to withstand earthquakes and hurricane-force winds of up to 170 mph (meaning it can hold its own against a Cat 5). In the event of an outage, two diesel generators totaling 4 megawatts of power can keep the center running for nine days. Each of the pods has two rooms with 1,000 heavy-duty batteries apiece.
Getting in requires clearance from a guard station and from a mobile security guy apparently riding around in a golf cart (according to the USA Today article). It also requires a photo and fingerprints, which are needed so the visitor can move through a security portal that requires a badge and a biometric image of his or her fingerprint.
The Fast Company article reports that the OCE was built to meet the Uptime Institute’s standard of a Tier 4 center, the highest ranking of the institute’s classification system for evaluating data center infrastructure in terms of a business’ requirements for system availability.
Wednesday, March 21, 2012
How data center managers can be prepared for the 10 most common surprises
Tuesday, March 13, 2012
Liquid Cooling: The Wave of the Present
Cooling in most data centers uses air, and it does so for many practical reasons. Air is ubiquitous, it generally poses no danger to humans or equipment, it doesn’t conduct electricity, it’s fairly easy to move and it’s free. But air also falls short on a number of counts: for instance, it’s not very thermally efficient (i.e., it doesn’t hold much heat relative to liquids), so it’s impractical for cooling high-density implementations. Naturally, then, some companies are turning to water (or other liquids) as a means to increase efficiency and provide more-targeted cooling for high-power computing and similar situations. But will water wash away air as competition for cooling in the data center?
Why Water?
One of the main draws of liquid cooling is the greater ability of liquids (relative to air) to capture and hold heat—it takes more heat to warm a certain amount of water to a given temperature than to warm the same amount of air to the same temperature. Thus, a smaller amount of water can accomplish the same heat capture and removal as a relatively large amount of air, enabling a targeted cooling strategy (the entire data center need not be flooded to keep it sufficiently cool). And with the rising cost of energy and the growing power appetite of data centers, the greater energy efficiency of water is a temptation to companies.
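To put rough numbers on that comparison, here is a quick back-of-the-envelope sketch in Python using approximate room-temperature property values; the exact figures vary with temperature, but the gap between air and water is enormous either way.

```python
# Back-of-the-envelope comparison of how much heat a given volume of water
# vs. air can carry per degree of temperature rise (volumetric heat capacity).
# Property values are rough room-temperature figures, used only for illustration.

AIR_DENSITY = 1.2           # kg/m^3
AIR_SPECIFIC_HEAT = 1005    # J/(kg*K)
WATER_DENSITY = 997         # kg/m^3
WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

air_volumetric = AIR_DENSITY * AIR_SPECIFIC_HEAT        # ~1.2 kJ/(m^3*K)
water_volumetric = WATER_DENSITY * WATER_SPECIFIC_HEAT  # ~4,170 kJ/(m^3*K)

print(f"Air:   {air_volumetric / 1000:8.1f} kJ per m^3 per K")
print(f"Water: {water_volumetric / 1000:8.1f} kJ per m^3 per K")
print(f"Ratio: roughly {water_volumetric / air_volumetric:,.0f}x")  # ~3,500x
```

In other words, per unit of volume, water carries on the order of a few thousand times more heat than air for the same temperature rise, which is why a modest water loop can replace a very large volume of moving air.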
For high-density implementations, air cooling is often simply insufficient—particularly when a whole-room cooling approach is used. In such cases, water (or, more generally, liquid) cooling offers an alternative in which not only can greater cooling efficiency be applied, it can be applied only where it is needed. That is, why cool an entire data center room when you can just cool, say, a particular high-density cabinet? And when individual cabinets are kept cool, they can also be placed much closer together, since air flow is less of a concern. Thus, liquid cooling can also enable more-efficient use of precious floor space.
Of course, no solution is ideal. Liquid cooling has its drawbacks, just as air does, and they’re worth noting. But it’s helpful to first review some liquid cooling solutions that are now in use in data centers.
Liquid Cooling Solutions
Liquids can serve as a medium for transporting heat in a number of different ways, ranging from broader cooling of the entire computer room to targeted cooling of particular racks/cabinets or even particular servers. The most basic option is to use water just as the means of moving heat from the computer room to the outside environment. In a whole-room approach to cooling, computer room air handlers (CRAHs) use chilled water to provide the necessary cooling; the water then moves the collected heat through pipes out of the building, where it is released to the environment (in part through evaporation, which is helpful but also can consume large amounts of water). This cooling approach is similar to the use of computer room air conditioners (CRACs).
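As a rough illustration of what the chilled-water loop has to do, the sketch below uses the basic relation Q = ṁ·c·ΔT to estimate the water flow needed for an assumed heat load and coil temperature rise; the 200 kW load and 6 K ΔT are placeholder values chosen for the example, not figures from the articles.

```python
# Rough sizing sketch: chilled-water mass flow needed to carry away a given
# heat load, from Q = m_dot * c_p * delta_T. Numbers are illustrative only.

WATER_SPECIFIC_HEAT = 4186   # J/(kg*K)
WATER_DENSITY = 997          # kg/m^3

heat_load_w = 200_000        # assumed 200 kW of IT heat handled by the CRAH loop
delta_t_k = 6.0              # assumed supply/return temperature rise across the coil

mass_flow = heat_load_w / (WATER_SPECIFIC_HEAT * delta_t_k)  # kg/s
volume_flow_lps = mass_flow / WATER_DENSITY * 1000           # liters per second

print(f"Required flow: {mass_flow:.1f} kg/s (~{volume_flow_lps:.1f} L/s)")
# -> roughly 8 kg/s, i.e. about 8 liters of chilled water per second
```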
A more targeted approach involves supplying cool water to the rack or cabinet. In the case of enclosed cabinets, only the space surrounding the servers need be cooled—the remainder of the room is largely unexposed to the heat produced by the IT equipment. This approach is similar to the whole-room case using CRAHs, except that only small spaces (the interiors of the cabinets) are cooled.
Cooling can be targeted even more directly by integrating the liquid system into the servers themselves. For instance, Asetek provides a system in which the CPU is cooled by a self-contained liquid apparatus inside the server. Essentially, it’s a miniature version of the larger data center cooling system, except the interior of the server is the “inside” and the exterior is the “environment.” Of course, the heat must still be removed from the rack or cabinet—which can be handled by any of the above systems.
At the extreme end of liquid cooling are submersion-based systems, where servers are actually immersed in a dielectric (non-conducting) liquid. For example, Green Revolution Cooling offers an immersion system, CarnotJet, that uses refined mineral oil as the coolant. Servers are placed directly in the liquid, yielding the benefits of water cooling without the hassles of containment, since mineral oil is non-conducting and won’t short out electronics. Using this system requires several fairly simple modifications, including use of a special coating on hard drives (since they cannot function in liquid) and removal of fans.
Going a step further, some solutions even use liquid inside the server only, avoiding the need for vats of liquid. In either case, however, the heat must still be removed from the server or liquid vat.
Now for the Downsides
Depending on the particular implementation, liquid cooling obviously poses some risks and other downsides. Air is convenient, partially because of its ubiquity; no one worries about an air leak. A water leak, on the other hand—particularly in a data center—is a potential disaster. Moving water requires its own infrastructure, and although moving air does as well, a leak from an air duct is much less problematic than a leak from a water pipe. Furthermore, water pipes can produce condensation, which can be a problem even in a system with no leaks. And with more-stringent requirements on infrastructure comes greater cost: infrastructure for liquid cooling requires greater capital expenses compared with air-based systems.
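To illustrate the condensation point, here is a small sketch that estimates the room's dew point with the Magnus approximation and flags a pipe whose surface temperature sits below it; the room conditions and pipe temperature are assumptions chosen for illustration.

```python
import math

# Sketch of a condensation check: if a chilled-water pipe's surface temperature
# is below the room's dew point, it will sweat. Dew point is estimated with the
# Magnus approximation; constants and room conditions are illustrative.

A, B = 17.62, 243.12  # Magnus coefficients (deg C)

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    gamma = math.log(rel_humidity_pct / 100.0) + (A * temp_c) / (B + temp_c)
    return B * gamma / (A - gamma)

room_temp_c = 24.0    # assumed data center air temperature
room_rh_pct = 50.0    # assumed relative humidity
pipe_surface_c = 8.0  # assumed chilled-water pipe surface temperature

dp = dew_point_c(room_temp_c, room_rh_pct)
print(f"Dew point: {dp:.1f} C")  # ~12.9 C for these conditions
print("Condensation risk!" if pipe_surface_c < dp else "No condensation expected.")
```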
Another concern is water consumption. Evaporative cooling converts liquid water to water vapor, meaning the data center must have a steady supply. Furthermore, blowdown (liquid water released to the environment) can be problematic—not because it is polluted (it shouldn’t be), but simply because it’s warm. When warm water drains into a river or stream, for instance, it can damage the existing ecosystem.
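For a rough sense of scale, the sketch below estimates make-up water from the physics of evaporation alone (latent heat of roughly 2,400 kJ per kilogram near ambient temperatures) for an assumed 1 MW heat load; real towers also lose water to blowdown and drift, so actual consumption runs higher.

```python
# Rough lower bound on make-up water for evaporative heat rejection: every
# kilogram of water evaporated carries away its latent heat of vaporization.
# Blowdown and drift add to this, so real consumption is higher.

LATENT_HEAT_KJ_PER_KG = 2400  # approx. latent heat of vaporization near ambient
KJ_PER_KWH = 3600

heat_rejected_kw = 1000       # assumed 1 MW of heat rejected via the cooling tower
hours = 24

heat_kwh = heat_rejected_kw * hours
water_evaporated_l = heat_kwh * KJ_PER_KWH / LATENT_HEAT_KJ_PER_KG  # ~1 L per kg

print(f"Evaporation alone: ~{water_evaporated_l:,.0f} liters per day")  # ~36,000 L
```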
Proponents of liquid cooling approaches cite the energy efficiency improvements that their systems can provide—estimates range as high as a 50% cut in total data center energy consumption. These numbers, of course, depend on the situation (where the particular data center starts and what type of system it installs), but the returns only begin after the infrastructure has been paid off. Also, given the greater infrastructure needs of liquid cooling, maintenance may be more of a concern. Furthermore, in cases where water is consumed by the cooling process, some energy efficiency comes at the cost of a high water bill.
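To see why the payback point matters, here is a minimal simple-payback sketch; every number in it (consumption, energy price, savings fraction, capex premium) is a hypothetical placeholder, not a vendor figure.

```python
# Minimal simple-payback sketch for a liquid-cooling retrofit. All figures
# (capex premium, energy price, savings fraction) are hypothetical placeholders.

baseline_energy_kwh_per_year = 8_000_000  # assumed annual facility consumption
energy_price_per_kwh = 0.10               # assumed $/kWh
savings_fraction = 0.30                   # assumed share of energy saved by liquid cooling
extra_capex = 1_500_000                   # assumed additional infrastructure cost ($)

annual_savings = baseline_energy_kwh_per_year * energy_price_per_kwh * savings_fraction
payback_years = extra_capex / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")  # savings only accrue after this point
```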
Liquid Cooling to Stay
In some cases, particularly in lower-density data centers, air cooling may be the best option, if for no other reason than it is simpler and lower in cost to implement. Questions linger regarding at what point liquid cooling becomes financially beneficial (“Data Center Myth Disproved—Water Cooling Cost-effective Below 6kW/rack”). For high-density configurations, however, liquid may be the only viable option, and as high-power computing grows, so will an emphasis on liquid cooling. Air cooling simply has too many benefits—many of which center on its simplicity—to expect that liquid cooling will one day be the only cooling approach. Nevertheless, liquid/water cooling has an established position in the data center (particularly the high-density data center) that will grow over time. The only question is how much of the market will implement some form of liquid cooling, and which types of liquid cooling solutions will be the most prevalent.
Data Center Pulse IDs Top 10 Industry Challenges
Can IT and facilities work together to make data centers more effective? Will modular designs and renewable energy gain a greater foothold? Those are among the key challenges facing the industry, according to Data Center Pulse, which has released its annual "Top 10" list of priorities for data center users.
The Top 10 was released in conjunction with last week's Data Center Pulse Summit 2012, which was held along with The Green Grid Technical Forum in San Jose, Calif. The 2012 list features some substantial changes from 2011 and previous years, as end users focus on applying new approaches to data center design and management. Six entries in the Top 10 are new, including the number one challenge. Here's the list:
1. Facilities and IT Alignment: One of the oldest problems in the data center sector is the disconnect between facilities and IT, which effectively separates workloads from cost. This issue is focused on end users rather than vendors, who have traditionally been the intended audience for the Top 10. Large organizations like Microsoft, Yahoo, eBay, Google and Facebook have eliminated many of the barriers between facilities and IT, a trend that needs to penetrate the rest of the industry, Data Center Pulse says.
2. Simple, Top-Level Efficiency Metric: Data Center Pulse has proposed a new metric, Service Efficiency, which was reviewed by 50 DCP members during a two-and-a-half-hour discussion at the Summit. The proposal is a framework intended to show the actual MPG (miles per gallon) for work done in a data center. Details of the metric will be released publicly this summer. For more background, see this video.
3. Standardized Stack Framework: One of Data Center Pulse's key initiatives has been the development of its Data Center Stack, similar to the OSI Model that divides network architecture into seven levels. The standardized Data Center Stack segments operations into seven layers to help align facilities and IT. Version 2.1 has been released, and DCP intends to continue working with other industry groups like The Green Grid to advance the stack.
4. Move from Availability to Resiliency: With many web applications now spread across multiple facilities or regions, the probability of failure in full systems has become more important than specific site uptime, according to DCP. That means a shift toward focusing on application resiliency, as opposed to the performance of specific buildings. Aligning applications to availability requirements can lower costs, even as it increases the uptime of a service. (A quick illustration of the multi-site arithmetic follows this list.)
5. Renewable Power Options: The lack of cost-effective renewable power is a growing problem for data centers, which use enormous volumes of electricity. Data Center Pulse sees potential for progress in approaches that have worked in Europe, where renewable power is more readily available than in the U.S. These include focusing business development opportunities at the state level, and encouraging alignment between end users, utilities, government and developers.
6. “Containers” vs. Brick & Mortar: Are containers and modular designs a viable option? As more companies adopt modular approaches, Data Center Pulse notes that “one size does not fit all, but it is fitting quite a bit more.” For many companies, the best strategy is likely to be a hybrid approach that combines modular designs with traditional data center space for different workloads.
7. Hybrid DC Designs: The hybrid approach applies to modular designs, but also the Tier system and the redundancy of mechanical and electrical systems. A growing number of data centers are saving money by segmenting facilities into zones with different levels of redundancy appropriate to defined workloads. “Modularity, multi-tier, application resiliency, standardization and supply chain are driving different discussions,” notes DCP. “Traditional approach to designing and building data centers may be missing business opportunities.”
8. Liquid Cooled IT Equipment Options: For many IT operations, the goal is to increase the amount of work done per watt. That is leading to higher power densities, which are testing the limits of air cooling. As more data centers come to resemble high performance computing (HPC) installations, DCP says direct liquid cooling will become a more attractive option.
9. Free Cooling "Everywhere": The recent Green Grid case study on the eBay Project Mercury data center demonstrates that 100 percent free cooling is possible in places like Phoenix, where the temperature can exceed 100 degrees in the summer. Data Center Pulse says that as data center designers challenge assumptions, new designs and products will support free cooling everywhere.
10. Converged Infrastructure Intelligence: Data center operators now need to treat their infrastructure as a single system, and need to be able to measure and control many elements of their facilities. That means converging the infrastructure instrumentation and control systems, and connecting them to IT systems. Data center infrastructure management (DCIM) is part of this trend, but it will also be important to standardize connections and protocols to connect components, according to Data Center Pulse.
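As a purely conceptual sketch of what a converged view might look like, the snippet below joins facility-side and IT-side readings per rack into one record; the field names and data sources are hypothetical, and a real system would pull these values from BMS/DCIM and IT monitoring tools over standardized protocols, which is exactly the gap DCP points to.

```python
from dataclasses import dataclass

# Conceptual sketch only: a single "converged" view that joins facility-side
# readings (power, cooling) with IT-side readings (server load) per rack.
# Field names and values here are hypothetical placeholders.

@dataclass
class RackSnapshot:
    rack_id: str
    facility_power_kw: float  # measured at the rack PDU (facility side)
    it_power_kw: float        # reported by the servers (IT side)
    inlet_temp_c: float

    @property
    def overhead_kw(self) -> float:
        # Power drawn at the PDU that never reaches IT load (losses, fans, etc.)
        return self.facility_power_kw - self.it_power_kw

racks = [
    RackSnapshot("A01", facility_power_kw=12.4, it_power_kw=11.1, inlet_temp_c=23.5),
    RackSnapshot("A02", facility_power_kw=18.9, it_power_kw=16.8, inlet_temp_c=26.1),
]

for r in racks:
    flag = " <-- check cooling" if r.inlet_temp_c > 25 else ""
    print(f"{r.rack_id}: overhead {r.overhead_kw:.1f} kW, inlet {r.inlet_temp_c} C{flag}")
```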
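Returning to item 4's point about resiliency: if an application runs across several facilities and any one of them can carry the load, the chance that all of them fail at once is far smaller than any single building's downtime. A small sketch, assuming independent sites with an illustrative "three nines" availability each:

```python
# Illustration of why resiliency across sites can matter more than any single
# building's uptime: with an application spread over N independent facilities,
# the chance that all of them are down at once shrinks geometrically.
# The per-site availability and the independence assumption are simplifications.

def combined_availability(site_availability: float, sites: int) -> float:
    """Probability that at least one of `sites` independent facilities is up."""
    return 1 - (1 - site_availability) ** sites

single_site = 0.999  # assumed "three nines" per building
for n in (1, 2, 3):
    print(f"{n} site(s): {combined_availability(single_site, n):.9f}")
# 1 site: 0.999000000, 2 sites: 0.999999000, 3 sites: 0.999999999
```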