Monday, April 30, 2012

Driving Under the Limit: Data Center Practices That Mitigate Power Spikes

 April 30, 2012

 

Every server in a data center runs under an allotted power cap sized to accommodate peak-hour power consumption. When an unexpected event causes a power spike, however, data center managers can face serious problems. In the summer of 2011, for example, unusually high temperatures in Texas wreaked havoc in data centers: the increased demand from air conditioning units strained servers that were already running close to capacity.

Preparedness for unexpected power events requires the ability to rapidly identify the individual servers at risk of power overload or failure. A variety of proactive energy management best practices can not only provide insights into the power patterns leading up to problematic events, but can offer remedial controls that avoid equipment failures and service disruptions.

Best Practice: Gaining Real-Time Visibility

Dealing with power surges requires a full understanding of your nominal data center power and thermal conditions. Unfortunately, many facilities and IT teams have only minimal monitoring in place, often focusing solely on return air temperature at the air-conditioning units.

The first step toward efficient energy management is to take advantage of all the power and thermal data provided by today’s hardware. This includes real-time server inlet temperatures and power consumption data from rack servers, blade servers, and the power-distribution units (PDUs) and uninterruptible power supplies (UPSs) related to those servers. Data center energy monitoring solutions are available for aggregating this hardware data and for providing views of conditions at the individual server or rack level, or for user-defined groups of devices.

Unlike predictive models that are based on static data sets, real-time energy monitoring solutions can uncover hot spots and computer room air handler (CRAH) failures early, while proactive actions can still be taken.

By aggregating server inlet temperatures, an energy monitoring solution can help data center managers create real-time thermal maps of the data center. The solutions can also feed data into logs to be used for trending analysis as well as in-depth airflow studies for improving thermal profiles and for avoiding over- or undercooling. With adequate granularity and accuracy, an energy monitoring solution makes it possible to fine-tune power and cooling systems, instead of necessitating designs to accommodate the worst-case or spike conditions.
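
To make the aggregation step concrete, here is a minimal sketch (in Python, with hypothetical rack and server names and a simplified 27°C inlet guideline) of how per-server inlet temperatures might be rolled up into a rack-level thermal map that flags potential hot spots; an actual monitoring solution would pull these readings from the servers, PDUs and UPSs themselves.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical real-time readings: (rack_id, server_id, inlet_temp_C)
readings = [
    ("rack-01", "srv-01", 22.5), ("rack-01", "srv-02", 23.1),
    ("rack-07", "srv-11", 27.9), ("rack-07", "srv-12", 29.4),
]

INLET_GUIDELINE_C = 27.0  # simplified upper guideline for inlet air; adjust per site policy

def thermal_map(samples):
    """Aggregate per-server inlet temperatures into a per-rack view."""
    by_rack = defaultdict(list)
    for rack, _server, temp in samples:
        by_rack[rack].append(temp)
    return {rack: {"avg": mean(temps), "max": max(temps)} for rack, temps in by_rack.items()}

for rack, stats in thermal_map(readings).items():
    flag = "HOT SPOT" if stats["max"] > INLET_GUIDELINE_C else "ok"
    print(f"{rack}: avg={stats['avg']:.1f}C max={stats['max']:.1f}C [{flag}]")
```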

Best Practice: Shifting From Reactive to Proactive Energy Management

Accurate, real-time power and thermal usage data also makes it possible to set thresholds and alerts, and to introduce controls that enforce policies for optimized service and efficiency. Real-time server data provides immediate feedback about power and thermal conditions that can affect server performance and, ultimately, end-user services.
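
As an illustration of threshold-driven alerting, the sketch below uses made-up warning and critical wattage thresholds for a group of servers; the simulated samples stand in for the real-time telemetry a production solution would collect from PDUs or the nodes themselves.

```python
# Hypothetical thresholds for a monitored group of servers (watts).
WARN_WATTS = 4500
CRIT_WATTS = 5000

def evaluate(power_w):
    """Map a group power reading to an alert level; a real solution could also enforce a cap."""
    if power_w >= CRIT_WATTS:
        return "critical"
    if power_w >= WARN_WATTS:
        return "warning"
    return "normal"

# Simulated samples standing in for live telemetry.
for sample in (4200, 4650, 5100):
    level = evaluate(sample)
    print(f"group power {sample} W -> {level}")
    if level == "critical":
        print("  action: apply group power cap / page on-call staff")
```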

Proactively identifying hot spots before they reach critical levels allows data center managers to take preventative actions and also creates a foundation for the following:

  • Managing and billing for services based on actual energy use
  • Automating actions relating to power management in order to minimize the impact on IT or facilities teams
  • Integrating data center energy management with other data center and facilities management consoles

Best Practice: Non-Invasive Monitoring

To avoid affecting the servers and end-user services, data center managers should look for energy management solutions that support agentless operation. Advanced solutions facilitate integration, with full support for Web Services Description Language (WSDL) APIs, and they can coexist with other applications on the designated host server or virtual machine.

Today’s regulated data centers also require that an energy management solution offer APIs designed for secure communications with managed nodes.

Best Practice: Holistic Energy Optimization

Real-time monitoring provides a solid foundation for energy controls, and state-of-the-art energy management systems enable dynamic adjustment of the internal power states of data center servers. The control functions support the optimal balance of server performance and power—and keep power under the cap to avoid spikes that would otherwise exceed equipment limits or energy budgets.

Intelligent aggregation of data center power and thermal data can be used to drive optimal power management policies across servers and storage area networks. In real-world use cases, intelligent energy management solutions are producing 20–40 percent reductions in energy waste.

These increases in efficiency ameliorate the conditions that may lead to power spikes, and they also enable other high-value benefits including prolonged business continuity (by up to 25 percent) when a power outage occurs. Power can also be allocated on a priority basis during an outage, giving maximum protection to business-critical services.

Intelligent power management for servers can also dramatically increase rack density without exceeding existing rack-level power caps. Some companies are also using intelligent energy management approaches to introduce power-based metering and energy cost charge-backs to motivate conservation and more fairly assign costs to organizational units.

Best Practice: Decreasing Data Center Power Without Affecting Performance

A crude energy management solution might mitigate power surges by simply capping the power consumption of individual servers or groups of servers. Because performance is directly tied to power, however, blunt caps can also throttle the services the business depends on. An intelligent energy management solution instead dynamically balances power and performance in accordance with the priorities set by the particular business.

The features required for fine-tuning power in relation to server performance include real-time monitoring of actual power consumption and the ability to maintain maximum performance by dynamically adjusting the processor operating frequencies. This requires a tightly integrated solution that can interact with the server operating system or hypervisor using threshold alerts.
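
A highly simplified sketch of that kind of closed loop is shown below; the power cap, the list of frequencies and the telemetry function are all hypothetical, and a real solution would act through the server's node manager, operating system or hypervisor rather than a random-number stand-in.

```python
import random
import time

POWER_CAP_W = 350                 # hypothetical per-server power cap
P_STATES = [2.6, 2.2, 1.8, 1.4]   # available core frequencies in GHz, highest first

def read_power_w():
    """Stand-in for real power telemetry from the platform."""
    return random.uniform(300, 400)

def control_loop(steps=5):
    level = 0  # index into P_STATES; 0 = full performance
    for _ in range(steps):
        power = read_power_w()
        if power > POWER_CAP_W and level < len(P_STATES) - 1:
            level += 1   # over the cap: step frequency down
        elif power < POWER_CAP_W * 0.9 and level > 0:
            level -= 1   # comfortable headroom: restore performance
        print(f"power={power:.0f} W -> run cores at {P_STATES[level]} GHz")
        time.sleep(0.1)

control_loop()
```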

Field tests of state-of-the-art energy management solutions have proven the efficacy of an intelligent approach for lowering server power consumption by as much as 20 percent without reducing performance. At BMW Group,[1] for example, a proof-of-concept exercise determined that an energy management solution could lower consumption by 18 percent and increase server efficiency by approximately 19 percent.

Conversely, by deliberately lowering performance levels, data center managers can reduce power more dramatically to ride out power surges, or to adjust server allocations on the basis of workloads and priorities.

Conclusions

Today, the motivations for avoiding power spikes include improving the reliability of data center services and curbing runaway energy costs. In the future, energy management will likely become more critical with the consumerization of IT, cloud computing and other trends that put increased service—and, correspondingly, energy—demands on the data center.

Bottom line, intelligent energy management is a critical first step to gaining control of the fastest-increasing operating cost for the data center. Plus, it puts a data center on a transition path towards more comprehensive IT asset management. Besides avoiding power spikes, energy management solutions provide in-depth knowledge for data center “right-sizing” and accurate equipment scheduling to meet workload demands.

Power data can also contribute to more-efficient cooling and air-flow designs and to space analysis for site expansion studies. Power is at the heart of optimized resource balancing in the data center; as such, the intelligent monitoring and management of power typically yields significant ROI for best-in-class energy management technology.

Friday, April 27, 2012

Netherlands next on Colt’s data center list

Modular build, made in UK, strategically placed to service European customers

27 April 2012 by Penny Jones - DatacenterDynamics

Colt is making good on its promise to expand data center operations in Europe with a new data center facility in the Netherlands, giving customers local access to Colt services, including Colt’s private cloud.

Its first Modular Data Center facility in the country will sit on a 39,000 sq m site and will be ready for operation in 2013. Colt has already secured 32MVA power for the site.

Colt said it will build out its data center in 2,000 sq m allotments, with 20 halls planned so far, bringing capacity to 10,000 sq m. It will start with just one hall, and has already secured an anchor tenant for the carrier-neutral site.

The modules themselves, which come complete with power and cooling, will be constructed and tested in the UK before being transported to the Netherlands site.

Colt claims its Modular Data Centres can be delivered within four months. “We’re not talking about ‘container’ based data centers either: we mean large-scale, traditional data centers built in a radical new way, using standardized manufacturing techniques and components on a production line,” Colt said on its website.

It can provide these modules in 125 sq m, 250 sq m and 375 sq m sizes, customized for client requirements.

Colt Data Centre Services Executive VP Bernard Geoghegan said the Netherlands offers Colt a strategic location between four major European cities.

“The site offers an ideal location for our first modular data center deployment in the Netherlands,” Geoghegan said.

“The expansion capability of the site together with our unique modular approach means our customers will be able to scale and add additional capacity as demand requires.”

At VMworld in Copenhagen in 2011, Colt Principal Cloud Specialist Steve Hughes said Colt would be expanding its footprint in Europe by the end of 2012 to help companies overcome local regulations regarding data storage and privacy.

"If it matters where your data resides, if you have latency issues for applications running between your data center and the cloud data center, if you do not necessarily want to use the internet for your data – Colt’s enterprise cloud services give you those options," Hughes said.

"Federated cloud computing in some form will be the future of cloud computing  – whether it is service providers sharing workload between strategically located data centers or co-operating to provide interoperable services."

Colt’s newest data center will be its 20th in Europe, where it operates in 39 major cities with an overall footprint of 28,000 sq m.

Wednesday, April 25, 2012

Cold Aisle Containment System Performance Simulation

By: Michael Potts
April 25th, 2012

In an attempt to reduce inlet temperatures, BayCare Health System in Tampa, Florida, installed a cold aisle containment system (CACS) in a section of its data center. Results were mixed, with temperatures improving in some areas but actually increasing in others. To understand these results, airflow management solutions provider Eaton simulated the data center’s performance using Future Facilities’ 6SigmaDC computational fluid dynamics (CFD) software. The simulation’s results matched those of the physical data center, confirming that CFD software could be used to diagnose the data center’s cooling problems.


Future Facilities Website: http://www.futurefacilities.com/

This paper from Eaton details the results of its CFD diagnosis of the BayCare facility, describing the process of analysis in depth and offering solutions for the cooling infrastructure. First, the process of cold aisle containment installation is outlined, with details from both the simulation and the study of the physical data center. Next, the paper explains Eaton’s performance simulation and measurement of the facility after installation, modeling the center’s airflow, device models and locations, and temperatures. Lastly, a framework for the full diagnosis is presented, offering explanations for the unexpected temperature increases.

Learn the full process of data center diagnosis in this detailed simulation. Click here to download this paper from Eaton on the diagnosis of the incorrectly performing cold aisle containment system at BayCare Health System.


Data Center Executives Must Address Many Issues in 2012

Analyst(s): Mike Chuba


Seemingly insatiable demand for new workloads and services, at a time when most budgets are still constrained, is the challenge facing most data center executives. We look at the specific areas they identified going into 2012.

Overview

Data center executives are caught in an awkward phase of the slow economic recovery, as they try to support new initiatives from the business without a commensurate increase in their budgets. Many will need to improve the efficiency of their workloads and infrastructure to free up money to support these emerging initiatives.

Key Findings

  • Data center budgets are not growing commensurate with demand.
  • Expect an 800% growth in data over the next five years, with 80% of it being unstructured.
  • Tablets will augment desktop and laptop computers, not replace them.
  • Data centers can consume 100 times more energy than the offices they support.
  • The cost of power is on par with the cost of the equipment.

Recommendations

  • It is not the IT organization's job to arrest the creation or proliferation of data. Rather, data center managers need to focus on storage utilization and management to contain growth and minimize floor space, while improving compliance and business continuity efforts.
  • Focus short term on cooling, airflow and equipment placement to optimize data center space, while developing a long-term data center design strategy that maximizes flexibility, scalability and efficiency.
  • Put in place security, data storage and usage guidelines for tablets and other emerging form factors in the short term, while deciding on your long-term objectives for support.
  • Use a business impact analysis to determine when, where and why to adopt cloud computing.

What You Need to Know

New workloads that are key to enterprise growth, latent demand for existing workloads as the general economy recovers, increased regulatory demands and the explosion in data growth all pose challenges for data center executives at a time when the budget is not growing commensurate with demand. Storage growth continues unabated. It is not unusual to hear sustained growth rates of 40% or more per year. To fund this growth, most organizations will have to reallocate their budgets from other legacy investment buckets. At the same time, they must focus on storage optimization to manage demand, availability and efficiency.

Analysis

"Nothing endures but change" is a quote attributed to Heraclitus, who lived over 2,500 years ago. However, his words seem applicable to the data center executive today. Pervasive mobility, a business environment demanding access to anything, anytime, anywhere and the rise of alternative delivery models, such as cloud computing, have placed new pressures on the infrastructure and operations (I&O) organization for support and speed. At the same time, a fitful economic environment has not loosened the budget purse strings sufficiently to fund all the new initiatives that many I&O organizations have identified.

This challenge of supporting today's accelerated pace of change and delivering the efficiency, agility and quality of services their business needs to succeed was top of mind for the more than 2,600 data center professionals gathered in Las Vegas on 5 December to 8 December 2011 for the annual Gartner U.S. Data Center Conference. It was a record turnout for this annual event, now in its 30th year. Our conference theme, "Heightened Risk, Unbounded Opportunities, Managing Complexity in the Data Center," spoke to the difficult task our attendees face while addressing the new realities and emerging business opportunities at a time when the economic outlook is still uncertain. The data center is being reshaped, as the transformation of IT into a service business has begun.

Our agenda reflected the complex, interrelated challenges confronting attendees. Attendance was particularly strong for the cloud computing and data center track sessions, followed by the storage, virtualization and IT operations track. The most popular analyst-user roundtables focused on these topics, and analysts in these spaces were in high demand for one-on-one meetings. We believe that the best-attended sessions and the results of the surveys conducted at the conference represent a reasonable benchmark for the kinds of issues that organizations will be dealing with in 2012.

We added a new track this year focused on the impact of mobility on I&O. The rapid proliferation of smart devices, such as tablets and smartphones, is driving dramatic changes in business and consumer applications and positively impacting bottom-line results. Yet, I&O plays a critical role in supporting these applications rooted in real-time access to corporate data anytime and anywhere and in any context, while still providing traditional support to the existing portfolio of applications and devices. As the next billion devices wanting access to corporate infrastructure are deployed, I&O executives have an opportunity to exhibit leadership and innovation — from contributing to establishing corporate standards, to anticipating the impact on capacity planning, to minimizing risk.

Electronic interactive polling is a significant feature of the conference, allowing attendees to get instantaneous feedback on what their peers are doing. The welcome address posed a couple of questions that set the tone for the conference. Attendees were first asked how their 2012 I&O budgets compared with their previous years' budgets (see Figure 1).

Figure 1. Budget Change in Coming Year vs. Current Year Spending

Source: Gartner (January 2012)

Comparing year-over-year data, we find almost identical numbers reporting budgetary growth (42%) and reduced budgets (26% vs. 25%). The most recent results reflect a gradual recovery in a still-challenging economic climate. While hardly robust, it is a marked improvement from the somber mood that most end-user organizations were in at the end of 2008 and entering 2009. Subsequent track sessions that focused on cost optimization strategies and best practices were universally well attended throughout the week.

Now, modest budget changes may not be enough to sustain current modes of IT operations, let alone support emerging business initiatives. Organizations need to continue to look closely at improving efficiencies and pruning legacy applications that are on the back side of the cost-benefit equation, to free up the budget and lay the groundwork to support emerging workloads/applications.

The second issue we raised in the opening session was for attendees to identify the most significant data center challenge they will face in 2012, compared with previous years (see Figure 2; note that the voting options changed from year to year).

Figure 2. Most Significant Data Center Challenge in Coming Year (% of Respondents)

Source: Gartner (January 2012)

What was interesting was the more balanced distribution across the options. For those who have the charter to manage the storage environment, managing storage growth is an extremely challenging issue.

Top Five Challenges

NO. 1: DATA GROWTH

Data growth continues unabated, leaving IT organizations struggling to deal with how to fund the necessary storage capacity, how to manage these devices if they can afford them, and how they can archive and back up this data. Managing and storing massive volumes of complex data to support real-time analytics is increasingly becoming a requirement for many organizations, driving the need for not just capacity, but also performance. New technologies, architectures and deployment models can enable significant changes in storage infrastructure and management best practices now and in coming years, and assist in addressing these issues. We believe that it is not the job of IT to arrest the creation or proliferation of data. Rather, IT should focus on storage utilization and management to contain growth and minimize floor space, while improving compliance and business continuity efforts.

Tactically, prioritize deleting data that has outlived its usefulness, and exploit technologies that reduce redundant data.

NO. 2: DATA CENTER SPACE, POWER AND/OR COOLING

It is not surprising that data center space, power and/or cooling was identified as the second biggest challenge by our attendees. Data centers can consume 100 times more energy than the offices they support, which draws more budgetary attention in uncertain times. During the past five years, the power demands of equipment have grown significantly, imposing enormous pressures on the capacity of data centers that were built five or more years ago. Data center managers are grappling with cost, technology, environmental, people and location issues, and are constantly looking for ways to deliver a highly available, secure, flexible server infrastructure as the foundation for the business's mission-critical applications. On top of this is the building pressure to create a green environment. Our keynote interview with Frank Frankovsky, director of hardware design and supply chain at Facebook, drew considerable interest because of some of the novel approaches that company was taking to satisfy its rather unique computing requirements.

We recommend that data center executives focus short term on cooling, airflow and equipment placement to optimize their data center space, while developing a long-term data center design strategy that maximizes flexibility, scalability and efficiency. We believe that the decline in priority shown in the survey results reflects the fact that organizations have been focusing on improved efficiency of their data centers. Changes are being implemented and results are being achieved.

NO. 3: PRIVATE/PUBLIC CLOUD STRATEGY

Developing a private/public cloud strategy was the third most popular choice as the top priority, and mirrors the results we have seen in Gartner's separate surveys regarding the top technology priorities of CIOs. With many organizations well on their way to virtualized infrastructures, many are now either actively moving toward, or being pressured to move toward, cloud-based environments. Whether it is public, private or some hybrid version of cloud, attendees' questions focused on where do you go, how do you get there, and how fast should you move toward cloud computing.

We recommend that organizations develop a business impact analysis to determine when, where and why to adopt cloud computing. Ascertain where migrating or enhancing applications can deliver value, and look for the innovative applications that could benefit from unique cloud capabilities.

NO. 4 AND NO. 5: BUSINESS NEEDS

"Modernizing of our legacy applications" was fourth as the greatest challenge, and "Identifying and translating business requirements" was fifth and, in many ways, both relate to similar concerns. Meeting business priorities; aligning with shifts in the business; and bringing much-needed agility to legacy applications that might require dramatic shifts in architectures, processes and skill sets were common concerns among Data Center Conference attendees, in general.

We believe virtualization's decline as a top challenge reflects the comfort level that attendees have in the context of x86 server virtualization, and most of this conference's attendees are well down that path — primarily with VMware, but increasingly with other vendors as well. Our clients see the private cloud as an extension of their virtualization efforts; thus, interest in virtualization isn't waning, but is evolving to private cloud computing. Now is a good time to evaluate your virtualization "health" — processes, management standards and automation readiness. For many organizations, it is an appropriate time to benchmark their current virtualization approach against competitors and alternate providers, and broaden their virtualization initiatives beyond just the servers and across the portfolio — desktop, storage, applications, etc.

This year promises to be one of further market disruption and rapid evolution. Vendor strategies will be challenged and new paradigms will continue to emerge. To stay ahead of the industry curve, plan to join your peers at the 2012 U.S. Data Center Conference on 3 December to 6 December in Las Vegas.

6 Questions for Your Next Data Center Provider

William Dougherty is VP of Information Technology for RagingWire Data Centers, with over 15 years of experience designing, building, securing and operating high-availability computing systems. He tweets regularly on industry issues at @bdognet.

William Dougherty
RagingWire Data Centers

Tour any three data centers and you’ll be left scratching your head trying to differentiate between them. As a result, price and proximity become the primary decision points in an otherwise seemingly level playing field. However, there are crucial differences between data center providers that can drive up both your costs and your IT downtime risk.  Ask the following questions of your potential data center provider and you will keep your costs and your downtime risk as low as possible.

Which components of the data center facility are both concurrently maintainable and fault tolerant?

Many data centers claim to be N+1 or N+2 redundant.  Sounds great!  But data center facilities have to maintain their equipment, too.  The question you need to ask is, what impact does concurrent maintenance have on the fault tolerance of the data center?  Pose the following scenario on your facility tour:  generator #1, UPS #2, and CRAH unit #3 are unavailable due to maintenance; the facility loses utility power and generator #4, UPS #5 and CRAH unit #6 all fail to operate.  What happens to their customers?

Your critical IT infrastructure operates in a world where utility outages or equipment failures happen.  In the above case, N+1 redundancy won’t protect you.  Your IT infrastructure needs isolation from multiple simultaneous events (N+2 or 2N redundancy at every level).
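
The scenario above can be reduced to simple arithmetic: count the units of each subsystem that can still carry load once maintenance and failures are subtracted. The sketch below assumes a hypothetical load that needs four units of each subsystem; under the double event described, an N+1 design comes up short while an N+2 design still covers the load.

```python
def units_available(total, in_maintenance, failed):
    """Units of one subsystem (generator, UPS, CRAH) still able to carry load."""
    return total - in_maintenance - failed

def survives(total, required_n, in_maintenance, failed):
    """True if the remaining units can still carry the full critical load."""
    return units_available(total, in_maintenance, failed) >= required_n

REQUIRED_N = 4  # hypothetical: four units needed to carry the full critical load

# N+1 design: 5 units. One out for maintenance, one fails during the utility outage.
print("N+1 survives:", survives(total=5, required_n=REQUIRED_N, in_maintenance=1, failed=1))  # False

# N+2 design: 6 units, same double event.
print("N+2 survives:", survives(total=6, required_n=REQUIRED_N, in_maintenance=1, failed=1))  # True
```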

What are the average and maximum power densities of the facility on a watts per square foot AND watts per cabinet basis?

Data centers built about 10 years ago were designed for much lower rack power densities than are required today. As a work-around, some facilities space cabinets farther apart (while charging you more) to accommodate higher-density clients. Most data centers are built to support an average of 100W – 175W per square foot, with more modern data centers supporting an average of 225W per square foot, scalable on an individual basis up to 400W per square foot.

Cabinet power densities are equally important. Ten years ago, a 2kW cabinet was sufficient to power a full 42U of x86 servers. With today’s multi-core, high density blade servers, 8kW – 10kW is the new rack power minimum.  Expect your required power density to climb and make sure your data center has the infrastructure to grow with you.
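
The two densities are easy to relate with a rough conversion; the sketch below assumes a hypothetical 25 sq ft of floor space per cabinet (the cabinet plus its share of aisle space) and maps the facility-level figures quoted above onto per-cabinet kilowatts.

```python
SQ_FT_PER_CABINET = 25  # assumed footprint per cabinet, including its share of aisles

def cabinet_kw(watts_per_sq_ft, sq_ft_per_cabinet=SQ_FT_PER_CABINET):
    """Rough kW available per cabinet for a given floor-level design density."""
    return watts_per_sq_ft * sq_ft_per_cabinet / 1000.0

for density in (100, 175, 225, 400):
    print(f"{density} W/sq ft -> ~{cabinet_kw(density):.1f} kW per cabinet")
```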

How often does the data center load test its generators?

Fuel consumption and expensive test equipment make load testing generators a costly maintenance item. Sometimes, in lieu of regular load testing, data centers will use unexpected utility power outages as a way to “load test” their generators on client IT loads. If they aren’t regularly load testing, they are likely to identify generator problems only when utility power fails, which is precisely the wrong time to find an issue. Ask your data center provider if they put every generator on an extended load test – not just a start-up – at least quarterly.

What are the highest risk natural disasters for the area, and what has the data center done to mitigate their impact?

Every data center is subject to natural disasters, but some are more vulnerable than others. The Uptime Institute has published an excellent study of composite risk from earthquakes, hurricanes, tornadoes, snow and other disasters. There are relatively few regions in the U.S. that fit in the low-risk zones.  Another scenario: the data center survives a massive earthquake, but utility power is still out with no estimated repair timeline.  How long can the facility’s generators run on normal on-hand fuel supply and how many suppliers are contracted for refuel services?  It is critically important that you a) understand what disaster scenarios are likely for the facility; and b) work with your provider to make contingency plans based on likely risks.

What are the minimum skill sets of the remote hands and eyes staff?

It is an absolute certainty that at some point your equipment will need to be physically serviced. You can either drive to the data center or use their remote hands and eyes services. Some data centers cut corners by using security guards to provide remote services.  You want to make sure the remote hands staff provided by the data center consists of true IT professionals. Ask for minimum job requirements and speak with the service manager so that you know who is answering the phone at 2 AM.  Quality remote hands staff can reduce the relative importance of proximity in your decision making process.

What certifications has the data center earned, and do they undergo annual audits to maintain them?

SSAE 16, PCI DSS, LEED Gold, Energy Star, FISMA, HIPAA, SCIF, Tier IV. Each standard is a useful tool in differentiating your data center choices.  If you process credit cards, you want your upstream providers to support your PCI compliance by maintaining their own PCI ROC.  If you are a financial organization, you need your data center to be SSAE 16.  If your company is environmentally conscious, especially if you purchase carbon credits to offset your power consumption, you want a data center with EPA ENERGY STAR and LEED Gold certifications. Ask your data center for proof of their certifications.  This information is invaluable because it represents an independent analysis of the facility’s quality, reliability and security.

Bonus question #7 – Does the physical security include sharks with frickin’ laser beams attached to their heads?

Why? Because it would be cool if it did. Look under the raised floor sometime. Maybe there’s something interesting down there...

Monday, April 23, 2012

Power to your cloud

Eaton Power provides guidance on powering the cloud as part of FOCUS magazine’s special on Cloud Infrastructure

23 April 2012 by Ambrose McNevin - DatacenterDynamics

Chris Loeffler of Eaton Power told FOCUS that, when considering its cloud strategy, the first question Eaton asked was: the cloud — is it different?

Once the cloud was identified as different, efforts turned to understanding the speed of cloud architecture adoption. That meant changing the view of IT infrastructure through software development, for example to manage virtualization, or specifically to allow the management of power within a virtualized system. The company’s product for managing power across multiple devices is, hence, aptly named Foreseer.

“Being in the critical power and electrical infrastructure space we already build robust infrastructure. We really started looking heavily at our software platform and understanding what our customers need from it,” Loeffler says. The key is to make everything more automated – so that the virtual layer can react to what’s going on in the hardware infrastructure.

Eaton Power’s advice on what to consider for powering your cloud
Modular Uninterruptible Power Supply (UPS) systems: 
These let you add capacity quickly and incrementally. A modular scalable UPS for a small cloud environment may provide up to 50kW or 60kW of capacity in 12kW building blocks that fit in standard equipment racks. IT personnel can simply plug in another 12kW unit, growing capacity (in this example) from as little as 12kW up to 60kW N+1.
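
The building-block arithmetic is straightforward; the sketch below, using the 12kW module size from the example, estimates how many modules a given IT load needs once one extra module is added for N+1 redundancy.

```python
import math

MODULE_KW = 12  # building-block size from the example above

def modules_needed(load_kw, redundancy_modules=1, module_kw=MODULE_KW):
    """Modules required to carry load_kw, plus extras for N+1 style redundancy."""
    n = math.ceil(load_kw / module_kw)
    return n + redundancy_modules

for load in (12, 36, 60):
    m = modules_needed(load)
    print(f"{load} kW IT load -> {m} x {MODULE_KW} kW modules ({m * MODULE_KW} kW installed, N+1)")
```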

Deploy a passive cooling system: Passive cooling systems employ enclosures equipped with a sealed rear door and a chimney, which captures hot exhaust air from servers and vents it directly back into the return air ducts on CRAC units. The CRAC units then chill the exhaust air and re-circulate it.

Passive systems typically require a strong air flow “seal” from the front of the cabinet to the rear. By segregating hot air from cool air more thoroughly than ordinary hot aisle-cold aisle techniques, a properly-designed passive cooling system can cost-effectively keep even a blazingly hot 30kW server rack running at safe temperatures.
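
To see why the seal matters, consider the airflow a 30kW rack demands. The back-of-the-envelope sketch below uses the standard sensible-heat approximation (CFM ≈ 3.412 × watts / (1.08 × ΔT in °F)) and assumes a 25°F temperature rise across the servers; the rise is an assumption, not a figure from the article.

```python
def required_airflow_cfm(load_watts, delta_t_f):
    """Sensible-heat approximation: CFM ~= 3.412 * W / (1.08 * delta-T in F)."""
    return 3.412 * load_watts / (1.08 * delta_t_f)

# 30 kW rack from the text, assuming a 25 F (about 14 C) rise across the servers.
print(f"{required_airflow_cfm(30_000, 25):.0f} CFM must pass through the cabinet")
```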

Construct multiple facility rooms: Large data centers like those that supply public cloud services often house UPS equipment in a dedicated facility room adjacent to the server floor. Setting up two facility rooms, one for UPS and power system electrical components and the other for UPS batteries, can be an even more efficient arrangement.

While UPS electronics can typically operate safely at 35°C (95°F), UPS batteries must usually be kept at 25°C (77°F).

Conduct a power chain audit: Organizations planning to add a cloud infrastructure to an existing data center should include a thorough power chain audit in their pre-deployment planning. A power chain audit can help you evaluate your power systems and determine which, if any, should be upgraded, augmented or modernized.

Add redundancy to your power architecture: N+1: An N+1 architecture includes one more UPS, generator or other power component than the minimum required to keep server equipment up and running. An N+1 architecture is often sufficient for the needs of a small or medium cloud environment.

2(N): A good choice for large cloud environments, 2(N) architectures feature two separate but identical power paths, each of which is capable of supporting an entire infrastructure on its own. Under normal conditions, both paths operate at 50% of capacity. Should one path experience planned or unplanned downtime, the other can compensate by temporarily running at 100% of capacity.

Deploy replication software: Use software-based redundancy techniques such as replication. Replication solutions continuously capture changes as they occur on protected servers and then replicate them in near real time to backup servers.

Utilize live migration software: Capitalizing on the live migration functionality built into many server virtualization solutions is another effective software-based reliability strategy. Live migration systems like VMware’s vMotion solution enable administrators to move virtual servers almost instantaneously from one physical host to another in response to technical issues or maintenance requirements.

Integrated management software: Many cloud operators use separate management tools to monitor their server and power environments but integrated solutions are now available that allow administrators to manage physical servers, virtual servers, UPSs, PDUs and more all through a single console.

Power to your cloud @eatoncorp datacenter dcim

Friday, April 20, 2012

Chatsworth Products Brings Efficiency to Interop Las Vegas with Latest Aisle Containment Solution http://bit.ly/JU4Xig
FlexPod Is First Validated DataCenter Infrastructure for Private Cloud Enabled by System Center 2012 http://ping.fm/X6FHj @cisco @NetApp

Thursday, April 19, 2012

Applying Uptime Institute Tier Topology to Modular Data Centers http://bit.ly/IpSogO datacenter datacenters datacentre @UptimeInstitute

Friday, April 13, 2012

DCIM: From fragmentation to convergence http://bit.ly/J5AJsd dcim datacenter datacentre cloud energy @UptimeInstitute

Friday, April 6, 2012

PUE is about Efficiency, Not Performance or Cost datacenter dcim http://bit.ly/HjpYTh

Monday, April 2, 2012

Blockbuster Quarter for Data Center Stocks datacenter cloud IT http://bit.ly/H9TVtY