
Thursday, April 11, 2013

Converting IT pain to operational gain

Summary: Advances in technology are outpacing the adoption of new capabilities. The issue: Large companies can’t keep up with the changes and are stuck running legacy information technology architectures and old datacenter technology.


By  for Transforming the Datacenter | April 11, 2013 — 16:05 GMT (09:05 PDT)
This content is produced by our sponsor Microsoft.  It is not written by ZDNet editorial staff.
Advances in technology are outpacing the adoption of new capabilities. The issue: Large companies can’t keep up with the changes and are stuck running legacy information technology architectures and old datacenter technology. The moving parts involved include:
  • Budget
  • Technical Transformation
  • Architecture
Each of these has significant bearing on the future. We have been riding the leading edge of a technology wave that is changing how we work, the computing devices we use, and the back-end systems that support our operations; how we adapt to that wave will separate winners from losers.
The changes have come quickly, and the innovations continue, often faster than big companies can respond. Much of this has happened during a slow economy and at a time when controlling costs is front and center on every IT manager’s to-do list. To further complicate things, government regulations are mandating long-term retention of data, which stresses capacity. Business managers are also demanding more sophisticated analyses of all this information to help them better market to customers. To top it off, ad hoc design changes generally lead to increased complexity over time, which increases the probability of error and triggers additional operational costs.
While these are not trivial matters, there is plenty of hope despite the challenges.
IT decision-makers can even achieve transformation to some extent with their current infrastructures for now, as long as there is some flexibility and agility baked in, and the platform is stable.
As the economy begins to improve and companies can no longer ignore the depth of technical transformation that has occurred in recent years, many executives have become impatient at the underperformance of their datacenters. While there are no quick fixes, the good news is that by taking a fresh look at IT strategy and the design of the datacenter, it becomes clear that datacenter transformation is possible.
Transformation is essential to remain competitive in today’s fast-moving world. The way I see it, the path forward is to not think in terms of patching the datacenter. Organizations instead need to assess their IT needs in the context of longer-term business objectives and act accordingly.
Let’s admit it: it is no small endeavor. Your datacenter needs to help you acquire and analyze the business intelligence you need to run smart operations. You need to be able to support a mobile workforce and communicate with mobile customers. You will also need to learn how best to manage those communications, as well as support and leverage machine-to-machine communications.
The silver lining is that addressing these needs is the first step in architecting a datacenter that meets your operational requirements; only then should you think about what comes next.

Wednesday, June 6, 2012

#DCIM Yields Return on Investment


By: Michael Potts

As with any investment in the data center, the question of the return on the investment should be raised before purchasing a Data Center Infrastructure Management (DCIM) solution. In the APC white paper, “How Data Center Infrastructure Management Software Improves Planning and Cuts Operational Costs,” the authors highlight the savings from a DCIM solution saying, “The deployment of modern planning tools can result in hundreds of man-hours saved per year and thousands of dollars saved in averted downtime costs.”

DCIM will not transform your data center overnight, but it will begin the process. While it isn’t necessary to reach the full level of maturity before seeing benefits, the areas of benefit are significant and can bring results in the short-term. The three primary methods in which DCIM provides ROI are:

  • Improved Energy Efficiency
  • Improved Availability
  • Improved Manageability

DCIM LEADS TO IMPROVED ENERGY EFFICIENCY

In his blog, Dan Fry gets right to the heart of DCIM’s role in improving energy efficiency when he says, “To improve energy efficiency inside the data center, IT executives need comprehensive information, not isolated data. They need to be able to ‘see’ the problem in order to manage and correct it because, as we all know, you can’t manage what you don’t understand.”

The information provided by DCIM can help data center managers reduce energy consumption in several ways:

MATCHING SUPPLY WITH DEMAND

Oversizing is one of the biggest roadblocks to energy efficiency in the data center. In an APC survey of data center utilization, only 20 percent of respondents had a utilization of 60 percent or more, while 50 percent had a utilization of 30 percent or less. One of the primary factors for oversizing is the lack of power and cooling data to help make informed decisions on the amount of infrastructure required. DCIM solutions can provide information on both demand and supply to allow you to “right-size” the infrastructure, reducing overall energy costs by as much as 30 percent.
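
To make the "right-sizing" arithmetic concrete, here is a minimal Python sketch; the capacity and demand figures are assumptions for illustration, not values from the APC survey.

    # Illustrative right-sizing arithmetic (assumed figures, not survey data).
    provisioned_kw = 1000.0    # power and cooling capacity built out
    measured_peak_kw = 300.0   # actual peak demand reported by DCIM monitoring

    utilization = measured_peak_kw / provisioned_kw
    print(f"Infrastructure utilization: {utilization:.0%}")   # 30%

    # Right-size to measured demand plus a growth/redundancy margin.
    margin = 0.30
    right_sized_kw = measured_peak_kw * (1 + margin)
    print(f"Right-sized capacity: {right_sized_kw:.0f} kW instead of {provisioned_kw:.0f} kW")

    # Fixed losses (UPS, transformers, fans) scale with installed capacity,
    # so shedding idle capacity cuts energy cost even at the same IT load.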

IDENTIFYING UNDER-UTILIZED SERVERS

As many as 10 percent of servers are estimated to be “ghost servers,” servers which are running no applications yet still consume 70 percent or more of the resources of a fully utilized server. DCIM solutions can help find these under-utilized servers, which can be decommissioned, re-purposed or consolidated, as well as servers which do not have power management functionality enabled, reducing IT energy usage and delaying the purchase of additional servers.
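
Below is a minimal sketch of how ghost-server candidates might be flagged from DCIM inventory data; the record fields and thresholds are hypothetical, not a particular product's schema.

    # Flag candidate "ghost" servers: no applications, near-idle CPU,
    # yet still drawing a large fraction of a busy server's power.
    servers = [
        {"name": "web-01",    "avg_cpu_pct": 45.0, "avg_power_w": 310, "apps": ["nginx"]},
        {"name": "old-db-07", "avg_cpu_pct": 1.2,  "avg_power_w": 245, "apps": []},
        {"name": "batch-03",  "avg_cpu_pct": 3.5,  "avg_power_w": 260, "apps": []},
    ]

    IDLE_CPU_THRESHOLD = 5.0   # percent
    MIN_IDLE_POWER_W = 200     # idle draw can still be ~70% of a busy server's

    ghost_candidates = [
        s for s in servers
        if not s["apps"]
        and s["avg_cpu_pct"] < IDLE_CPU_THRESHOLD
        and s["avg_power_w"] >= MIN_IDLE_POWER_W
    ]

    for s in ghost_candidates:
        print(f"{s['name']}: review for decommissioning, re-purposing or consolidation")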

MEASURING THE IMPACT OF INFRASTRUCTURE CHANGES

DCIM tools can measure energy efficiency metrics such as Power Usage Effectiveness (PUE), Data Center Infrastructure Efficiency (DCiE) and Corporate Average Datacenter Efficiency (CADE). These metrics serve to focus attention on increasing the energy efficiency of data centers and to measure the results of changes to the infrastructure. In the white paper “Green Grid Data Center Power Efficiency Metrics: PUE and DCiE,” the authors lay out the case for the introduction of metrics to measure energy efficiency in the data center. The Green Grid believes that several metrics can help IT organizations better understand and improve the energy efficiency of their existing data centers as well as help them make smarter decisions on new data center deployments. In addition, these metrics provide a dependable way to measure their results against comparable IT organizations.
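
The two Green Grid ratios are straightforward to compute once total facility power and IT equipment power are metered; a minimal sketch with made-up readings:

    # PUE and DCiE as defined by The Green Grid:
    #   PUE  = total facility power / IT equipment power
    #   DCiE = IT equipment power / total facility power = 1 / PUE
    total_facility_kw = 1800.0   # example reading: everything behind the utility meter
    it_equipment_kw = 1000.0     # example reading: servers, storage, network gear

    pue = total_facility_kw / it_equipment_kw
    dcie = it_equipment_kw / total_facility_kw

    print(f"PUE  = {pue:.2f}")    # 1.80
    print(f"DCiE = {dcie:.0%}")   # 56%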

IMPROVED AVAILABILITY

DCIM solutions can improve availability in the following areas:

Understanding the Relationship Between Devices
A DCIM solution can help to answer questions such as “What systems will be impacted if I take the UPS down for maintenance?” It does this by understanding the relationship between devices, including the ability to track power and network chains. This information can be used to identify single points of failure and reduce downtime due to both planned and unplanned events.
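
Answering "what goes down if I take this UPS offline?" is essentially a reachability query over the power chain. A minimal sketch, assuming a small hypothetical topology:

    # Walk a hypothetical power-chain graph (parent -> children) to list
    # every device downstream of the unit being taken out of service.
    power_chain = {
        "UPS-A":   ["PDU-1", "PDU-2"],
        "PDU-1":   ["rack-12"],
        "PDU-2":   ["rack-13"],
        "rack-12": ["web-01", "web-02"],
        "rack-13": ["db-01"],
    }

    def impacted_devices(root, chain):
        """Return every device downstream of the given root."""
        seen, stack = set(), [root]
        while stack:
            node = stack.pop()
            for child in chain.get(node, []):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

    print(sorted(impacted_devices("UPS-A", power_chain)))
    # ['PDU-1', 'PDU-2', 'db-01', 'rack-12', 'rack-13', 'web-01', 'web-02']

Any downstream device that is reachable through only one path in such a graph is a single point of failure.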

Improved Change Management
When investigating an issue, examination of the asset’s change log allows problem managers to recommend a fix over 80 percent of the time, with a first fix rate of over 90 percent. This reduces the mean time to repair and increases system availability. DCIM systems which automate the change management process will log both authorized and unauthorized changes, increasing the data available to the problem manager and increasing the chances the issue can be quickly resolved.

Root Cause Analysis
One of the problems sometimes faced by data center managers is too much data. Disconnecting a router from the network might cause tens or hundreds of link-loss alarms for the downstream devices. It is often difficult to find the root cause amidst all of the “noise” associated with cascading events. By understanding the relationship between devices, a DCIM solution can help to narrow the focus to the single device — the router, in this case — which is causing the problem. By directing focus on the root cause, the problem can be resolved more quickly, reducing the associated downtime.

IMPROVED MANAGEABILITY

DCIM solutions can improve manageability in the following areas:

Data Center Audits
Regulations such as Sarbanes-Oxley, HIPAA and CFR-11 increase the requirements for physical equipment audits. DCIM solutions provide a single source of data that greatly reduces the time and cost to complete the audits. Those DCIM tools utilizing asset auto-discovery and asset location mechanisms such as RFID can further reduce the effort to perform a physical audit.

Asset Management
DCIM can be used to determine the best place to deploy new equipment based on the availability of rack space, power, cooling and network ports. It can then be used to track all of the changes from the initial request through deployment, system moves and changes, all the way through to decommissioning. The DCIM solution can provide detailed information on thousands of assets in the data center, including location, system configuration, how much power each is drawing, relationship to other devices, and so on, without having to rely on spreadsheets or home-grown tools.
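
Placement decisions reduce to filtering racks on remaining space, power, cooling and network ports and then ranking the survivors. A minimal sketch with hypothetical rack records:

    # Pick a rack for a new 2U server needing 450 W of power and cooling and 2 ports.
    # Rack records and figures are invented for the example.
    racks = [
        {"name": "A-01", "free_u": 10, "spare_power_w": 300,  "spare_cooling_w": 800,  "free_ports": 6},
        {"name": "A-02", "free_u": 4,  "spare_power_w": 1200, "spare_cooling_w": 1500, "free_ports": 1},
        {"name": "B-07", "free_u": 20, "spare_power_w": 900,  "spare_cooling_w": 1000, "free_ports": 8},
    ]

    need = {"u": 2, "power_w": 450, "cooling_w": 450, "ports": 2}

    candidates = [
        r for r in racks
        if r["free_u"] >= need["u"]
        and r["spare_power_w"] >= need["power_w"]
        and r["spare_cooling_w"] >= need["cooling_w"]
        and r["free_ports"] >= need["ports"]
    ]

    # Prefer the rack that leaves the least stranded power after placement.
    best = min(candidates, key=lambda r: r["spare_power_w"] - need["power_w"])
    print(best["name"])   # B-07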

Capacity Planning
With a new or expanded data center representing a substantial capital investment, the ability to postpone new data center builds could save millions of dollars. DCIM solutions can be used to reclaim capacity at the server, rack and data center levels to maximize space, power and cooling resources. Using actual device power readings instead of the overly conservative nameplate values will allow an increase in the number of servers supported by a PDU without sacrificing availability. DCIM tools can track resource usage over time and provide much more accurate estimates of when additional equipment needs to be purchased.
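
The difference between nameplate and measured power translates directly into rack and PDU capacity. A rough illustration with assumed figures:

    # Servers per PDU: nameplate rating vs. measured peak from DCIM readings.
    pdu_capacity_w = 5000
    nameplate_w_per_server = 750       # worst-case rating on the label
    measured_peak_w_per_server = 320   # actual peak observed over time
    headroom = 0.20                    # safety margin kept above the measured peak

    by_nameplate = pdu_capacity_w // nameplate_w_per_server
    by_measurement = int(pdu_capacity_w // (measured_peak_w_per_server * (1 + headroom)))

    print(f"Servers per PDU using nameplate values: {by_nameplate}")    # 6
    print(f"Servers per PDU using measured data:    {by_measurement}")  # 13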


This is the fifth article in the Data Center Knowledge Guide to DCIM series. To download the complete DCK Guide to DCIM click here.

Thursday, May 31, 2012

Selecting a #DCIM Tool to Fit your #DataCenter ?

How Do I Select a DCIM Tool to Fit My Data Center?

By: Michael Potts


Although similar in many respects, every data center is unique. In choosing a Data Center Infrastructure Management (DCIM) solution, data center managers might choose very different solutions based on their needs.  It is somewhat analogous to two people choosing a lawn care service. One might simply want the grass mowed once a week.  The other might want edging, fertilizing, seeding and other services in addition to mowing.  As a result, they may choose different lawn service companies or, at the least, expect to pay very different amounts for the service they will be receiving.  Before choosing a DCIM solution, it is important to first know what it is you want to receive from the solution.

It is also important to remember that DCIM cannot single-handedly do the job of data center management.  It is only part of the overall management solution. While the DCIM tools, or sometimes a suite of tools working together, are a valuable component, a complete management solution must also incorporate procedures which allow the DCIM tools to be effectively used.

CHOOSING A DCIM SOLUTION

It is important to remember that DCIM solutions are about providing information. The question which must be asked (and answered) prior to choosing a DCIM solution is “What information do I need in order to manage my data center?” The answer to this question is the key to helping you choose the DCIM solution which will best suit your needs. Consider the following two data centers looking to purchase a DCIM solution.

DATA CENTER A

Data Center A has a lot of older, legacy equipment which is being monitored using an existing Building Management System (BMS). The rack power strips do not have monitoring capability. The management staff currently tracks assets using spreadsheets and Visio drawings. The data has not been meticulously maintained, however, and has questionable accuracy. The primary management goal is getting a handle on the assets they have in the data center.

DATA CENTER B

Data Center B is a new data center. It has new infrastructure equipment which can be remotely monitored through Simple Network Management Protocol (SNMP). The racks are equipped with metered rack PDUs. The primary management goals are to (1) collect and accurately maintain asset data, (2) monitor and manage the power and cooling infrastructure, and (3) monitor server power and CPU usage.

DIFFERENT DCIM DEPLOYED

While both data center operators would likely benefit from DCIM, they may very well choose different solutions. The goal for Data Center A is to more accurately track the assets in the data center. They may choose to pre-load the data they have in spreadsheets and then verify the data. If so, they will want a DCIM which will allow them to load data from spreadsheets. If they feel their current data is not reliable, they may instead choose to start from ground zero and collect all of the data manually. In that case, loading the data from a spreadsheet might be a desirable feature but is no longer a hard requirement. Since the infrastructure equipment is being monitored using a BMS, they might specify integration with their existing BMS as a requirement for their DCIM.

Data Center B has entirely different requirements. It doesn’t have existing data in spreadsheets, so they need to collect the asset data as quickly and accurately as possible. They may specify auto-discovery as a requirement for their DCIM solution. In addition, they have infrastructure equipment which needs to be monitored, so they will want the DCIM to be able to collect real-time data down to the rack level. Finally, they want to be able to monitor server power and CPU usage, so they will want a DCIM which can communicate with their servers.

Prior to choosing a DCIM solution, spend time determining what information is required to manage the data center. Start with the primary management goals such as increasing availability, meeting service level agreements, increasing data center efficiency and providing upper-level management reports on the current and future state of the data center. Next, determine the information that you need to accomplish these high-level goals. A sample of questions you might ask includes the following:

  • What data do I need to measure availability?
  • What data do I need to measure SLA compliance?
  • What data do I need to measure data center efficiency?
  • What data do I need to forecast capacity of critical resources?
  • What data do I need for upper-level management reports?

DEFINING REQUIREMENTS

These questions will begin to define the scope of the requirements for a DCIM solution. As you start to narrow down the focus of the questions, you will also be defining more specific DCIM requirements.

For example, you might start with a requirement for the DCIM to provide real-time monitoring. This is still rather vague, however, so additional questions must be asked to narrow the focus.

How do you define “real-time” data? To some, real-time data might mean thousands of data points per second with continuous measurement. To others, it might mean measuring data points every few minutes or once an hour. There is a vast difference between a system which does continuous measurement and one which measures once an hour. Without knowing how you are going to use the data, you will likely end up buying the wrong solution. Either you will purchase a solution which doesn’t provide the data granularity you want or you will over-spend on a system which provides continuous measurement when all you want is trending data every 15 minutes.

What data center equipment do you want to monitor? The answer to this question may have the biggest impact on the solution you choose. If you have some data center equipment which communicates using SNMP and other equipment which communicates using Modbus, for example, you will want to choose a DCIM solution which can speak both of these protocols. If you want the DCIM tool to retrieve detailed server information, you will want to choose a DCIM solution which can speak IPMI and other server protocols. Prior to talking to potential DCIM vendors, prepare a list of the equipment from which you want to retrieve information.
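
One way to capture this requirement when evaluating vendors is to think of each protocol as an adapter behind a common read interface. The sketch below is illustrative only; the classes are placeholders, not a real SNMP, Modbus or IPMI implementation:

    # One collector interface, one adapter per protocol. A real deployment
    # would call an SNMP, Modbus or IPMI library inside each adapter.
    from abc import ABC, abstractmethod

    class DeviceReader(ABC):
        @abstractmethod
        def read_power_watts(self, device_id: str) -> float:
            ...

    class SnmpReader(DeviceReader):
        def read_power_watts(self, device_id: str) -> float:
            raise NotImplementedError   # would issue an SNMP GET for the power OID

    class ModbusReader(DeviceReader):
        def read_power_watts(self, device_id: str) -> float:
            raise NotImplementedError   # would read the relevant holding register

    class IpmiReader(DeviceReader):
        def read_power_watts(self, device_id: str) -> float:
            raise NotImplementedError   # would query the BMC's power sensor

    # The DCIM tool you shortlist must cover every protocol in this mapping.
    readers = {"snmp": SnmpReader(), "modbus": ModbusReader(), "ipmi": IpmiReader()}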

Similar questions should be asked for each facet of DCIM — asset management, change management, real-time monitoring, workflow, and so on — to form a specific list of DCIM requirements. Prioritize the information you need so you can narrow your focus to those DCIM solutions which address your most important requirements.

http://www.datacenterknowledge.com/archives/2012/05/31/selecting-dcim-tools-f... 


Thursday, May 24, 2012

Why Do I Need #DCIM ?

By: Michael Potts

There are a number of benefits to implementing a Data Center Infrastructure Management (DCIM) solution.  To illustrate this point, consider the primary components of data center management.

In the Design phase, DCIM provides key information in designing the proper infrastructure.  Power, cooling and network data at the rack level help to determine the optimum placement of new servers.  Without this information, data center managers have to rely on guesswork to make key decisions on how much equipment can be placed into a rack.  Too little equipment strands valuable data center resources (space, power and cooling).  Too much equipment increases the risk of shutdown due to exceeding the available resources.

In the Operations phase, DCIM can help to enforce standard processes for operating the data center.  These consistent, repeatable processes reduce operator errors which can account for as much as 80% of system outages.

In the Monitoring phase, DCIM provides operational data, including environmental data (temperature, humidity, air flow), power data (at the device, rack, zone and data center level), and cooling data.  In addition, DCIM may also provide IT data such as server resources (CPU, memory, disk, network).  This data can be used to alert management when thresholds are exceeded, reducing the mean time to repair and increasing availability.

In the Predictive Analysis phase, DCIM analyzes the key performance indicators gathered during the monitoring phase as input into the planning phase. Capacity planning decisions are made during this phase.  Tracking the usage of key resources over time, for example, can provide valuable input to the decision on when to purchase new power or cooling equipment.
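
A minimal sketch of the kind of trending involved: fit a straight line to recent peak-power readings and project when installed capacity runs out (the readings and capacity are invented for the example):

    # Project monthly peak power forward with a simple least-squares line
    # to estimate when the installed capacity will be exhausted.
    monthly_peak_kw = [410, 422, 431, 447, 455, 470]   # last six months
    capacity_kw = 600

    n = len(monthly_peak_kw)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_peak_kw) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_peak_kw))
             / sum((x - mean_x) ** 2 for x in xs))      # kW of growth per month

    months_left = (capacity_kw - monthly_peak_kw[-1]) / slope
    print(f"Growth: {slope:.1f} kW/month; capacity reached in roughly {months_left:.0f} months")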

In the Planning phase, DCIM can be used to analyze “what if” scenarios such as server refreshes, impact of virtualization, and equipment moves, adds and changes.

If you could summarize DCIM in one word, it would be information.  Every facet of data center management revolves around having complete and accurate information.

DCIM provides the following benefits:

•  Access to accurate, actionable data about the current state and future needs of the data center

•  Standard procedures for equipment changes

•  Single source of truth for asset management

•  Better predictability for space, power and cooling capacity means increased time to plan

•  Enhanced understanding of the present state of the power and cooling infrastructure and environment increases the overall availability of the data center

•  Reduced operating cost from energy usage effectiveness and efficiency

In his report, Datacenter Infrastructure Management Software: Monitoring, Managing and Optimizing the Datacenter, Andy Lawrence summed up the impact of DCIM by saying, “We believe it is difficult to achieve the more advanced levels of datacenter maturity, or of datacenter effectiveness generally, without extensive use of DCIM software.” He went on to add that “The three main drivers of investment in DCIM software are economics (mainly through energy-related savings), improved availability, and improved manageability and flexibility.”

One of the primary benefits of DCIM is the ability to answer questions such as the following:

1. Where are my data center assets located?

2. Where is the best place to place a new server?

3. Do I have sufficient space, power, cooling and network connectivity to meet my needs for the coming months?  Next year?  Next five years?

4. An event occurred in the data center — what happened, what services are impacted, where should the technicians go to resolve the issue?

5. Do I have underutilized resources in my data center?

6. Will I have enough power or cooling under fault or maintenance conditions?

Without the information provided by DCIM, the questions become much more difficult to answer.

Thursday, May 3, 2012

Power Dense Data Centers Seek Thermal Controls

Jeff Klaus is director, Intel Data Center Manager (DCM) Solutions. Jeff leads a global team that designs, builds, sells and supports Intel DCM.

Your data center has a maximum power capacity that must cover both server and IT device power consumption and thermal cooling requirements. Balancing these two rivaling demands has become more difficult in recent years as data center power consumption has increased from an average of 500 watts per square foot to today's average of 1,500 watts per square foot. The thermal effect of more high-performance-density (HPD) hardware has frequently led to greater data center heat production.

One way to address this increase in data center heat production is to achieve a more efficient thermal cooling infrastructure[1] using the following emerging best practices for thermal monitoring and control.

Build Real-Time Thermal Data Center Maps

Real-time thermal sensors on every server platform enable building real-time thermal maps of the data center. Real-time monitoring of power and thermal events in individual servers and racks, in addition to sections of rooms, enables you, as the manager, to proactively identify failure situations and then take action based on the specific situation, be it over-cooling, under-cooling, hot spots and/or computer room air conditioning (CRAC) failures.

These maps can then be used to create thermal profiles that record and report thermal trends and events for long-term planning as well.

By monitoring and controlling CRAC supply temperatures based on real-time data center ambient inlet temperatures, you can further identify hotspots. Both over-cooling and under-cooling are frequently due to a lack of information about actual ambient temperatures at the rack and room levels.

Further, a real-time thermal profile map reports the thermal trends needed to justify how much to increase operational temperatures for a potentially significant reduction in cooling energy costs. In several pilot projects with data centers located around the world, the Intel Data Center Manager (DCM) Solutions team has seen that increasing energy efficiency and raising the temperature based on accurate readings can net $50,000/year in savings for every degree the data center is raised.

Actual v. Theoretical Data

Much of today's available power and thermal data is based on estimates or manufacturers' power ratings (nameplates), not on actual consumption. This data can deviate from actual consumption by as much as 40 percent.

Real-time rack- and room-level thermal mapping identifies cooling inefficiencies and enables you to take the appropriate cooling action sooner rather than later. Sun Microsystems reported that data center managers can save four percent in energy costs for every degree of upward change in the temperature set point.[2] It is also reported that cooling can account for 40-50 percent of the total energy used in data centers, so improving efficiencies in this area has a significant impact on overall operating costs.

Case Study

Microsoft wanted to find out how much money could be saved by raising the cooling set point in the data center. The company tested the impact of slightly higher temperatures in its Silicon Valley data center.

“We raised the floor temperature two to four degrees, and saved $250,000 in annual energy costs,” said Don Denning, Critical Facilities Manager at Lee Technologies, which worked with Microsoft on the project.[3]

When CIOs and their facilities teams wrestle with their HPD data centers' rivaling demands for more server power and greater thermal cooling efficiencies, they should ensure these best practices are part of their thermal controls planning.
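
The per-degree figures above translate into a simple savings estimate. In the sketch below, the baseline cooling bill is an assumption for illustration; the roughly four-percent-per-degree factor is the Sun Microsystems estimate quoted earlier:

    # Estimated savings from raising the cooling set point.
    annual_cooling_cost = 2_000_000   # assumed baseline, USD/year
    savings_per_degree = 0.04         # ~4% per degree of upward set-point change
    degrees_raised = 3

    savings = annual_cooling_cost * savings_per_degree * degrees_raised
    print(f"Estimated annual savings: ${savings:,.0f}")   # $240,000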

Monday, April 30, 2012

Driving Under the Limit: Data Center Practices That Mitigate Power Spikes

Every server in a data center runs on an allotted power cap that is programmed to withstand the peak-hour power consumption level. When an unexpected event causes a power spike, however, data center managers can be faced with serious problems. For example, in the summer of 2011, unusually high temperatures in Texas created havoc in data centers. The increased operation of air conditioning units affected data center servers that were already running close to capacity.

Preparedness for unexpected power events requires the ability to rapidly identify the individual servers at risk of power overload or failure. A variety of proactive energy management best practices can not only provide insights into the power patterns leading up to problematic events, but can offer remedial controls that avoid equipment failures and service disruptions.

Best Practice: Gaining Real-Time Visibility

Dealing with power surges requires a full understanding of your nominal data center power and thermal conditions. Unfortunately, many facilities and IT teams have only minimal monitoring in place, often focusing solely on return air temperature at the air-conditioning units.

The first step toward efficient energy management is to take advantage of all the power and thermal data provided by today's hardware. This includes real-time server inlet temperatures and power consumption data from rack servers, blade servers, and the power-distribution units (PDUs) and uninterruptible power supplies (UPSs) related to those servers. Data center energy monitoring solutions are available for aggregating this hardware data and for providing views of conditions at the individual server or rack level or for user-defined groups of devices.

Unlike predictive models that are based on static data sets, real-time energy monitoring solutions can uncover hot spots and computer room air handler (CRAH) failures early, when proactive actions can be taken.

By aggregating server inlet temperatures, an energy monitoring solution can help data center managers create real-time thermal maps of the data center. The solutions can also feed data into logs to be used for trending analysis as well as in-depth airflow studies for improving thermal profiles and for avoiding over- or undercooling. With adequate granularity and accuracy, an energy monitoring solution makes it possible to fine-tune power and cooling systems, instead of necessitating designs to accommodate the worst-case or spike conditions.
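
The aggregation behind such a thermal map is simple: roll server inlet temperatures up to the rack level and flag outliers. A minimal sketch with invented readings:

    # Roll individual server inlet temperatures up to rack level and flag hot spots.
    from collections import defaultdict
    from statistics import mean

    inlet_temps_c = [
        ("rack-12", "web-01", 24.1), ("rack-12", "web-02", 27.8),
        ("rack-13", "db-01", 22.3),  ("rack-13", "db-02", 22.9),
    ]

    by_rack = defaultdict(list)
    for rack, server, temp in inlet_temps_c:
        by_rack[rack].append(temp)

    HOT_SPOT_C = 27.0   # alert threshold chosen for the example
    for rack, temps in by_rack.items():
        status = "HOT SPOT" if max(temps) > HOT_SPOT_C else "ok"
        print(f"{rack}: avg {mean(temps):.1f} C, max {max(temps):.1f} C [{status}]")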

Best Practice: Shifting From Reactive to Proactive Energy Management

Accurate, real-time power and thermal usage data also makes it possible to set thresholds and alerts and to introduce controls that enforce policies for optimized service and efficiency. Real-time server data provides immediate feedback about power and thermal conditions that can affect server performance and ultimately end-user services.

Proactively identifying hot spots before they reach critical levels allows data center managers to take preventative actions and also creates a foundation for the following:

  • Managing and billing for services based on actual energy use
  • Automating actions relating to power management in order to minimize the impact on IT or facilities teams
  • Integrating data center energy management with other data center and facilities management consoles

Best Practice: Non-Invasive Monitoring

To avoid affecting the servers and end-user services, data center managers should look for energy management solutions that support agentless operation. Advanced solutions facilitate integration, with full support for Web Services Description Language (WSDL) APIs, and they can coexist with other applications on the designated host server or virtual machine.

Today’s regulated data centers also require that an energy management solution offer APIs designed for secure communications with managed nodes.

Best Practice: Holistic Energy Optimization

Real-time monitoring provides a solid foundation for energy controls, and state-of-the-art energy management systems enable dynamic adjustment of the internal power states of data center servers. The control functions support the optimal balance of server performance and power, keeping power under the cap to avoid spikes that would otherwise exceed equipment limits or energy budgets.

Intelligent aggregation of data center power and thermal data can be used to drive optimal power management policies across servers and storage area networks. In real-world use cases, intelligent energy management solutions are producing 20–40 percent reductions in energy waste.

These increases in efficiency ameliorate the conditions that may lead to power spikes, and they also enable other high-value benefits including prolonged business continuity (by up to 25 percent) when a power outage occurs. Power can also be allocated on a priority basis during an outage, giving maximum protection to business-critical services.

Intelligent power management for servers can also dramatically increase rack density without exceeding existing rack-level power caps. Some companies are also using intelligent energy management approaches to introduce power-based metering and energy cost charge-backs to motivate conservation and more fairly assign costs to organizational units.

Best Practice: Decreasing Data Center Power Without Affecting Performance

A crude energy management solution might mitigate power surges by simply capping the power consumption of individual servers or groups of servers, but because performance is directly tied to power, a blanket cap can also degrade service. An intelligent energy management solution instead dynamically balances power and performance in accordance with the priorities set by the particular business.

The features required for fine-tuning power in relation to server performance include real-time monitoring of actual power consumption and the ability to maintain maximum performance by dynamically adjusting the processor operating frequencies. This requires a tightly integrated solution that can interact with the server operating system or hypervisor using threshold alerts.
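
Conceptually, the control loop looks like the sketch below; read_power_watts() and set_cpu_frequency_pct() are placeholders for whatever the platform actually exposes (a BMC interface, an OS frequency governor, a vendor SDK), not a real API:

    # Measure actual draw, step processor frequency down only while over the cap,
    # and restore it when there is headroom again.
    import time

    POWER_CAP_W = 4000   # power budget for this group of servers
    STEP_PCT = 5         # frequency adjustment per control interval

    def read_power_watts() -> float:
        raise NotImplementedError    # platform-specific measurement goes here

    def set_cpu_frequency_pct(pct: int) -> None:
        raise NotImplementedError    # platform-specific actuation goes here

    def control_loop(interval_s: float = 10.0) -> None:
        freq_pct = 100
        while True:
            power = read_power_watts()
            if power > POWER_CAP_W and freq_pct > 50:
                freq_pct -= STEP_PCT    # shed power, accept lower performance
            elif power < 0.9 * POWER_CAP_W and freq_pct < 100:
                freq_pct += STEP_PCT    # headroom available, restore performance
            set_cpu_frequency_pct(freq_pct)
            time.sleep(interval_s)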

Field tests of state-of-the-art energy management solutions have proven the efficacy of an intelligent approach for lowering server power consumption by as much as 20 percent without reducing performance. At BMW Group,[1] for example, a proof-of-concept exercise determined that an energy management solution could lower consumption by 18 percent and increase server efficiency by approximately 19 percent.

Similarly, by adjusting the performance levels, data center managers can more dramatically lower power to mitigate periods of power surges or to adjust server allocations on the basis of workloads and priorities.

Conclusions

Today, the motivations for avoiding power spikes include improving the reliability of data center services and curbing runaway energy costs. In the future, energy management will likely become more critical with the consumerization of IT, cloud computing and other trends that put increased service—and, correspondingly, energy—demands on the data center.

Bottom line, intelligent energy management is a critical first step to gaining control of the fastest-increasing operating cost for the data center. Plus, it puts a data center on a transition path towards more comprehensive IT asset management. Besides avoiding power spikes, energy management solutions provide in-depth knowledge for data center “right-sizing” and accurate equipment scheduling to meet workload demands.

Power data can also contribute to more-efficient cooling and air-flow designs and to space analysis for site expansion studies. Power is at the heart of optimized resource balancing in the data center; as such, the intelligent monitoring and management of power typically yields significant ROI for best-in-class energy management technology.

Wednesday, April 25, 2012

Data Center Executives Must Address Many Issues in 2012

Analyst(s): Mike Chuba


Seemingly insatiable demand for new workloads and services at a time when most budgets are still constrained is the challenge facing most data center executives. We look at the specific areas they identified going into 2012.

Overview

Data center executives are caught in an awkward phase of the slow economic recovery, as they try to support new initiatives from the business without a commensurate increase in their budgets. Many will need to improve the efficiency of their workloads and infrastructure to free up money to support these emerging initiatives.

Key Findings

  • Data center budgets are not growing commensurate with demand.
  • Expect an 800% growth in data over the next five years, with 80% of it being unstructured.
  • Tablets will augment desktop and laptop computers, not replace them.
  • Data centers can consume 100 times more energy than the offices they support.
  • The cost of power is on par with the cost of the equipment.

Recommendations

  • It is not the IT organization's job to arrest the creation or proliferation of data. Rather, data center managers need to focus on storage utilization and management to contain growth and minimize floor space, while improving compliance and business continuity efforts.
  • Focus short term on cooling, airflow and equipment placement to optimize data center space, while developing a long-term data center design strategy that maximizes flexibility, scalability and efficiency.
  • Put in place security, data storage and usage guidelines for tablets and other emerging form factors in the short term, while deciding on your long-term objectives for support.
  • Use a business impact analysis to determine when, where and why to adopt cloud computing.

What You Need to Know

New workloads that are key to enterprise growth, latent demand for existing workloads as the general economy recovers, increased regulatory demands and the explosion in data growth all pose challenges for data center executives at a time when the budget is not growing commensurate with demand. Storage growth continues unabated. It is not unusual to hear sustained growth rates of 40% or more per year. To fund this growth, most organizations will have to reallocate their budgets from other legacy investment buckets. At the same time, they must focus on storage optimization to manage demand, availability and efficiency.

Analysis

"Nothing endures but change" is a quote attributed to Heraclitus, who lived over 2,500 years ago. However, his words seem applicable to the data center executive today. Pervasive mobility, a business environment demanding access to anything, anytime, anywhere and the rise of alternative delivery models, such as cloud computing, have placed new pressures on the infrastructure and operations (I&O) organization for support and speed. At the same time, a fitful economic environment has not loosened the budget purse strings sufficiently to fund all the new initiatives that many I&O organizations have identified.

This challenge of supporting today's accelerated pace of change, and delivering the efficiency, agility and quality of services their business needs to succeed, was top of mind for the more than 2,600 data center professionals gathered in Las Vegas on 5 December to 8 December 2011 for the annual Gartner U.S. Data Center Conference. It was a record turnout for this annual event, now in its 30th year. Our conference theme, "Heightened Risk, Unbounded Opportunities, Managing Complexity in the Data Center," spoke to the difficult task our attendees face while addressing the new realities and emerging business opportunities at a time when the economic outlook is still uncertain. The data center is being reshaped, as the transformation of IT into a service business has begun.

Our agenda reflected the complex, interrelated challenges confronting attendees. Attendance was particularly strong for the cloud computing and data center track sessions, followed by the storage, virtualization and IT operations track. The most popular analyst-user roundtables focused on these topics, and analysts in these spaces were in high demand for one-on-one meetings. We believe that the best-attended sessions and the results of the surveys conducted at the conference represent a reasonable benchmark for the kinds of issues that organizations will be dealing with in 2012.

We added a new track this year focused on the impact of mobility on I&O. The rapid proliferation of smart devices, such as tablets and smartphones, is driving dramatic changes in business and consumer applications and positively impacting bottom-line results. Yet, I&O plays a critical role in supporting these applications rooted in real-time access to corporate data anytime and anywhere and in any context, while still providing traditional support to the existing portfolio of applications and devices. As the next billion devices wanting access to corporate infrastructure are deployed, I&O executives have an opportunity to exhibit leadership and innovation — from contributing to establishing corporate standards, to anticipating the impact on capacity planning, to minimizing risk.

Electronic interactive polling is a significant feature of the conference, allowing attendees to get instantaneous feedback on what their peers are doing. The welcome address posed a couple of questions that set the tone for the conference. Attendees were first asked how their 2012 I&O budgets compared with their previous years' budgets (see Figure 1).

Figure 1. Budget Change in Coming Year vs. Current Year Spending

Source: Gartner (January 2012)

Comparing year-over-year data, we find an essentially unchanged share reporting budgetary growth (42%) and reduced budgets (26% vs. 25%). The most recent results reflect a gradual, but still challenging, economic climate. While hardly robust, it is a marked improvement from the somber mood that most end-user organizations were in at the end of 2008 and entering 2009. Subsequent track sessions that focused on cost optimization strategies and best practices were universally well attended throughout the week.

Now, modest budget changes may not be enough to sustain current modes of IT operations, let alone support emerging business initiatives. Organizations need to continue to look closely at improving efficiencies and pruning legacy applications that are on the back side of the cost-benefit equation, to free up the budget and lay the groundwork to support emerging workloads/applications.

The second issue we raised in the opening session was for attendees to identify the most significant data center challenge they will face in 2012, compared with previous years (see Figure 2; note that the voting options changed from year to year).

Figure 2. Most Significant Data Center Challenge in Coming Year (% of Respondents)

Source: Gartner (January 2012)

What was interesting was the more balanced distribution across the options. For those who have the charter to manage the storage environment, managing storage growth is an extremely challenging issue.

Top Five Challenges

NO. 1: DATA GROWTH

Data growth continues unabated, leaving IT organizations struggling to deal with how to fund the necessary storage capacity, how to manage these devices if they can afford them, and how they can archive and back up this data. Managing and storing massive volumes of complex data to support real-time analytics is increasingly becoming a requirement for many organizations, driving the need for not just capacity, but also performance. New technologies, architectures and deployment models can enable significant changes in storage infrastructure and management best practices now and in coming years, and assist in addressing these issues. We believe that it is not the job of IT to arrest the creation or proliferation of data. Rather, IT should focus on storage utilization and management to contain growth and minimize floor space, while improving compliance and business continuity efforts.

Tactically prioritize a focus on deleting data that has outlived its usefulness, and exploit technologies that allow for the reduction of redundant data.

NO. 2: DATA CENTER SPACE, POWER AND/OR COOLING

It is not surprising that data center space, power and/or cooling was identified as the second biggest challenge by our attendees. Data centers can consume 100 times more energy than the offices they support, which draws more budgetary attention in uncertain times. During the past five years, the power demands of equipment have grown significantly, imposing enormous pressures on the capacity of data centers that were built five or more years ago. Data center managers are grappling with cost, technology, environmental, people and location issues, and are constantly looking for ways to deliver a highly available, secure, flexible server infrastructure as the foundation for the business's mission-critical applications. On top of this is the building pressure to create a green environment. Our keynote interview with Frank Frankovsky, director of hardware design and supply chain at Facebook, drew considerable interest because of some of the novel approaches that company was taking to satisfy its rather unique computing requirements.

We recommend that data center executives focus short term on cooling, airflow and equipment placement to optimize their data center space, while developing a long-term data center design strategy that maximizes flexibility, scalability and efficiency. We believe that the decline in priority shown in the survey results reflects the fact that organizations have been focusing on improved efficiency of their data centers. Changes are being implemented and results are being achieved.

NO. 3: PRIVATE/PUBLIC CLOUD STRATEGY

Developing a private/public cloud strategy was the third most popular choice as the top priority, and mirrors the results we have seen in Gartner's separate surveys regarding the top technology priorities of CIOs. With many organizations well on their way to virtualized infrastructures, many are now either actively moving toward, or being pressured to move toward, cloud-based environments. Whether it is public, private or some hybrid version of cloud, attendees' questions focused on where do you go, how do you get there, and how fast should you move toward cloud computing.

We recommend that organizations develop a business impact analysis to determine when, where and why to adopt cloud computing. Ascertain where migrating or enhancing applications can deliver value, and look for the innovative applications that could benefit from unique cloud capabilities.

NO. 4 AND NO. 5: BUSINESS NEEDS

"Modernizing of our legacy applications" was fourth as the greatest challenge, and "Identifying and translating business requirements" was fifth and, in many ways, both relate to similar concerns. Meeting business priorities; aligning with shifts in the business; and bringing much-needed agility to legacy applications that might require dramatic shifts in architectures, processes and skill sets were common concerns among Data Center Conference attendees, in general.

We believe virtualization's decline as a top challenge reflects the comfort level that attendees have in the context of x86 server virtualization, and most of this conference's attendees are well down that path — primarily with VMware, but increasingly with other vendors as well. Our clients see the private cloud as an extension of their virtualization efforts; thus, interest in virtualization isn't waning, but is evolving to private cloud computing. Now is a good time to evaluate your virtualization "health" — processes, management standards and automation readiness. For many organizations, it is an appropriate time to benchmark their current virtualization approach against competitors and alternate providers, and broaden their virtualization initiatives beyond just the servers and across the portfolio — desktop, storage, applications, etc.

This year promises to be one of further market disruption and rapid evolution. Vendor strategies will be challenged and new paradigms will continue to emerge. To stay ahead of the industry curve, plan to join your peers at the 2012 U.S. Data Center Conference on 3 December to 6 December in Las Vegas.

Saturday, October 1, 2011

Seven best practices for increasing efficiency, availability and capacity: the enterprise datacenter design guide http://bit.ly/q2v7YH

Saturday, July 30, 2011

dcim datacenter FreeWP Maximizing data center efficiency, capacity, availability thru integrated infrastructure http://bit.ly/nIMHzn

Friday, July 29, 2011

datacenter: Maximizing data center efficiency, capacity and availability through integrated infrastructure http://bit.ly/nIMHzn

Monday, July 25, 2011

DCIM Readiness on the Rise as Significant Data Center Capacity Remains Unused http://bit.ly/rrrha7

Tuesday, June 28, 2011

3 Steps for Better Data Center Capacity Planning http://bit.ly/jQUWc7