
Thursday, April 11, 2013

Converting IT pain to operational gain

Summary: Advances in technology are outpacing the adoption of new capabilities. The issue: Large companies can’t keep up with the changes and are stuck running legacy information technology architectures and old datacenter technology.


Published in Transforming the Datacenter | April 11, 2013, 16:05 GMT (09:05 PDT)
This content is produced by our sponsor Microsoft.  It is not written by ZDNet editorial staff.
Advances in technology are outpacing the adoption of new capabilities, and large companies can’t keep up: they are stuck running legacy information technology architectures and old datacenter technology. The moving parts involved include:
  • Budget
  • Technical Transformation
  • Architecture
Each of these has significant bearing on the future. We have been riding the leading edge of a technology wave that is changing how we work, the computing devices we use, and the back-end systems that support our operations; how we adapt to that wave will separate winners from losers.
The changes have come quickly, and the innovations continue, often faster than big companies can respond. Much of this has happened during a slow economy and at a time when controlling costs is front and center on every IT manager’s to-do list. To further complicate things, government regulations are mandating long-term retention of data, which stresses capacity. Business managers are also demanding more sophisticated analyses of all this information to help them better market to customers. To top it off, ad hoc design changes generally lead to increased complexity over time, which increases the probability of error and triggers additional operational costs.
While these are not trivial matters, there is plenty of hope despite the challenges.
For now, IT decision-makers can achieve a degree of transformation even with their current infrastructures, as long as some flexibility and agility are baked in and the platform is stable.
As the economy begins to improve and companies can no longer ignore the depth of technical transformation that has occurred in recent years, many executives have become impatient at the underperformance of their datacenters. While there are no quick fixes, the good news is that by taking a fresh look at IT strategy and the design of the datacenter, it becomes clear that datacenter transformation is possible.
Transformation is essential to remain competitive in today’s fast-moving world. The way I see it, the path forward is to not think in terms of patching the datacenter. Organizations instead need to assess their IT needs in the context of longer-term business objectives and act accordingly.
Let’s admit it: this transformation is no small endeavor. Your datacenter needs to help you acquire and analyze the business intelligence you need to run smart operations. You need to be able to support a mobile workforce and communicate with mobile customers. You will also need to learn how best to manage those communications, as well as support and leverage machine-to-machine communications.
The silver lining is that addressing these needs is the first step in architecting a datacenter that meets your operational requirements; once that foundation is in place, you can think about what’s next.

Wednesday, March 21, 2012

How data center managers can prepare for the 10 most common surprises

1. Those high-density predictions finally are coming true: After rapid growth early in the century, projections of double-digit rack densities have been slow to come to fruition. Average densities hovered between 6.0 and 7.4 kW per rack from 2006 to 2009, but the most recent Data Center Users’ Group (DCUG) survey predicted average rack densities will reach 12.0 kW within three years. That puts a premium on adequate UPS capacity and power distribution, as well as cooling to handle the corresponding heat output (a rough sizing calculation appears below).

2. Data center managers will replace servers three times before they replace UPS or cooling systems: Server refreshes happen approximately every three years. Cooling and UPS systems are expected to last much longer, sometimes decades. That means the infrastructure organizations invest in today must be able to support (or, more accurately, scale to support) servers that may be two, three or even four generations removed from today’s models. What does that mean for today’s data center manager? It makes it imperative that today’s infrastructure technologies can scale to support future needs. Modular solutions can scale to meet both short- and long-term requirements.

3. Downtime is expensive: Everyone understands downtime is bad, but the actual costs associated with an unplanned outage are stunning. According to a Ponemon Institute study, an outage can cost an organization an average of about $5,000 per minute; that’s $300,000 in just an hour (a quick cost calculation appears below). The same study indicates the most common causes of downtime are UPS battery failure and exceeding UPS capacity. Avoid those problems by investing in the right UPS, adequately sized to support the load, and by proactively monitoring and maintaining batteries.

4. Water and the data center do not mix, but we keep trying: The first part of this probably isn’t a surprise. Sensitive IT equipment does not respond well to water. However, the aforementioned Ponemon study indicates 35 percent of all unplanned outages are the result of some type of water incursion. These aren’t just leaky valves; in fact, many water-related outages are the result of a spilled coffee or diet soda. The fix: check those valves, but more importantly, check the drinks at the door.

5. New servers use more power than old servers: Server consolidation and virtualization can shrink server inventory by huge numbers, but that doesn’t exactly equate to huge energy savings. New virtualized servers, especially powerful blade servers, can consume four or five times as much energy as those from the preceding generation (although they usually do it more efficiently). The relatively modest savings at the end of that consolidation project may come as a surprise. There is no fix for this, but prepare for it by making sure the infrastructure is adequate to support the power and cooling needs of these new servers.

6. Monitoring is a mess: IT managers have more visibility into their data centers than ever before, but accessing and making sense of the data that comes with that visibility can be a daunting task. According to a survey of data center professionals, data center managers use, on average, at least four different software platforms to manage their physical infrastructure. Forty-one percent of those surveyed say they produce three or more reports for their supervisors every month, and 34 percent say it takes three hours or more to prepare those reports. The solution? Move toward a single monitoring and management platform. Today’s DCIM solutions can consolidate that information and proactively manage the infrastructure to improve energy and operational efficiency, and even availability (a consolidation sketch appears below).

7. The IT guy is in charge of the building’s HVAC system: The gap between IT and Facilities is shrinking, and the lion’s share of the responsibility for both pieces is falling on IT professionals. Traditionally, IT and data center managers have had to work through Facilities when they needed more power or cooling to support increasing IT needs. That process is being streamlined, thanks in large part to those aforementioned DCIM solutions that increase visibility and control over all aspects of a building’s infrastructure. Forward-thinking data center managers are developing a DCIM strategy to help them understand this expansion of their roles and responsibilities.

8. That patchwork data center needs to be a quilt: In the past, data center managers freely mixed and matched components from various vendors because those systems worked together only tangentially. That is changing. The advent of increasingly intelligent, dynamic infrastructure technologies and monitoring and management systems has increased the amount of actionable data across the data center, delivering real-time modeling capabilities that enable significant operational efficiencies. IT and infrastructure systems can still work independently, but to truly leverage the full extent of their capabilities, integration is imperative.

9. Data center on demand is a reality: The days of lengthy design, order and deployment delays are over. Today there are modular, integrated, rapidly deployable data center solutions for any space. Integrated, virtually plug-and-play solutions that include rack, server, power and cooling can be installed easily in a closet or conference room. On the larger end, containerized data centers can be used to quickly establish a network or to add capacity to an existing data center. The solution to most problems is a phone call away.

10. IT loads vary, a lot: Many industries see extreme peaks and valleys in their network usage. Financial institutions, for example, may see heavy use during traditional business hours and virtually nothing overnight. Holiday shopping and tax seasons can also create unusual spikes in IT activity. Businesses that depend on their IT systems during these times need the capacity to handle those peaks, but often operate inefficiently during the valleys. A scalable infrastructure with intelligent controls can adjust to those highs and lows to ensure efficient operation (a simple scaling sketch appears below).

Source: http://www.datacenterworld.com/preparing-for-10-common-data-center-surprise
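To make item 1 concrete, here is a minimal back-of-the-envelope sizing sketch in Python. The rack count and design margin are illustrative assumptions, not survey data; the per-rack densities are the figures quoted above, and real UPS sizing must also account for redundancy, power factor and cooling overhead.

```python
# Rough UPS capacity check for the rack densities in item 1.
# RACKS and DESIGN_MARGIN are assumptions for illustration; the
# kW-per-rack figures come from the surveys cited above.
RACKS = 40                   # hypothetical facility size
KW_PER_RACK_TODAY = 7.4      # upper end of the 2006-2009 average
KW_PER_RACK_FUTURE = 12.0    # DCUG three-year projection
DESIGN_MARGIN = 1.2          # 20% headroom (assumed)

def ups_capacity_kw(racks: int, kw_per_rack: float, margin: float) -> float:
    """Minimum UPS capacity for the given racks, density and headroom."""
    return racks * kw_per_rack * margin

print(f"Today:  {ups_capacity_kw(RACKS, KW_PER_RACK_TODAY, DESIGN_MARGIN):,.0f} kW")
print(f"Future: {ups_capacity_kw(RACKS, KW_PER_RACK_FUTURE, DESIGN_MARGIN):,.0f} kW")
# Roughly 355 kW today vs. 576 kW at projected densities: that gap is
# why scalable power and cooling infrastructure matters.
```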
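Item 3's math is simple but worth seeing spelled out. The $5,000-per-minute figure is the Ponemon average cited above; the outage durations below are illustrative.

```python
# Back-of-the-envelope outage cost from item 3, using the Ponemon
# average of about $5,000 per minute. Durations are illustrative.
COST_PER_MINUTE = 5_000  # USD

def outage_cost(minutes: float) -> float:
    """Estimated cost of an unplanned outage of the given duration."""
    return minutes * COST_PER_MINUTE

for minutes in (5, 30, 60, 240):
    print(f"{minutes:>3}-minute outage: ${outage_cost(minutes):,.0f}")
# A 60-minute outage works out to $300,000, matching the figure above.
```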
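For item 6, here is a minimal sketch of the "single pane of glass" idea: merging readings from several monitoring platforms into one normalized view. The platform names, racks and metrics are hypothetical, and a real DCIM product does far more (timestamp reconciliation, alarming, trending).

```python
# Minimal sketch of monitoring consolidation (item 6). All source
# names and metrics below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Reading:
    source: str   # which monitoring platform produced the reading
    rack: str
    metric: str   # e.g. "power_kw" or "inlet_temp_c"
    value: float

def consolidate(feeds: list[list[Reading]]) -> dict[tuple[str, str], Reading]:
    """Merge per-platform feeds, keyed by (rack, metric), for one report."""
    merged: dict[tuple[str, str], Reading] = {}
    for feed in feeds:
        for reading in feed:
            # Last writer wins here; a real DCIM tool reconciles timestamps.
            merged[(reading.rack, reading.metric)] = reading
    return merged

bms_feed = [Reading("BMS", "rack-01", "inlet_temp_c", 24.5)]
ups_feed = [Reading("UPS-manager", "rack-01", "power_kw", 6.8)]
for (rack, metric), r in sorted(consolidate([bms_feed, ups_feed]).items()):
    print(f"{rack} {metric} = {r.value} (via {r.source})")
```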
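Finally, a toy illustration of item 10: bringing infrastructure modules online and offline as load varies so capacity tracks the peaks without running everything through the valleys. The hourly load profile, module size and safety margin are invented for illustration.

```python
import math

# Toy scaling logic for item 10. The load profile, MODULE_KW and
# SAFETY_MARGIN are invented values, not measurements.
HOURLY_LOAD_KW = [120, 90, 60, 45, 50, 180, 320, 400, 380, 350, 200, 140]
MODULE_KW = 100       # capacity per scalable module (assumed)
SAFETY_MARGIN = 1.15  # keep 15% headroom above the measured load

def modules_needed(load_kw: float) -> int:
    """Smallest number of modules covering the load plus headroom."""
    return max(1, math.ceil(load_kw * SAFETY_MARGIN / MODULE_KW))

for hour, load in enumerate(HOURLY_LOAD_KW):
    print(f"hour {hour:02d}: {load:3d} kW -> {modules_needed(load)} module(s) online")
```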