Tuesday, August 21, 2012

A Greener Field: SDNs: Love 'em or Leave 'em?

Are Software Defined Networks (SDN) on your short list yet? Two recent surveys suggest it depends on where you work. I say that, given our recent experiences with SDN, within five years everyone will adopt or have definitive plans to adopt SDN technology. Here’s why:

SDNs, for those who’ve been sleeping with the pinecones of late, move intelligence previously locked within proprietary switching and routing hardware into open software. To put that in “networkese,” we separate the control plane from the forwarding plane using a protocol like OpenFlow (a minimal sketch of that separation follows the list below). As such, SDN delivers eight benefits:

  • Agility. OpenFlow-based SDNs create flexibility in how the network is used, operated, and sold. The software that governs it can be written by enterprises and service providers using ordinary software environments.
  • Speed. SDNs promote rapid service introduction through customization, because network operators can implement the features they want in software they control, rather than having to wait for a vendor to put it into plans for their proprietary products.
  • Cost Savings. Software Defined Networking lowers operating expenses and results in fewer errors and less network downtime because it enables automated configuration of the network and reduces manual configuration.
  • Better Management. OpenFlow-based SDN enables virtualization of the network, and therefore the integration of the network with computing and storage. This allows the entire IT operation to be governed more sleekly with a single viewpoint and toolset.
  • Planning. OpenFlow can be easily integrated with computing for resource management and maintenance.
  • Focus. OpenFlow-based SDN can better align the network with business objectives.
  • More Competition. As a standard way of conveying flow-table information to the network devices, it fosters open, multi-vendor markets.
  • Table Conversation. Instead of saying you work in networking at the next party, you can now say “I program the enterprise.” How cool is that?
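For readers who prefer code to slogans, here is a minimal sketch of what “separating the control plane from the forwarding plane” looks like. It is a toy Python model, not the real OpenFlow wire protocol: every class, field and port name is invented for illustration, and a production deployment would push flow entries from a controller platform (Ryu, ONOS, etc.) to switch hardware via OpenFlow messages.

    from dataclasses import dataclass, field

    @dataclass
    class FlowEntry:
        match: dict          # e.g. {"dst_ip": "10.0.0.5"}: header fields to match
        action: str          # e.g. "output:port2" or "drop"
        priority: int = 0

    @dataclass
    class SwitchDataPlane:
        """Forwarding plane: blindly applies whatever flow table it was given."""
        flow_table: list = field(default_factory=list)

        def install(self, entry: FlowEntry) -> None:
            # In real SDN this would arrive as an OpenFlow FLOW_MOD message.
            self.flow_table.append(entry)
            self.flow_table.sort(key=lambda e: e.priority, reverse=True)

        def forward(self, packet: dict) -> str:
            for entry in self.flow_table:
                if all(packet.get(k) == v for k, v in entry.match.items()):
                    return entry.action
            return "send_to_controller"   # table miss: punt to the control plane

    class Controller:
        """Control plane: the forwarding policy lives here, in ordinary software."""
        def __init__(self, switches):
            self.switches = switches

        def push_policy(self) -> None:
            for sw in self.switches:
                sw.install(FlowEntry({"dst_ip": "10.0.0.5"}, "output:port2", priority=10))
                sw.install(FlowEntry({}, "drop", priority=0))   # default: drop

    sw = SwitchDataPlane()
    Controller([sw]).push_policy()
    print(sw.forward({"dst_ip": "10.0.0.5"}))     # -> output:port2
    print(sw.forward({"dst_ip": "192.168.1.9"}))  # -> drop

Because the policy is just ordinary software, an operator can change it by editing and redeploying code rather than waiting on a switch vendor’s firmware roadmap, which is the agility and speed argument in the list above.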

It’s no wonder, then, that in a recent survey of service providers, Infonetics Research found that 80 percent of respondents are including OpenFlow in their purchase considerations. Rapid delivery of new services is essential for service providers, and it’s a key benefit of SDN.

"While many uncertainties surround SDNs due to their newness, our survey confirms that the programmable network movement is real, with a good majority of service providers worldwide considering or planning purchases of SDN technologies to simplify network provisioning and to create services and virtual networks in ways not previously possible,” notes Michael Howard, co-founder and principal analyst for carrier networks at Infonetics Research.

On the more “curmudgeonly” side of life you have Mike Fratto noting that enterprises haven’t yet gone gaga over the whole SDN thing. “Enterprises aren't really ready for SDN quite yet, as the results from a recent InformationWeek survey of 250 IT professionals showed. Some 70% of respondents said they weren't even going to start testing SDN for at least a year.”

So which is it for you? Are you a service provider kind of guy or an enterprise kind of gal?

Monday, August 20, 2012

Virtualization of Data Centers: New Options in the Control & Data Planes (Part II)


RAGHU KONDAPALLI
LSI Corp.

This Industry Perspectives article is the second in a series of three that analyzes the network-related issues being caused by the Data Deluge in virtualized data centers, and how these are having an effect on both cloud service providers and the enterprise. The focus of the first article was on the overall effect server virtualization is having on storage virtualization and traffic flows in the datacenter network. This article dives a bit deeper into the network challenges in virtualized data centers as well as the network management complexities and control plane requirements needed to address those challenges.

Server Virtualization Overhead

Server virtualization has enabled tens to hundreds of VMs per server in data centers using multi-core CPU technology. As a result, packet processing functions, such as packet classification, routing decisions, encryption/decryption, etc., have increased exponentially. Because discrete networking systems may not scale cost-effectively to meet these increased processing demands, some changes are also needed in the network.

Networking functions that are implemented in software in network hypervisors are not very efficient, because x86 servers are not optimized for packet processing. The control plane, therefore, needs to be scaled somehow by adding communications processors capable of offloading network control tasks, and both the control and data planes stand to benefit substantially from hardware assistance provided by such function-specific acceleration.

The table below shows the effect on packet processing overhead of virtualizing 1,000 servers. As shown, by mapping each CPU core to four virtual machines (VMs), and assuming 1 percent traffic management overhead with a 25 percent east-west traffic flow, the network management overhead increases by a factor of 32 in this example of a virtualized data center.

[Table: Traditional vs. virtualized data centers. This table shows the effect on network management overhead of virtualizing 1,000 servers.]
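The 32x figure is easiest to see as simple endpoint arithmetic. The sketch below reproduces one plausible reading of the example; the cores-per-server value is an assumption (it is not stated in the text above), so treat the output as illustrative only.

    # Back-of-the-envelope estimate of how many endpoints the network must manage
    # before and after virtualizing 1,000 servers.
    # ASSUMPTION: 8 cores per server (not stated in the article);
    # the article maps 4 VMs onto each core.
    servers = 1_000
    cores_per_server = 8      # assumed
    vms_per_core = 4          # from the article

    physical_endpoints = servers
    virtual_endpoints = servers * cores_per_server * vms_per_core

    print(virtual_endpoints // physical_endpoints)   # -> 32 times more endpoints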

Virtual Machine Migration

Support for VM migration among servers, either within one server cluster or across multiple clusters, creates additional management complexity and packet processing overhead. IT administrators may decide to move a VM from one server to another for a variety of reasons, including resource availability, quality-of-experience, maintenance, and hardware/software or network failures. The hypervisor handles these VM migration scenarios by first reserving a VM on the destination server, then moving the VM to its new destination, and finally tearing down the original VM.

Hypervisors are not capable of the timely generation of address resolution protocol (ARP) broadcasts to notify of the VM moves, especially in large-scale virtualized environments. The network can even become so congested from the control overhead occurring during a VM migration that the ARP messages fail to get through in a timely manner. With such a significant impact on network behavior being caused by rapid changes in connections, ARP messages and routing tables, existing control plane solutions need an upgrade to more scalable architectures.
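To make the notification problem concrete, the sketch below uses Scapy to emit the kind of gratuitous ARP broadcast that has to go out after a VM move so that upstream switches and peers relearn where the VM now lives. The addresses and interface are placeholders; this illustrates the mechanism only and is not any particular hypervisor’s implementation (some platforms announce moves with RARP or controller-driven table updates instead).

    # Sketch: announce a migrated VM's MAC/IP from its new host with a
    # gratuitous ARP broadcast (requires scapy and root privileges).
    from scapy.all import ARP, Ether, sendp

    VM_IP  = "10.0.0.42"            # placeholder: migrated VM's IP
    VM_MAC = "00:16:3e:aa:bb:cc"    # placeholder: migrated VM's MAC
    IFACE  = "eth0"                 # placeholder: uplink on the destination host

    gratuitous_arp = (
        Ether(src=VM_MAC, dst="ff:ff:ff:ff:ff:ff") /
        ARP(op=2, hwsrc=VM_MAC, psrc=VM_IP,
            hwdst="ff:ff:ff:ff:ff:ff", pdst=VM_IP)
    )
    sendp(gratuitous_arp, iface=IFACE, verbose=False)

At the scale the article describes, thousands of such announcements plus the resulting table churn are exactly the control-plane load that discrete systems struggle to absorb.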

Multi-tenancy and Security

Owing to the high costs associated with building and operating a data center, many IT organizations are moving to a multi-tenant model where different departments or even different companies (in the cloud) share a common infrastructure of virtualized resources. Data protection and security are critical needs in multi-tenant environments, which require logical isolation of resources without dedicating physical resources to any customer.

The control plane must, therefore, provide secure access to data center resources and be able to change the security posture dynamically during VM migrations. The control plane may also need to implement customer-specific policies and Quality of Service (QoS) levels.
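As a concrete illustration, the sketch below models the kind of per-tenant record a control plane might carry with a VM as it migrates, so that isolation, firewalling and QoS follow the workload. Every name and field here is hypothetical; real platforms express the same idea through their own security-group and policy constructs.

    from dataclasses import dataclass

    @dataclass
    class TenantPolicy:
        tenant_id: str
        segment_id: int          # logical isolation segment (e.g. a VLAN or VNI)
        allowed_ports: tuple     # simple allow-list firewall rule
        qos_mbps: int            # bandwidth guarantee for the tenant

    def apply_on_migration(vm_id: str, policy: TenantPolicy, dest_host: str) -> None:
        """Re-program isolation, firewall and QoS at the VM's new location."""
        print(f"{dest_host}: bind {vm_id} to segment {policy.segment_id}, "
              f"allow ports {policy.allowed_ports}, cap at {policy.qos_mbps} Mb/s")

    apply_on_migration("vm-117",
                       TenantPolicy("tenant-a", 5001, (80, 443), 500),
                       dest_host="host-rack4-07")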

Service Level Agreements and Resource Metering

The network-as-a-service paradigm requires active resource metering to ensure SLAs are maintained. Resource metering through the collection of network statistics is useful for calculating return on investment, and evaluating infrastructure expansion and upgrades, as well as for monitoring SLAs.

The network monitoring tasks are currently spread across the hypervisor, legacy management tools, and some newer infrastructure monitoring tools. Collecting and consolidating this management information adds further complexity to the control plane for both the data center operator and multi-tenant enterprises.
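A hedged sketch of that consolidation task follows: pull counters from several sources, line them up per tenant, and flag SLA breaches. The source names, metrics and thresholds are invented for illustration.

    # Sketch: consolidate per-tenant metrics from several monitoring sources
    # and flag SLA violations. All names and numbers are illustrative.
    sla = {"availability_pct": 99.9, "p95_latency_ms": 20}

    samples = [
        {"source": "hypervisor", "tenant": "a", "availability_pct": 99.95, "p95_latency_ms": 12},
        {"source": "legacy_nms", "tenant": "a", "availability_pct": 99.80, "p95_latency_ms": 25},
        {"source": "infra_tool", "tenant": "b", "availability_pct": 99.99, "p95_latency_ms": 9},
    ]

    def breaches(sample):
        out = []
        if sample["availability_pct"] < sla["availability_pct"]:
            out.append("availability")
        if sample["p95_latency_ms"] > sla["p95_latency_ms"]:
            out.append("latency")
        return out

    for s in samples:
        if breaches(s):
            print(f"tenant {s['tenant']} ({s['source']}): breach on {', '.join(breaches(s))}")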

The next article in the series will examine two ways of scaling the control plane to accommodate these additional packet processing requirements in virtualized data centers.

http://www.datacenterknowledge.com/archives/2012/08/20/virtualization-of-data...

Monday, July 23, 2012

VMware to buy Nicira for $1.26B in a strategic leap of faith — Cloud Computing News

Nicira’s CEO Martin Casado

VMware will shell out $1.26 billion in cash and an assumption of existing equity to buy Nicira, a company that has built software to do for networking what VMware has done for virtualizing computing. That’s a lot of money, especially when you consider how few production-level implementations there are of the software-defined networks that Nicira is building.

Nicira was created five years ago and has raised $50 million from investors that included Diane Greene, an original founder of VMware. It makes controller software that helps free the act of moving data and packets around a network from the constraints of networking hardware – an increasingly tough problem inside highly virtualized and webscale data centers. As I explained in a story on its launch (it was titled, “Meet Nicira. Yes, people will call it the VMware of networking”):

Nicira is one of several companies attempting to solve the problem that Greene helped create when she co-founded VMware to push hypervisors and virtualization. Once servers were virtualized, it created an easy way to separate computing from the physical infrastructure. The benefits of server virtualization were more-agile compute infrastructures — a developer would spin up a server in minutes as opposed to waiting days for approvals — as well as consolidating IT. Storage followed, but holding the whole virtualized infrastructure effort back was networking. Like a bird with its wings clipped, IT was tethered to the physical hardware by networking.

Nicira has plenty of paying customers including eBay, AT&T, Rackspace and NTT, but it’s unclear how many of these are running Nicira’s controllers in their production environments. With this purchase, my sense is that VMware is paying big money to get in on this space because its collaborations with Cisco have failed to deliver a true solution for many of its customers, and it can’t afford to be left behind as the wave of interest and eventual spending on software-defined networking hits a peak.

In a blog post covering the deal, VMware CTO Steve Herrod explains that Nicira will fit in with VMware’s vision of a software defined data center — although the specifics of that vision have yet to be unveiled. And it’s true that if we consider how Nicira customer Calligo is using the Nicira controller, it fits within what I would think of as a software-defined data center — essentially a pool of compute resources that are tied together with software, where the physical hardware can reside in different data centers and be pulled up for use as needed. The whole process is programmable and automated.

In pushing this model, VMware was always going to have to sign partnerships to deliver the software-defined networking elements, because its own VXLAN option was more of an encapsulation scheme than a true separation of the networking logic from the hardware pushing the bits. By buying Nicira, VMware has chosen a software-defined networking vendor that plays with open protocols such as OpenFlow but is still focused on keeping a proprietary edge to its business. That’s similar to the strategy VMware itself seems to be pursuing.

The deal is expected to close during the second half of this year. It includes approximately $1.05 billion in cash plus approximately $210 million of assumed unvested equity awards.


Olympic legacy to live on though data center | Datacenter Dynamics

The London Legacy Development Corporation announced another win for East London’s technology dreams, which this time includes a large data center at London’s Olympic Park.

UK data center company Infinity is one of the joint venture members behind the project.

Once the athletes and spectators have vacated the site, which will be renamed the Queen Elizabeth Olympic Park, workers from the JV called iCITY will move in to turn the Press and Broadcast Centre into what they say will become a world-class center for innovation and enterprise.

They claim the project will create 4,600 direct jobs on site as start-ups, media companies, a university, digital academy, restaurants, bars and more move in.

A further 2,000 jobs are expected to be created through the supply chain and consumer spending.

iCITY is a joint venture between real estate investment and advisory company Delancey and Infinity SDC.

Infinity will build a data center of between 250,000 and 350,000 sq ft with 90MW of capacity, designed for low-latency, high-capacity data center needs.

It will sit within what is currently one of the most digitally connected buildings in the world, set up as a 24-hour media hub for more than 800 journalists covering the Games.

The building was built in ten weeks and includes state-of-the-art utilities, power and digital connectivity, and 52,000 sq m of studio space over two double-storey floors.

It also has its own 200m long High Street with banks, travel agents and other facilities, a transport mall and a 12,000 sq m catering village.

The building’s 2,500 sq m roof is flat and covered with concrete, gravel, moss and logs to encourage invertebrates and other local wildlife.

Infinity SDC CEO Martin Lynch said the new data center will help the company move into new London markets around digital media but will also target large investment banks, insurers, international brokerages, telecommunications companies and outsourcers.

“This is a strategically important move for Infinity that sees us expand into a new sector,” Lynch said.

“The data center at the heart of this development will ensure low latency, high capacity and streaming environments that are ideally suited to animation, post production and broadcasting.”

The park will open in separate phases from 27 July 2013, following the removal of temporary venues and the construction of new roads, bridges and its own neighbourhood.

 

Thursday, July 12, 2012

Finland Wants to Become Sustainable Data Center Hub | Corporate Social Responsibility

Finland Wants to Become Sustainable Data Center Hub

Kajaani, the capital of the province of Kainuu in central Finland, wants to pitch itself as an ideal place for sustainable data center operations. Region representatives say the area has excellent power infrastructure coming from renewable sources, perfect natural cooling environments, strong fiber connectivity to Western and Eastern European internet exchanges, and strong governmental support.

As part of this business campaign, Invest in Kainuu will host an investment forum on October 26. Organized in collaboration with consulting firm BroadGroup, the forum will explore why Northern Europe is attracting companies like Google and Facebook to establish data center facilities there.

The region offers a number of benefits, such as the availability of significant energy capacity (120MW on site renewable and 400MW diverse grid connectivity) and zoned greenfield adjacent land, which is immediately available.

Kajaani has been selected by the Finnish Government to be the home of its new Super Computer Data Centre on a UPM site that previously housed a paper mill, and which is due to go live in the third quarter of 2012. The Super Computer project is due to establish the world's first zero-emissions Super Computer, tapping the local environment for 100 percent non-mechanical cooling (water and air resources).

"As data centers become more integrated with IT and cloud thinking then a different mindset may be applied to datacentre locations," said Steve Wallage, managing director of BroadGroup Consulting. "More broadly, we could think differently about data center procurement, towards smaller deployments at more diverse locations than simply the 'big shed'".

The company said in a recent report that data center location is receiving much wider consideration than previously. It noted that changes in procurement are occurring with corporates increasingly using broader IT and outsourcing specialists, who have a greater willingness to use different locations.

To find out more, visit the forum's website.

Image credit: Wikipedia

The Data Center Maturity Model and its Benefits to Data Center Owners and Operators - YouTube

Wednesday, June 6, 2012

#DCIM Yields Return on Investment

DCIM Yields Return on Investment

By: Michael Potts

As with any investment in the data center, the question of the return on the investment should be raised before purchasing a Data Center Infrastructure Management (DCIM) solution. In the APC white paper, “How Data Center Infrastructure Management Software Improves Planning and Cuts Operational Costs,” the authors highlight the savings from a DCIM solution saying, “The deployment of modern planning tools can result in hundreds of man-hours saved per year and thousands of dollars saved in averted downtime costs.”

DCIM will not transform your data center overnight, but it will begin the process. While it isn’t necessary to reach the full level of maturity before seeing benefits, the areas of benefit are significant and can bring results in the short-term. The three primary methods in which DCIM provides ROI are:

  • Improved Energy Efficiency
  • Improved Availability
  • Improved Manageability

DCIM LEADS TO IMPROVED ENERGY EFFICIENCY

In his blog, Dan Fry gets right to the heart of DCIM’s role in improving energy efficiency when he says, “To improve energy efficiency inside the data center, IT executives need comprehensive information, not isolated data. They need to be able to ‘see’ the problem in order to manage and correct it because, as we all know, you can’t manage what you don’t understand.”

The information provided by DCIM can help data center managers reduce energy consumption in several ways:

MATCHING SUPPLY WITH DEMAND

Oversizing is one of the biggest roadblocks to energy efficiency in the data center. In an APC survey of data center utilization, only 20 percent of respondents had a utilization of 60 percent or more, while 50 percent had a utilization of 30 percent or less. One of the primary factors for oversizing is the lack of power and cooling data to help make informed decisions on the amount of infrastructure required. DCIM solutions can provide information on both demand and supply to allow you to “right-size” the infrastructure, reducing overall energy costs by as much as 30 percent.

IDENTIFYING UNDER-UTILIZED SERVERS

As many as 10 percent of servers are estimated to be “ghost servers,” servers which are running no applications yet still consume 70 percent or more of the resources of a fully-utilized server. DCIM solutions can help to find these under-utilized servers, which could be decommissioned, re-purposed or consolidated, as well as servers which do not have power management functionality enabled, reducing IT energy usage and delaying the purchase of additional servers.
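A minimal sketch of the kind of filter a DCIM tool applies when hunting for ghost servers appears below; the utilization cutoff and the inventory records are assumptions for illustration, not figures from the white paper.

    # Sketch: flag likely "ghost" servers, powered on and drawing power
    # but doing almost no work. Records and thresholds are illustrative.
    inventory = [
        {"name": "app-01", "avg_cpu_pct": 45.0, "power_w": 310},
        {"name": "app-02", "avg_cpu_pct": 1.2,  "power_w": 240},   # candidate ghost
        {"name": "db-03",  "avg_cpu_pct": 0.4,  "power_w": 255},   # candidate ghost
    ]

    CPU_THRESHOLD_PCT = 2.0   # assumed cutoff for "running no applications"

    ghosts = [s for s in inventory if s["avg_cpu_pct"] < CPU_THRESHOLD_PCT]
    wasted_watts = sum(s["power_w"] for s in ghosts)
    print([s["name"] for s in ghosts], f"~{wasted_watts} W reclaimable")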

MEASURING THE IMPACT OF INFRASTRUCTURE CHANGES

DCIM tools can measure energy efficiency metrics such as Power Usage Effectiveness (PUE), Data Center Infrastructure Efficiency (DCiE) and Corporate Average Datacenter Efficiency (CADE). These metrics serve to focus attention on increasing the energy efficiency of data centers and to measure the results of changes to the infrastructure. In the white paper “Green Grid Data Center Power Efficiency Metrics: PUE and DCiE,” the authors lay out the case for the introduction of metrics to measure energy efficiency in the data center. The Green Grid believes that several metrics can help IT organizations better understand and improve the energy efficiency of their existing data centers as well as help them make smarter decisions on new data center deployments. In addition, these metrics provide a dependable way to measure their results against comparable IT organizations.
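Both Green Grid metrics reduce to a single division; here is a quick worked example with made-up meter readings:

    # PUE and DCiE per The Green Grid's definitions (illustrative readings).
    total_facility_kw = 1_500.0   # assumed: everything the utility meter sees
    it_equipment_kw   = 900.0     # assumed: servers, storage, network only

    pue  = total_facility_kw / it_equipment_kw          # lower is better; 1.0 is ideal
    dcie = it_equipment_kw / total_facility_kw * 100    # the same ratio as a percentage

    print(f"PUE = {pue:.2f}, DCiE = {dcie:.0f}%")       # -> PUE = 1.67, DCiE = 60%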

IMPROVED AVAILABILITY

DCIM solutions can improve availability in the following areas:

Understanding the Relationship Between Devices
A DCIM solution can help to answer questions such as “What systems will be impacted if I take the UPS down for maintenance?” It does this by understanding the relationship between devices, including the ability to track power and network chains. This information can be used to identify single points of failure and reduce downtime due to both planned and unplanned events.

Improved Change Management
When investigating an issue, examination of the asset’s change log allows problem managers to recommend a fix over 80 percent of the time, with a first fix rate of over 90 percent. This reduces the mean time to repair and increases system availability. DCIM systems which automate the change management process will log both authorized and unauthorized changes, increasing the data available to the problem manager and increasing the chances the issue can be quickly resolved.

Root Cause Analysis
One of the problems sometimes faced by data center managers is too much data. Disconnecting a router from the network might cause tens or hundreds of link-lost alarms for the downstream devices. It is often difficult to find the root cause amidst all of the “noise” associated with cascading events. By understanding the relationship between devices, a DCIM solution can help to narrow the focus to the single device — the router, in this case — which is causing the problem. By directing focus on the root cause, the problem can be resolved more quickly, reducing the associated downtime.
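A hedged sketch of that dependency-walking idea: given a device graph and a burst of alarms, report only the alarmed device that no other alarmed device feeds. The topology and alarm list are invented.

    # Sketch: suppress cascading alarms by walking an (invented) dependency graph.
    # feeds[x] lists the devices that receive connectivity through x.
    feeds = {
        "router-1": ["switch-a", "switch-b"],
        "switch-a": ["server-101", "server-102"],
        "switch-b": ["server-201"],
    }
    alarmed = {"router-1", "switch-a", "switch-b",
               "server-101", "server-102", "server-201"}

    def downstream_of(device):
        seen, stack = set(), [device]
        while stack:
            for child in feeds.get(stack.pop(), []):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

    shadowed = set().union(*(downstream_of(d) for d in alarmed))
    print(alarmed - shadowed)   # -> {'router-1'}: the root cause to chase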

IMPROVED MANAGEABILITY

DCIM solutions can improve manageability in the following areas:

Data Center Audits
Regulations such as Sarbanes-Oxley, HIPAA and CFR-11 increase the requirements for physical equipment audits. DCIM solutions provide a single source of data, greatly reducing the time and cost to complete the audits. Those DCIM tools utilizing asset auto-discovery and asset location mechanisms such as RFID can further reduce the effort to perform a physical audit.

Asset Management
DCIM can be used to determine the best place to deploy new equipment based on the availability of rack space, power, cooling and network ports. It then can be used to track all of the changes from the initial request through deployment, system moves and changes, all the way through to decommissioning. The DCIM solution can provide detailed information on thousands of assets in the data center including location, system configuration, how much power it is drawing, relationship to other devices, and so on, without having to rely on spreadsheets or home-grown tools.

Capacity Planning
With a new or expanded data center representing a substantial capital investment, the ability to postpone new data center builds could save millions of dollars. DCIM solutions can be used to reclaim capacity at the server, rack and data center levels to maximize space, power and cooling resources. Using actual device power readings instead of the overly conservative nameplate values will allow an increase in the number of servers supported by a PDU without sacrificing availability. DCIM tools can track resource usage over time and provide much more accurate estimates of when additional equipment needs to be purchased.
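The nameplate-versus-measured point is easy to quantify. The numbers below are invented, but they show why planning against real readings recovers stranded capacity:

    # Sketch: servers per rack PDU using nameplate vs. measured draw (all values assumed).
    pdu_capacity_w       = 17_300   # assumed usable capacity of the rack PDU
    nameplate_per_server = 750      # what the label / PSU rating claims
    measured_per_server  = 320      # what the DCIM tool actually records

    print(pdu_capacity_w // nameplate_per_server)  # -> 23 servers if you trust the label
    print(pdu_capacity_w // measured_per_server)   # -> 54 servers using real readings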


This is the fifth article in the Data Center Knowledge Guide to DCIM series. To download the complete DCK Guide to DCIM click here.

Thursday, May 31, 2012

Selecting a #DCIM Tool to Fit your #DataCenter ?

How Do I Select a DCIM Tool to Fit My Data Center?

  • By: Michael Potts


Although similar in many respects, every data center is unique. In choosing a Data Center Infrastructure Management (DCIM) solution, data center managers might choose very different solutions based on their needs.  It is somewhat analogous to two people choosing a lawn care service. One might simply want the grass mowed once a week.  The other might want edging, fertilizing, seeding and other services in addition to mowing.  As a result, they may choose different lawn service companies or, at the least, expect to pay very different amounts for the service they will be receiving.  Before choosing a DCIM solution, it is important to first know what it is you want to receive from the solution.

It is also important to remember that DCIM cannot single-handedly do the job of data center management.  It is only part of the overall management solution. While the DCIM tools, or sometimes a suite of tools working together, are a valuable component, a complete management solution must also incorporate procedures which allow the DCIM tools to be effectively used.

CHOOSING A DCIM SOLUTION

It is important to remember that DCIM solutions are about providing information. The question which must be asked (and answered) prior to choosing a DCIM solution is “What information do I need in order to manage my data center?” The answer to this question is the key to helping you choose the DCIM solution which will best suit your needs. Consider the following two data centers looking to purchase a DCIM solution.

DATA CENTER A

Data Center A has a lot of older, legacy equipment which is being monitored using an existing Building Management System (BMS). The rack power strips do not have monitoring capability. The management staff currently tracks assets using spreadsheets and Visio drawings. The data has not been meticulously maintained, however, and has questionable accuracy. The primary management goal is getting a handle on the assets they have in the data center.

DATA CENTER B

Data Center B is a new data center. It has new infrastructure equipment which can be remotely monitored through Simple Network Management Protocol (SNMP). The racks are equipped with metered rack PDUs. The primary management goals are to (1) collect and accurately maintain asset data, (2) monitor and manage the power and cooling infrastructure, and (3) monitor server power and CPU usage.

DIFFERENT DCIM DEPLOYED

While both data center operators would likely benefit from DCIM, they may very well choose different solutions. The goal for Data Center A is to more accurately track the assets in the data center. They may choose to pre-load the data they have in spreadsheets and then verify the data. If so, they will want a DCIM which will allow them to load data from spreadsheets. If they feel their current data is not reliable, they may instead choose to start from ground zero and collect all of the data manually.

If so, loading the data from a spreadsheet might be a desirable feature but is no longer a hard requirement.  Since the infrastructure equipment is being monitored using a BMS, they might specify integration with their existing BMS as a requirement for their DCIM.

Data Center B has entirely different requirements. It doesn’t have existing data in spreadsheets, so they need to collect the asset data as quickly and accurately as possible. They may specify auto-discovery as a requirement for their DCIM solution. In addition, they have infrastructure equipment which needs to be monitored, so they will want the DCIM to be able to collect real-time data down to the rack level. Finally, they want to be able to monitor server power and CPU usage, so they will want a DCIM which can communicate with their servers.

Prior to choosing a DCIM solution, spend time determining what information is required to manage the data center. Start with the primary management goals such as increasing availability, meeting service level agreements, increasing data center efficiency and providing upper-level management reports on the current and future state of the data center. Next, determine the information that you need to accomplish these high-level goals. A sample of questions you might ask includes the following:

  • What data do I need to measure availability?
  • What data do I need to measure SLA compliance?
  • What data do I need to measure data center efficiency?
  • What data do I need to forecast capacity of critical resources?
  • What data do I need for upper-level management reports?

DEFINING REQUIREMENTS

These questions will begin to define the scope of the requirements for a DCIM solution. As you start to narrow down the focus of the questions, you will also be defining more specific DCIM requirements.

For example, you might start with a requirement for the DCIM to provide real-time monitoring. This is still rather vague, however, so additional questions must be asked to narrow the focus.

How do you define “real-time” data? To some, real-time data might mean thousands of data points per second with continuous measurement. To others, it might mean measuring data points every few minutes or once an hour. There is a vast difference between a system which does continuous measurement and one which measures once an hour. Without knowing how you are going to use the data, you will likely end up buying the wrong solution. Either you will purchase a solution which doesn’t provide the data granularity you want or you will over-spend on a system which provides continuous measurement when all you want is trending data every 15 minutes.
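The gap between those two notions of “real time” is mostly a data-volume question; here is a quick comparison using an assumed fleet of 1,000 monitored devices reporting 10 points each:

    # Samples per day at different polling intervals (fleet size is assumed).
    devices, points_per_device = 1_000, 10
    for interval_s in (1, 60, 900):          # near-continuous, 1 minute, 15 minutes
        per_day = devices * points_per_device * (86_400 // interval_s)
        print(f"every {interval_s:>3}s -> {per_day:,} samples/day")

Storing, trending and alarming on 864 million samples a day is a very different product (and price) than handling fewer than a million, which is why the granularity question has to be answered before shortlisting vendors.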

What data center equipment do you want to monitor?
The answer to this question may have the biggest impact on the solution you choose. If you have some data center equipment which communicates using SNMP and other equipment which communicates using Modbus, for example, you will want to choose a DCIM solution which can speak both of these protocols. If you want the DCIM tool to retrieve detailed server information, you will want to choose a DCIM solution which can speak IPMI and other server protocols. Prior to talking to potential DCIM vendors, prepare a list of the equipment from which you want to retrieve information.
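A minimal sketch of why the protocol list matters: a DCIM collector typically needs one driver per protocol behind a common interface, and a product that lacks the driver for your gear simply cannot see it. The class names below are invented; real products ship their own driver frameworks.

    # Sketch: one polling interface, one driver per equipment protocol.
    class SnmpDriver:
        def read(self, device):  return {"inlet_temp_c": 24.5}   # would issue SNMP GETs
    class ModbusDriver:
        def read(self, device):  return {"kw": 6.2}              # would read holding registers
    class IpmiDriver:
        def read(self, device):  return {"cpu_pct": 37}          # would query the BMC

    DRIVERS = {"snmp": SnmpDriver(), "modbus": ModbusDriver(), "ipmi": IpmiDriver()}

    def poll(device):
        return DRIVERS[device["protocol"]].read(device)

    print(poll({"name": "crac-2", "protocol": "modbus"}))   # -> {'kw': 6.2}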

Similar questions should be asked for each facet of DCIM — asset management, change management, real-time monitoring, workflow, and so on — to form a specific list of DCIM requirements. Prioritize the information you need so you can narrow your focus to those DCIM solutions which address your most important requirements.

http://www.datacenterknowledge.com/archives/2012/05/31/selecting-dcim-tools-f... 


Thursday, May 24, 2012

Why Do I Need #DCIM ?

by Michael Potts

There are a number of benefits to implementing a Data Center Infrastructure Management (DCIM) solution.  To illustrate this point, consider the primary components of data center management.

In the Design phase, DCIM provides key information in designing the proper infrastructure.  Power, cooling and network data at the rack level help to determine the optimum placement of new servers.  Without this information, data center managers have to rely on guesswork to make key decisions on how much equipment can be placed into a rack.  Too little equipment strands valuable data center resources (space, power and cooling).  Too much equipment increases the risk of shutdown due to exceeding the available resources.

In the Operations phase, DCIM can help to enforce standard processes for operating the data center.  These consistent, repeatable processes reduce operator errors which can account for as much as 80% of system outages.

In the Monitoring phase, DCIM provides operational data, including environmental data (temperature, humidity, air flow), power data (at the device, rack, zone and data center level), and cooling data.  In addition, DCIM may also provide IT data such as server resources (CPU, memory, disk, network).  This data can be used to alert management when thresholds are exceeded, reducing the mean time to repair and increasing availability.

In the Predictive Analysis phase, DCIM analyzes the key performance indicators from the monitoring phase as key input into the planning phase. Capacity planning decisions are made during this phase.  Tracking the usage of key resources over time, for example, can provide valuable input to the decision on when to purchase new power or cooling equipment.

In the Planning phase, DCIM can be used to analyze “what if” scenarios such as server refreshes, impact of virtualization, and equipment moves, adds and changes. If you could summarize DCIM in one word, it would be information.  Every facet of data center management revolves around having complete and accurate information.

DCIM provides the following benefits:

•  Access to accurate, actionable data about the current state and future needs of the data center

•  Standard procedures for equipment changes

•  Single source of truth for asset management

•  Better predictability for space, power and cooling capacity means increased time to plan

•  Enhanced understanding of the present state of the power and cooling infrastructure and environment increases the overall availability of the data center

•  Reduced operating cost from energy usage effectiveness and efficiency

In his report, Datacenter Infrastructure Management Software: Monitoring, Managing and Optimizing the Datacenter, Andy Lawrence summed up the impact of DCIM by saying “We believe it is difficult to achieve the more advanced levels of datacenter maturity, or of datacenter effectiveness generally, without extensive use of DCIM software.”  He went on to add that “The three main drivers of investment in DCIM software are economics (mainly through energy-related savings), improved availability, and improved manageability and flexibility.”

One of the primary benefits of DCIM is the ability to answer questions such as the following:

1. Where is my data center asset located?

2. Where is the best place to place a new server?

3. Do I have sufficient space, power, cooling and network connectivity to meet my needs for the coming months?  Next year?  Next five years?

4. An event occurred in the data center — what happened, what services are impacted, where should the technicians go to resolve the issue?

5. Do I have underutilized resources in my data center?

6. Will I have enough power or cooling under fault or maintenance conditions?

Without the information provided by DCIM, the questions become much more difficult to answer.

Friday, May 18, 2012

#Datacenters are becoming software defined

‘Data centers are becoming software defined’ 
Data centers around the world are increasingly being virtualized, and organizations are restructuring business processes in line with infrastructure transformations. Raghu Raghuram, SVP & GM, Cloud Infrastructure and Management, VMware, tells InformationWeek about the software defined data center, and how virtualized infrastructure will bring in more flexibility and efficiency.

 By Brian Pereira, InformationWeek, May 18, 2012

What are the transformations that you observe in the data center? Can you update us on the state of virtualization?

Firstly, there is a transformation from physical to virtual, in all parts of the world. In terms of workloads in the data center, the percent (of workloads) running on virtualized infrastructure as against physical servers has crossed 50 percent. So, there are more applications running in the virtual environment, which means the operating system no longer sees the physical environment. This is a huge change in the data center. The virtual machine has not only become the unit of computing but also the unit of management.

Earlier, operational processes were built around a physical infrastructure but now these are built around virtualized infrastructure. The organization of the data center team is also changing. Earlier, you’d have an OS team, a server team, and teams for network, storage etc. Now virtualization forces all these things to come together. So, it is causing changes not only in the way hardware and software works, but also in the people and processes.

The second change is that data center architecture has gone from vertical to horizontal. You have the hardware and the virtualization layer with applications on top of it. Hence, you can manage the infrastructure separately from managing the applications. You can also take the application from your internal data center and put it on Amazon Web services or the external cloud. Thus, the nature of application management has changed.

How is the management of data center infrastructure changing?

The traditional and physical data center was built on the notions of vertical integration/silos. You’d put agents in the application and at the hardware and operating system levels. And then you’d pull all that information together (from the agents) and create a management console. And when the next application came into the data center you’d create another vertical stack for it, and so on. This led to the proliferation of management tools. And there was a manager of all the management tools. As a result, you layered on more complexity instead of solving the management problem. The second problem was that operating systems did not have native manageability built into them. So, management was an afterthought. With virtualization, we were the first modern data center platform. We built manageability into the platform. For instance, the VMware distributed resource scheduler (DRS) automatically guarantees resources — you don’t need an external workload manager. We have built in high availability so you don’t need an external clustering manager. Our goal has been to eliminate management, and wherever possible turn it into automation.

We are going from a world of agents and reactive type of management to a world of statistical techniques for managing data. One of our customers has almost 100,000 virtual machines and they generate multiple million metrics every five minutes. There’s a flood of information, so you can’t use the conventional management way of solving a problem. You need to do real-time management and collect metrics all the time. We use statistical learning techniques to understand what’s going on. It is about proactive management and spotting problems before these occur. And this is a feature of VMware vCenter Operations.
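Raghuram’s point about statistical techniques can be made concrete with a very small example: keep a running baseline per metric stream and flag samples that fall far outside it, instead of hand-tuning a threshold for every VM. This is a generic z-score sketch, not the actual vCenter Operations algorithm.

    # Sketch: flag anomalous samples with a rolling z-score (generic technique,
    # not VMware's implementation).
    from collections import deque
    from statistics import mean, stdev

    window = deque(maxlen=60)              # recent samples of one metric stream

    def is_anomaly(value, threshold=3.0):
        if len(window) >= 10:              # wait for a minimal baseline
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                return True                # keep the outlier out of the baseline
        window.append(value)
        return False

    for v in [48, 51, 50, 49, 52, 50, 47, 51, 50, 49, 50, 95]:
        if is_anomaly(v):
            print("anomalous sample:", v)   # -> anomalous sample: 95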

What is the Software defined data center all about?

This is a bit forward looking. Increasingly, the data center is being standardized around x86 (architecture). And all the infrastructure functions that used to run on specialized ASICs (Application Specific Integrated Circuits) are now running on standard x86 hardware and being implemented via software as virtual machines. For instance, Cisco and CheckPoint are shipping virtual firewalls; HP is shipping virtual IDS devices; and RiverBed is shipping virtual WAN acceleration (optimization). All of it is becoming virtualized software; now the entire data center is becoming one form factor, on x86. As it is all software, they can be programmed more easily. And hence it can be automated. So, when an application comes into the data center, you automatically provision the infrastructure that it needs. You can grow/shrink that infrastructure. Scale it up or out. And configure policies or move them as the application moves.

All of this constitutes the software defined data center. It’s a new way of automating and providing the infrastructure needed for applications, as the applications themselves scale and react to end users. We see that rapidly emerging.

This is a concept in the large cloud data centers, but we want to bring it to mainstream enterprises and service providers.

VMware is a pioneer for virtualization of servers. But what are you offering for virtualized networking, security and storage?

There are smaller players such as Nicira Networks, which are actively pursuing network virtualization. Last year we announced (in collaboration with Cisco) a technology called VXLAN (Virtual eXtensible LAN). The difference between us and Nicira (network virtualization) is that we are doing it so that it works well with existing networking gear. We are creating a virtual network that is an overlay to the physical network. As the applications are set up, the networking can be done in a virtualized fashion. Load balancing and all the network services can happen in a virtualized fashion, without the need to reconfigure the physical network.

But you also need virtualized security services and virtualized network services for this. We have a line of products called vShield that offers this. It has a load balancer, NAT, edge and stateful firewalls, an application firewall and so on. Then, you have the management stack that ties it all together. We call this software defined networking and security. And we are doing the same thing with storage with Virtual Storage Appliance. We also offer storage provisioning automation with vSphere and vCloud Director.

What is the key focus for VMware this year?

Our slogan is business transformation through IT transformation. We want to enable businesses to transform themselves, by transforming IT. We talk about transformations with mobility, new application experiences, and modernizing infrastructure to make it more agile and cost-effective. These are the fundamental transformations that engage us with customers. So, it starts with virtualization but it doesn’t stop there. The faster customers progress through virtualization, the faster they can progress through the remaining stages of the transformation.

Wednesday, May 16, 2012

International #DataCenter Real Estate Overview

International Data Center Real Estate Overview

 May 16, 2012

Although the U.S. still remains in some sense the hub of the data center market, other regions around the world are exhibiting their own dynamics, particularly in Asia and Europe. Demand for data center services has yet to plateau, so companies continually need to expand their IT capabilities, whether through new data center construction or expansion or through outsourcing to the cloud (meaning another company somewhere must have or add data center capacity). Thus, demand for data center real estate is correspondingly strong—but, naturally, it varies around the globe depending on a variety of factors. The following are some key international areas in the data center real estate market.

Europe

Given its current fiscal straits, Europe exemplifies the present overall strength of the data center market. The continent is currently struggling to resolve its crushing debt load and to determine whether it will continue as a consolidated entity (the EU) or as separate states. A breakup of the EU may well be in the offing, as CNBC reports (“Stocks Post Loss on Greece, S&P at 3-Month Low”): “‘I think people need to prepare for the eventual removal of Greece from the EU and investors are getting ahead of that before they’re forced to,’ said Matthew McCormick, vice president and portfolio manager at Bahl & Gaynor Investment Counsel.” Greece may be the first—but not last—nation to leave or be booted from the union.

But despite these economic and political problems, the data center industry is still seeing growth in this region. In the colocation sector, service provider Interxion reported good news for the first quarter of this year, according to DatacenterDynamics (“Interxion reports strong Q1 results despite Europe’s economy”): Interxion’s CEO, David Ruberg, stated, “Recurring revenue increased by more than 4% over the quarter ended December 31, 2011, and strong bookings in the quarter reflect a continued healthy market for our services, despite sustained economic weakness in Europe.”

Furthermore, even though Europe is in the midst of a financial crisis, possibly spilling over to a political one, portions of it remain relatively low-risk locations for new data centers, according to Cushman & Wakefield and hurleypalmerflatt. The Data Centre Risk Index 2012 ranks the U.K. and Germany as second and third, respectively, for lowest-risk regions to build data centers (behind the U.S.). This report examines risks such as political instability, energy costs, potential for natural disasters and other factors that could endanger a data center operation.

Given the increasing reliance of western economies on IT services provided by data centers, the real estate market will likely withstand minor economic or even political reorganization in Europe. Of course, should the economic problems result in a more serious situation, all bets are off.

Nordic Region

Within Europe, the Nordic countries are a growing market all their own. Offering a cool climate (great for free cooling to reduce energy consumption) and (in some areas) abundant renewable energy, these nations are an increasingly attractive (and, concomitantly, less risky) option for companies looking to build new facilities. On the 2012 Data Centre Risk Index, Iceland ranked an impressive number four (even despite its recent volcanic activity that shut down many European airports); Sweden ranked eighth, followed by Finland at nine and Norway at twelve.

Nevertheless, even though the region has seen expansion of the data center market this year, not everything works in its favor: according to DatacenterDynamics (“Nordics make strong entrance in data center risk index”), “Norway…ranked as the most politically stable country and also measured a high availability of natural resources and renewable energy sources but its high cost of labour and relatively low connectivity pushed it down on the list.” Iceland was cited for political instability and a lack of bandwidth capacity as working against it. Overall, however, the Nordic countries are the rising star of the European region.

Asia

Asia represents the area of greatest expansion in the data center market, as the Data Center Journal reported (“Fastest-Growing Data Center Market in 2012”). In particular, Hong Kong, Shanghai and Singapore demonstrate the strongest growth, but other areas are also growing. Asian nations do not yet match western nations—particularly the U.S.—in overall development, but their large populations (particularly in China and India) and growing demand for IT services are driving demand for data center space. On Cushman & Wakefield and hurleypalmerflatt’s Data Centre Risk Index for 2012, Hong Kong placed highest among Asian regions, ranking seventh. South Korea ranked 13, Thailand 15 and Singapore 17. China and India, despite their growth potential, ranked near the bottom of the list: 26th place for China and 29th for India out of 30 evaluated nations.

Despite the risks, China in particular is seeing growth, partly as companies from other nations (like major corporations in the U.S., including IBM and Google) build facilities in hopes of tapping the emerging markets in the region.

South America

South America is another region with mixed conditions, like Asia. Despite its own significant growth, the region poses many risks to companies building data centers. Brazil, the only South American nation represented in the Data Centre Risk Index, scored at the bottom of the heap. DatacenterDynamics (“Report: Brazil is riskiest data center location”) notes that although “the report’s authors based their judgment on more than a dozen parameters, high energy cost and difficulty of doing business stood out as key risk factors in operating data centers in Brazil.” Other risk factors, such as political instability and high corporate taxes, also weighed the nation down to the bottom of the rankings. Nevertheless, Brazil will likely lead in growth in this region, according to the Cushman & Wakefield and hurleypalmerflatt report. In addition, Mexico will also see significant growth (the nation ranked only a few slots above Brazil according to risk). Although Mexico is technically not geographically a part of South America, it may be best lumped with that region.

Middle East and Africa

The only country outside the above-mentioned regions that ranks in the Data Centre Risk Index is the Middle Eastern nation of Qatar, which ranked a surprising sixth place, just behind Canada. Needless to say, few nations in this region represent prime data center real estate, owing to political instability, ongoing wars and other factors. Populations in these regions are demanding IT services, and opportunities are available, but pending some relief from strife (particularly in the Middle East, but also in some African nations), growth will be restrained.

Data Center Market Conclusions

Growth in the data center real estate market is still strong in North America, as businesses and consumers continue to demand more and more services. Europe, despite its economic difficulties (and the U.S. isn’t far behind), is nevertheless seeing growth as well. Asia, concomitant with its emerging markets, is the growth leader in the data center sector (meaning certain portions of it—it is a huge region). But these conditions tend to indicate that the data center market overall is simply in its growth stage. Eventually, growth will level out as rising demand meets the ceiling of resource (particularly energy) availability.

Photo courtesy of Martyn Wright

Tuesday, May 15, 2012

#Uptime: Greenpeace wants #Datacenter industry to do more

Analyst says energy efficiency is great but it is not enough

15 May 2012 by Yevgeniy Sverdlik - DatacenterDynamics

 

A Greenpeace analyst commended the data center industry for gains in energy efficiency it had made over the recent years, but said the environmentalist organization wanted the industry to do more.

Gary Cook, senior IT analyst, Greenpeace.

“With all respect to the great amount of progress you’ve made in energy efficient design … we’re asking you to do more,” Gary Cook, senior IT analyst at Greenpeace, said during a keynote address at the Uptime Institute’s annual symposium in Santa Clara, California, Monday.

“You have an important role to play in changing our economy,” he said. The world is becoming increasingly reliant on data centers, and both governments and energy companies are working hard to attract them.

Greenpeace wants data center operators to prioritize clean energy sources for their power and to demand cleaner fuel mix from their energy providers.

Citing figures from a report by the non-profit Climate Group, Cook said the IT industry was responsible for about 2% of global carbon emissions. Applying IT could result in a reduction of carbon emissions by 15%, however, the same report concluded.

These applications include examples like telecommuting instead of driving or sending an email instead of delivering a physical letter.

If the data centers that the world is already so dependent on, and will become more dependent on, were to run on clean energy, “this could be a huge win,” Cook said. “People in this room could be leading the charge in driving the clean-energy economy.”

To help the data center industry identify clean energy sources, Greenpeace is planning to create a Clean Energy Guide for data centers, Cook said. The guide will evaluate renewable energy choices for key data center regions.

In April, Greenpeace released a report titled “How clean is my cloud”, in which it ranked 15 companies based on their energy policies. Rating categories included the amount of coal and nuclear energy they used, the level of transparency about their energy use, infrastructure-siting policy, energy efficiency and greenhouse-gas mitigation, and the use of renewable energy and clean-energy advocacy.

This was the second such report the organization had put out.

Of the 15 well-known companies, Amazon, Apple and Microsoft were identified as companies relying on dirty fuel. Google, Facebook and Yahoo! received more positive reviews from Greenpeace.

Response from the industry was mixed. Companies that received high marks were proud of the achievement, and companies that did not either declined to comment or questioned the accuracy of the calculations Greenpeace used to arrive at its conclusions.

Cook mentioned Facebook during his keynote at the symposium, saying the company had improved in the environmentalist organization’s eyes. While its Oregon and North Carolina data centers still rely heavily on coal energy, the company’s choice to locate its newest data center in Sweden, where the energy mix is relatively clean, was a turn in the right direction.

In a statement issued in December 2011, Facebook announced a commitment to eventually power all of its operations with clean and renewable energy. Cook said the decision to build in Sweden was evidence that the company’s commitment was real.

Thursday, May 10, 2012

Hydrogen-Powered #DataCenters ?

by Jeff Clark

 


Hydrogen-Powered Data Centers?

Although hydrogen generally doesn’t come up in a discussion of alternative energy sources, it is a topic relevant to cleaner energy use. So, what’s the difference, and what is hydrogen’s potential role in the data center? Apple, for instance—in addition to building a 20 megawatt solar farm—is also planning a large hydrogen fuel cell project at its Maiden, North Carolina, facility. Can hydrogen sate the data center industry’s ever growing power appetite?

Hydrogen: A Storage Medium

To get a good idea of the basic properties of hydrogen, just think of the Hindenburg: the giant German airship (dirigible) that plunged to the Earth in flames in 1937. The airship gained its lift from hydrogen gas: a very light (i.e., not dense), inflammable gas. Although hydrogen is plentiful (think water, hydrocarbons and so on), it is seldom found in its diatomic elemental form (H2). So, unlike coal, for example, hydrogen is not a readily obtainable fuel source. It can, however, be used as a means of storing or transporting energy—and this is its primary use. As a DatacenterDynamics interview with Siemens (“Using hydrogen to store energy”) notes, “Hydrogen is a multi-purpose energy carrier… Also, hydrogen as a storage medium is a concept that has already been tested in several domains.”

Hydrogen is thus in some ways like gasoline: it is simply a chemical that certain types of equipment can convert into energy and various byproducts. But it’s the nature of these byproducts that makes hydrogen appealing.

Clean Energy

Under ideal conditions, the burning of hydrogen (think Hindenburg) produces water and heat as its only products, making its use in internal combustion engines preferable (in this sense, at least) to fossil fuels. But even more useful would be a process that converts hydrogen more directly into energy and water—enter the fuel cell. A fuel cell splits hydrogen into protons and electrons, creating an electrical current. The protons combine with oxygen in a catalytic environment to yield water. What more could a data center ask for? (For a simple animation depicting the operating principles of a fuel cell, see the YouTube video Hydrogen Fuel Cell.)

The fuel cell produces electricity as long as hydrogen fuel is supplied to it. Its characteristics from a physical standpoint are nearly ideal: electricity on demand with virtually no production of carbon compounds or other emissions. Because hydrogen can be stored, it represents energy that can be consumed as needed, not necessarily right away (as in the case of solar or wind power). Sounds great—but as always, there are some caveats.

Getting Your Hands on Hydrogen

As mentioned above, hydrogen does not exist naturally in a manner that makes it readily available as a fuel. Practically speaking, it must be produced from other materials, such as water or fossil fuels. The two main processes are electrolysis of water, whereby an electric current splits water molecules into elemental oxygen (O2) and hydrogen (H2), and steam reforming of hydrocarbons. In each case, energy input of some kind is required, either in the form of electrical energy to electrolyze water or in the form of stored chemical energy in the form of a hydrocarbon. Electrolysis is one means of storing energy from renewable resources like solar or wind, avoiding entirely the need for mined resources like natural gas or coal. Naturally, the efficiency of these processes varies depending on the particulars of the process, the equipment used and so forth.

Alternative processes for generating hydrogen are under investigation—such as biomass production—but these processes do not yet generate hydrogen practically on large scales. Whatever the generation approach, however, the gas must then be stored for transport or for later use.

Storing Hydrogen—A Slight Problem

Hydrogen is an inflammable gas (again, think Hindenburg), but it is not necessarily more dangerous than, say, gasoline vapors. The main problem with storing hydrogen is that compared with other fuels—such as gasoline—it contains much less energy per unit volume (even though it contains more energy per unit mass). Practical (in terms of size) storage requires that the hydrogen be compressed, preferably into liquid form for maximum density. And herein lies the main difference relative to liquid fossil fuels: the container not only holds an inflammable material, but it is also pressurized, creating its own unique challenges and dangers. Fuel leakage into the atmosphere is more problematic, and some environmentalists even claim that this leakage, were hydrogen used on a large scale, could have harmful repercussions on the environment.

Even in liquid form, hydrogen still lags other fuels in energy stored per unit volume. Thus, when implemented in automobiles, for instance, fuel-cell-powered automobiles lack the range of conventional gasoline-powered vehicles. And then there’s the cost of fuel cell technology, which is currently prohibitive. Claims of falling fuel cell prices are dubious, given the unsustainable subsidies from the federal government and some states (like California).
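The mass-versus-volume trade-off described above is worth putting numbers on. The figures below are approximate, commonly cited lower-heating-value numbers, rounded, and are included only to show the order of magnitude of the gap.

    # Approximate, rounded energy densities (lower heating value).
    fuels = {                        # (MJ per kg, MJ per litre)
        "hydrogen (liquid)": (120, 8.5),
        "gasoline":          (44, 32),
    }
    for name, (per_kg, per_l) in fuels.items():
        print(f"{name:>17}: ~{per_kg} MJ/kg, ~{per_l} MJ/L")
    # Roughly 3x better than gasoline by mass but roughly 4x worse by volume,
    # hence the need for high-pressure or cryogenic storage.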

Hydrogen for Data Centers

Apple’s Maiden data center is the highest-profile facility implementing fuel cell technology. According to NewsObserver.com (“Apple plans nation’s biggest private fuel cell energy project at N.C. data center”), Apple will generate hydrogen from natural gas and will employ 24 fuel cell modules. The project is slated for an output of 4.8 megawatts—much less than the data center’s total power consumption, but still a sizable output.

The use of natural gas to generate hydrogen still creates carbon emissions, so this project won’t satisfy everyone (although whether carbon dioxide is as bad as its current politicized reputation would suggest is hardly certain). Nevertheless, like Apple’s large solar farm at the same site, this hydrogen fuel cell project will be a good test of the practicability of hydrogen in the context of data centers.

Hydrogen: Will Most Companies Care?

Jumping into the power generation arena is not something most companies (particularly small and midsize companies) can afford to do—let alone have an interest in doing. So, pending availability of some affordable, prepackaged hydrogen fuel cell system, don’t expect most companies to deploy such a project at their data center sites. Currently, large companies like Apple and Google are among the few dabbling in energy in addition to their primary business. Most companies will, no doubt, prefer to simply plug their data centers into the local utility and let someone else worry about where the energy comes from—these companies wish to focus on their primary business interests.

Conclusion: What Exactly Does Hydrogen Mean for the Data Center?

Hydrogen fuel cells offer some major improvements in controlling emissions, and hydrogen delivers some benefits as a means of storing and transporting energy. But fuel cell technology lacks the economic efficiency of other, traditional power sources, so it has a ways to go before it can attain the status of coal or nuclear, or even smaller-scale sources like solar. Furthermore, the applicability of hydrogen (as such) to data centers is unclear. Power backup seems the most likely present candidate for application of hydrogen and fuel cells. In time, Apple’s project may demonstrate the practicality of electricity via natural gas as another possibility. Until then, however, the industry must wait to see whether this technology matures—and becomes economically feasible.

Photo courtesy of Zero Emission Resource Organisation