Thursday, May 31, 2012

Selecting a #DCIM Tool to Fit Your #DataCenter?

How Do I Select a DCIM Tool to Fit My Data Center?

  • By: Michael Potts


Although similar in many respects, every data center is unique. In choosing a Data Center Infrastructure Management (DCIM) solution, different data center managers might therefore make very different choices based on their needs. It is somewhat analogous to two people choosing a lawn care service. One might simply want the grass mowed once a week. The other might want edging, fertilizing, seeding and other services in addition to mowing. As a result, they may choose different lawn service companies or, at the least, expect to pay very different amounts for the service they will be receiving. Before choosing a DCIM solution, it is important to first know what you want to receive from it.

It is also important to remember that DCIM cannot single-handedly do the job of data center management; it is only part of the overall management solution. While a DCIM tool, or a suite of tools working together, is a valuable component, a complete management solution must also incorporate procedures which allow the DCIM tools to be used effectively.

CHOOSING A DCIM SOLUTION

It is important to remember that DCIM solutions are about providing information. The question which must be asked (and answered) prior to choosing a DCIM solution is “What information do I need in order to manage my data center?” The answer to this question is the key to helping you choose the DCIM solution which will best suit your needs. Consider the following two data centers looking to purchase a DCIM solution.

DATA CENTER A

Data Center A has a lot of older, legacy equipment which is being monitored using an existing Building Management System (BMS). The rack power strips do not have monitoring capability. The management staff currently tracks assets using spreadsheets and Visio drawings. The data has not been meticulously maintained, however, and has questionable accuracy. The primary management goal is getting a handle on the assets they have in the data center.

DATA CENTER B

Data Center B is a new data center. It has new infrastructure equipment which can be remotely monitored through Simple Network Management Protocol (SNMP). The racks are equipped with metered rack PDUs. The primary management goals are to (1) collect and accurately maintain asset data, (2) monitor and manage the power and cooling infrastructure, and (3) monitor server power and CPU usage.

DIFFERENT DCIM DEPLOYED

While both data center operators would likely benefit from DCIM, they may very well choose different solutions. The goal for Data Center A is to more accurately track the assets in the data center. They may choose to pre-load the data they have in spreadsheets and then verify the data. If so, they will want a DCIM which will allow them to load data from spreadsheets. If they feel their current data is not reliable, they may instead choose to start from ground zero and collect all of the data manually.

If so, loading the data from a spreadsheet might be a desirable feature but is no longer a hard requirement.  Since the infrastructure equipment is being monitored using a BMS, they might specify integration with their existing BMS as a requirement for their DCIM.
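
As a rough sketch of what the spreadsheet-import path can look like, the hypothetical Python below reads an exported CSV of assets and separates complete rows from rows that still need to be verified on the floor. The file name and column names are invented for illustration; they are not tied to any particular DCIM product.

```python
import csv

# Columns we assume every asset record needs before it can be trusted (illustrative only).
REQUIRED = ("asset_id", "model", "rack", "rack_unit")

def load_assets(path):
    """Split spreadsheet rows into trusted records and rows needing manual verification."""
    trusted, needs_review = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if all(row.get(col, "").strip() for col in REQUIRED):
                trusted.append(row)
            else:
                needs_review.append(row)  # incomplete data: confirm on the data center floor
    return trusted, needs_review

if __name__ == "__main__":
    ok, review = load_assets("assets.csv")  # hypothetical export from the existing spreadsheets
    print(f"{len(ok)} assets imported, {len(review)} rows flagged for manual verification")
```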

Data Center B has entirely different requirements. It doesn’t have existing data in spreadsheets, so they need to collect the asset data as quickly and accurately as possible. They may specify auto-discovery as a requirement for their DCIM solution. In addition, they have infrastructure equipment which needs to be monitored, so they will want the DCIM to be able to collect real-time data down to the rack level. Finally, they want to be able to monitor server power and CPU usage, so they will want a DCIM which can communicate with their servers.

Prior to choosing a DCIM solution, spend time determining what information is required to manage the data center. Start with the primary management goals such as increasing availability, meeting service level agreements, increasing data center efficiency and providing upper-level management reports on the current and future state of the data center. Next, determine the information that you need to accomplish these high-level goals. A sample of questions you might ask includes the following:

  • What data do I need to measure availability?
  • What data do I need to measure SLA compliance?
  • What data do I need to measure data center efficiency?
  • What data do I need to forecast capacity of critical resources?
  • What data do I need for upper-level management reports?

DEFINING REQUIREMENTS

These questions will begin to define the scope of the requirements for a DCIM solution. As you start to narrow down the focus of the questions, you will also be defining more specific DCIM requirements.

For example, you might start with a requirement for the DCIM to provide real-time monitoring. This is still rather vague, however, so additional questions must be asked to narrow the focus.

How do you define “real-time” data? To some, real-time data might mean thousands of data points per second with continuous measurement. To others, it might mean measuring data points every few minutes or once an hour. There is a vast difference between a system which does continuous measurement and one which measures once an hour. Without knowing how you are going to use the data, you will likely end up buying the wrong solution. Either you will purchase a solution which doesn’t provide the data granularity you want or you will over-spend on a system which provides continuous measurement when all you want is trending data every 15 minutes.
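
To make the difference concrete, here is a quick back-of-the-envelope comparison (plain Python, purely illustrative) of how many readings a single sensor produces per day at different polling intervals:

```python
# Readings per sensor per day at different polling intervals.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

for label, interval_s in [("every second", 1),
                          ("every 15 minutes", 15 * 60),
                          ("hourly", 60 * 60)]:
    print(f"{label}: {SECONDS_PER_DAY // interval_s} readings/day")
# Prints 86400, 96 and 24 readings per day respectively; multiply by thousands of
# sensors and the storage and licensing implications of "real-time" become obvious.
```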

What data center equipment do you want to monitor? The answer to this question may have the biggest impact on the solution you choose. If you have some data center equipment which communicates using SNMP and other equipment which communicates using Modbus, for example, you will want to choose a DCIM solution which can speak both of these protocols. If you want the DCIM tool to retrieve detailed server information, you will want to choose a DCIM solution which can speak IPMI and other server protocols. Prior to talking to potential DCIM vendors, prepare a list of the equipment from which you want to retrieve information.
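
As an illustration of why the protocol list matters, a multi-protocol collector typically hides each protocol behind a common "read one value" interface. The sketch below is hypothetical Python: the device addresses, OID and register number are placeholders, the SNMP read shells out to the net-snmp `snmpget` utility (assumed to be installed), and the Modbus path is left as a stub rather than a real library call.

```python
import subprocess

def read_snmp(host, oid, community="public"):
    """Read one value over SNMP by shelling out to the net-snmp 'snmpget' tool."""
    result = subprocess.run(["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def read_modbus(host, register):
    """Stub: a real collector would use a Modbus library here (e.g. pymodbus)."""
    raise NotImplementedError("Modbus not implemented in this sketch")

# Hypothetical inventory: one metric per device, tagged with the protocol to use.
DEVICES = [
    {"name": "rack-pdu-01", "proto": "snmp", "host": "10.0.0.21",
     "oid": "1.3.6.1.4.1.99999.1.1.0"},   # placeholder OID; use the vendor's MIB
    {"name": "crac-03", "proto": "modbus", "host": "10.0.0.35", "register": 40001},
]

READERS = {
    "snmp": lambda d: read_snmp(d["host"], d["oid"]),
    "modbus": lambda d: read_modbus(d["host"], d["register"]),
}

for device in DEVICES:
    try:
        print(device["name"], READERS[device["proto"]](device))
    except Exception as exc:  # a real DCIM would log this and raise an alert
        print(device["name"], "read failed:", exc)
```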

Similar questions should be asked for each facet of DCIM — asset management, change management, real-time monitoring, workflow, and so on — to form a specific list of DCIM requirements. Prioritize the information you need so you can narrow your focus to those DCIM solutions which address your most important requirements.

http://www.datacenterknowledge.com/archives/2012/05/31/selecting-dcim-tools-f... 


Thursday, May 24, 2012

Why Do I Need #DCIM?

by Michael Potts

There are a number of benefits to implementing a Data Center Infrastructure Management (DCIM) solution. To illustrate this point, consider the primary components of data center management.

In the Design phase, DCIM provides key information in designing the proper infrastructure.  Power, cooling and network data at the rack level help to determine the optimum placement of new servers.  Without this information, data center managers have to rely on guesswork to make key decisions on how much equipment can be placed into a rack.  Too little equipment strands valuable data center resources (space, power and cooling).  Too much equipment increases the risk of shutdown due to exceeding the available resources.

In the Operations phase, DCIM can help to enforce standard processes for operating the data center.  These consistent, repeatable processes reduce operator errors which can account for as much as 80% of system outages.

In the Monitoring phase, DCIM provides operational data, including environmental data (temperature, humidity, air flow), power data (at the device, rack, zone and data center level), and cooling data. In addition, DCIM may also provide IT data such as server resources (CPU, memory, disk, network). This data can be used to alert management when thresholds are exceeded, reducing the mean time to repair and increasing availability.
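
A minimal sketch of the threshold-alert idea described above, with invented sensor names and limits (no particular DCIM product implied):

```python
# Hypothetical per-sensor limits: name -> (warning level, critical level).
THRESHOLDS = {
    "rack12_inlet_temp_c": (27.0, 32.0),
    "rack12_power_kw": (4.5, 5.0),
}

def evaluate(readings):
    """Return (sensor, severity) pairs for readings that exceed their limits."""
    alerts = []
    for sensor, value in readings.items():
        warn, crit = THRESHOLDS.get(sensor, (None, None))
        if crit is not None and value >= crit:
            alerts.append((sensor, "CRITICAL"))
        elif warn is not None and value >= warn:
            alerts.append((sensor, "WARNING"))
    return alerts

print(evaluate({"rack12_inlet_temp_c": 29.5, "rack12_power_kw": 5.2}))
# [('rack12_inlet_temp_c', 'WARNING'), ('rack12_power_kw', 'CRITICAL')]
```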

In the Predictive Analysis phase, DCIM analyzes the key performance indicators from the monitoring phase as key input into the planning phase. Capacity planning decisions are made during this phase. Tracking the usage of key resources over time, for example, can provide valuable input to the decision on when to purchase new power or cooling equipment.
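
As a toy example of that kind of trend analysis, the snippet below fits a straight line to six months of (invented) peak power readings and estimates when the trend would reach the installed capacity; real capacity planning would of course account for seasonality, redundancy and planned growth.

```python
# Naive linear-trend capacity forecast (illustrative numbers only).
monthly_peak_kw = [310, 318, 329, 335, 347, 356]  # last six months of peak draw
capacity_kw = 450                                 # installed power capacity

n = len(monthly_peak_kw)
x_mean = (n - 1) / 2
y_mean = sum(monthly_peak_kw) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(monthly_peak_kw)) / \
        sum((x - x_mean) ** 2 for x in range(n))  # kW of growth per month

months_left = (capacity_kw - monthly_peak_kw[-1]) / slope
print(f"Growing ~{slope:.1f} kW/month; ~{months_left:.0f} months until {capacity_kw} kW is reached")
```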

In the Planning phase, DCIM can be used to analyze “what if” scenarios such as server refreshes, impact of virtualization, and equipment moves, adds and changes. If you could summarize DCIM in one word, it would be information.  Every facet of data center management revolves around having complete and accurate information.

DCIM provides the following benefits:

•  Access to accurate, actionable data about the current state and future needs of the data center

•  Standard procedures for equipment changes

•  Single source of truth for asset management

•  Better predictability for space, power and cooling capacity means increased time to plan

•  Enhanced understanding of the present state of the power and cooling infrastructure and environment increases the overall availability of the data center

•  Reduced operating costs through more efficient and effective energy usage

In his report, Datacenter Infrastructure Management Software: Monitoring, Managing and Optimizing the Datacenter, Andy Lawrence summed up the impact of DCIM by saying “We believe it is difficult to achieve the more advanced levels of datacenter maturity, or of datacenter effectiveness generally, without extensive use of DCIM software.” He went on to add that “The three main drivers of investment in DCIM software are economics (mainly through energy-related savings), improved availability, and improved manageability and flexibility.”

One of the primary benefits of DCIM is the ability to answer questions such as the following:

1. Where are my data center assets located?

2. Where is the best place to put a new server?

3. Do I have sufficient space, power, cooling and network connectivity to meet my needs for the coming months?  Next year?  Next five years?

4. An event occurred in the data center — what happened, what services are impacted, where should the technicians go to resolve the issue?

5. Do I have underutilized resources in my data center?

6. Will I have enough power or cooling under fault or maintenance conditions?

Without the information provided by DCIM, the questions become much more difficult to answer.

Friday, May 18, 2012

#Datacenters are becoming software defined

‘Data centers are becoming software defined’ 
Data centers around the world are increasingly being virtualized and organizations are restructuring business processes in line with infrastructure transformations. Raghu Raghuram, SVP & GM, Cloud Infrastructure and Management at VMware, tells InformationWeek about the software defined data center, and how virtualized infrastructure will bring in more flexibility and efficiency.

 By Brian Pereira, InformationWeek, May 18, 2012

What are the transformations that you observe in the data center? Can you update us on the state of virtualization?

Firstly, there is a transformation from physical to virtual, in all parts of the world. In terms of workloads in the data center, the percent (of workloads) running on virtualized infrastructure as against physical servers has crossed 50 percent. So, there are more applications running in the virtual environment, which means the operating system no longer sees the physical environment. This is a huge change in the data center. The virtual machine has not only become the unit of computing but also the unit of management.

Earlier, operational processes were built around a physical infrastructure but now these are built around virtualized infrastructure. The organization of the data center team is also changing. Earlier, you’d have an OS team, a server team, and teams for network, storage etc. Now virtualization forces all these things to come together. So, it is causing changes not only in the way hardware and software works, but also in the people and processes.

The second change is that data center architecture has gone from vertical to horizontal. You have the hardware and the virtualization layer with applications on top of it. Hence, you can manage the infrastructure separately from managing the applications. You can also take the application from your internal data center and put it on Amazon Web services or the external cloud. Thus, the nature of application management has changed.

How is the management of data center infrastructure changing?

The traditional and physical data center was built on the notions of vertical integration/silos. You’d put agents in the application and at the hardware and operating system levels. And then you’d pull all that information together (from the agents) and create a management console. And when the next application came into the data center you’d create another vertical stack for it, and so on. This led to the proliferation of management tools. And there was a manager of all the management tools. As a result, you layered on more complexity instead of solving the management problem. The second problem was that operating systems did not have native manageability built into them. So, management was an afterthought. With virtualization, we were the first modern data center platform. We built manageability into the platform. For instance, the VMware distributed resource scheduler (DRS) automatically guarantees resources — you don’t need an external workload manager. We have built in high availability so you don’t need an external clustering manager. Our goal has been to eliminate management, and wherever possible turn it into automation.

We are going from a world of agents and reactive type of management to a world of statistical techniques for managing data. One of our customers has almost 100,000 virtual machines and they generate multiple million metrics every five minutes. There’s a flood of information, so you can’t use the conventional management way of solving a problem. You need to do real-time management and collect metrics all the time. We use statistical learning techniques to understand what’s going on. It is about proactive management and spotting problems before they occur. And this is a feature of VMware vCenter Operations.
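
To give a rough sense of what "statistical techniques" can mean here, the sketch below flags a metric whose latest value sits far outside its recent history (a simple z-score check). It is a generic illustration with made-up metric names and numbers, not a description of how vCenter Operations actually works.

```python
import statistics

def anomalies(history, latest, z_limit=3.0):
    """Flag metrics whose latest value is more than z_limit standard deviations from its history."""
    flagged = {}
    for metric, values in history.items():
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values) or 1e-9  # guard against flat metrics
        z = (latest[metric] - mean) / stdev
        if abs(z) > z_limit:
            flagged[metric] = round(z, 1)
    return flagged

history = {"vm42_cpu_pct": [22, 25, 24, 23, 26, 24],
           "vm42_disk_latency_ms": [4, 5, 4, 6, 5, 4]}
print(anomalies(history, {"vm42_cpu_pct": 25, "vm42_disk_latency_ms": 38}))
# Only the disk latency spike is flagged; the CPU reading is within its normal band.
```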

What is the Software defined data center all about?

This is a bit forward looking. Increasingly, the data center is being standardized around x86 (architecture). And all the infrastructure functions that used to run on specialized ASICs (Application Specific Integrated Circuits) are now running on standard x86 hardware and being implemented via software as virtual machines. For instance, Cisco and CheckPoint are shipping virtual firewalls; HP is shipping virtual IDS devices; and RiverBed is shipping virtual WAN acceleration (optimization). All of it is becoming virtualized software; now the entire data center is becoming one form factor, on x86. As it is all software, they can be programmed more easily. And hence it can be automated. So, when an application comes into the data center, you automatically provision the infrastructure that it needs. You can grow/shrink that infrastructure. Scale it up or out. And configure policies or move them as the application moves.

All of this constitutes the software defined data center. It’s a new way of automating and providing the infrastructure needed for applications, as the applications themselves scale and react to end users. We see that rapidly emerging.

This is a concept in the large cloud data centers, but we want to bring it to mainstream enterprises and service providers.

VMware is a pioneer for virtualization of servers. But what are you offering for virtualized networking, security and storage?

There are smaller players such as Nicira Networks, which are actively pursuing network virtualization. Last year we announced (in collaboration with Cisco) a technology called VXLAN (Virtual eXtensible LAN). The difference between us and Nicira (network virtualization) is that we are doing it so that it works well with existing networking gear. We are creating a virtual network that is an overlay to the physical network. As the applications are set up, the networking can be done in a virtualized fashion. Load balancing and all the network services can happen in a virtualized fashion, without the need to reconfigure the physical network.

But you also need virtualized security services and virtualized network services for this. We have a line of products called vShield that offers this. It has a load balancer, NAT edge, stateful firewall, application firewall and so on. Then, you have the management stack that ties it all together. We call this software defined networking and security. And we are doing the same thing with storage with Virtual Storage Appliance. We also offer storage provisioning automation with vSphere and vCloud Director.

What is the key focus for VMware this year?

Our slogan is business transformation through IT transformation. We want to enable businesses to transform themselves, by transforming IT. We talk about transformations with mobility, new application experiences, and modernizing infrastructure to make it more agile and cost-effective. These are the fundamental transformations that engage us with customers. So, it starts with virtualization but it doesn’t stop there. The faster customers progress through virtualization, the faster they can progress through the remaining stages of the transformation.

Wednesday, May 16, 2012

International #DataCenter Real Estate Overview

International Data Center Real Estate Overview

 May 16, 2012

Although the U.S. still remains in some sense the hub of the data center market, other regions around the world are exhibiting their own dynamics, particularly in Asia and Europe. Demand for data center services has yet to plateau, so companies continually need to expand their IT capabilities, whether through new data center construction or expansion or through outsourcing to the cloud (meaning another company somewhere must have or add data center capacity). Thus, demand for data center real estate is correspondingly strong—but, naturally, it varies around the globe depending on a variety of factors. The following are some key international areas in the data center real estate market.

Europe

Given its current fiscal straits, Europe exemplifies the present overall strength of the data center market. The continent is currently struggling to resolve its crushing debt load and to determine whether it will continue as a consolidated entity (the EU) or as separate states. A breakup of the EU may well be in the offing, as CNBC reports (“Stocks Post Loss on Greece, S&P at 3-Month Low”): “‘I think people need to prepare for the eventual removal of Greece from the EU and investors are getting ahead of that before they’re forced to,’ said Matthew McCormick, vice president and portfolio manager at Bahl & Gaynor Investment Counsel.” Greece may be the first—but not last—nation to leave or be booted from the union.

But despite these economic and political problems, the data center industry is still seeing growth in this region. In the colocation sector, service provider Interxion reported good news for the first quarter of this year, according to DatacenterDynamics (“Interxion reports strong Q1 results despite Europe’s economy”): Interxion’s CEO, David Ruberg, stated, “Recurring revenue increased by more than 4% over the quarter ended December 31, 2011, and strong bookings in the quarter reflect a continued healthy market for our services, despite sustained economic weakness in Europe.”

Furthermore, even though Europe is in the midst of a financial crisis, possibly spilling over to a political one, portions of it remain relatively low-risk locations for new data centers, according to Cushman & Wakefield and hurleypalmerflatt. The Data Centre Risk Index 2012 ranks the U.K. and Germany as second and third, respectively, for lowest-risk regions to build data centers (behind the U.S.). This report examines risks such as political instability, energy costs, potential for natural disasters and other factors that could endanger a data center operation.

Given the increasing reliance of western economies on IT services provided by data centers, the real estate market will likely withstand minor economic or even political reorganization in Europe. Of course, should the economic problems result in a more serious situation, all bets are off.

Nordic Region

Within Europe, the Nordic countries are a growing market all their own. Offering a cool climate (great for free cooling to reduce energy consumption) and (in some areas) abundant renewable energy, these nations are an increasingly attractive (and, concomitantly, less risky) option for companies looking to build new facilities. On the 2012 Data Centre Risk Index, Iceland ranked an impressive number four (even despite its recent volcanic activity that shut down many European airports); Sweden ranked eighth, followed by Finland at nine and Norway at twelve.

Nevertheless, even though the region has seen expansion of the data center market this year, not everything works in its favor: according to DatacenterDynamics (“Nordics make strong entrance in data center risk index”), “Norway…ranked as the most politically stable country and also measured a high availability of natural resources and renewable energy sources but its high cost of labour and relatively low connectivity pushed it down on the list.” Iceland was cited for political instability and a lack of bandwidth capacity as working against it. Overall, however, the Nordic countries are the rising star of the European region.

Asia

Asia represents the area of greatest expansion in the data center market, as the Data Center Journal reported (“Fastest-Growing Data Center Market in 2012”). In particular, Hong Kong, Shanghai and Singapore demonstrate the strongest growth, but other areas are also growing. Asian nations do not yet match western nations—particularly the U.S.—in overall development, but their large populations (particularly in China and India) and growing demand for IT services are driving demand for data center space. On Cushman & Wakefield and hurleypalmerflatt’s Data Centre Risk Index for 2012, Hong Kong placed highest among Asian regions, ranking seventh. South Korea ranked 13, Thailand 15 and Singapore 17. China and India, despite their growth potential, ranked near the bottom of the list: 26th place for China and 29th for India out of 30 evaluated nations.

Despite the risks, China in particular is seeing growth, partly as companies from other nations (like major corporations in the U.S., including IBM and Google) build facilities in hopes of tapping the emerging markets in the region.

South America

South America is another region with mixed conditions, like Asia. Despite its own significant growth, the region poses many risks to companies building data centers. Brazil, the only South American nation represented in the Data Centre Risk Index, scored at the bottom of the heap. DatacenterDynamics (“Report: Brazil is riskiest data center location”) notes that although “the report’s authors based their judgment on more than a dozen parameters, high energy cost and difficulty of doing business stood out as key risk factors in operating data centers in Brazil.” Other risk factors, such as political instability and high corporate taxes, also weighed the nation down to the bottom of the rankings. Nevertheless, Brazil will likely lead in growth in this region, according to the Cushman & Wakefield and hurleypalmerflatt report. In addition, Mexico will also see significant growth (the nation ranked only a few slots above Brazil according to risk). Although Mexico is technically not geographically a part of South America, it may be best lumped with that region.

Middle East and Africa

The only country outside the above-mentioned regions that ranks in the Data Centre Risk Index is the Middle Eastern nation of Qatar, which ranked a surprising sixth place, just behind Canada. Needless to say, few nations in this region represent prime data center real estate, owing to political instability, ongoing wars and other factors. Populations in these regions are demanding IT services, and opportunities are available, but pending some relief from strife (particularly in the Middle East, but also in some African nations), growth will be restrained.

Data Center Market Conclusions

Growth in the data center real estate market is still strong in North America, as businesses and consumers continue to demand more and more services. Europe, despite its economic difficulties (and the U.S. isn’t far behind), is nevertheless seeing growth as well. Asia, concomitant with its emerging markets, is the growth leader in the data center sector (meaning certain portions of it—it is a huge region). But these conditions tend to indicate that the data center market overall is simply in its growth stage. Eventually, growth will level out as rising demand meets the ceiling of resource (particularly energy) availability.

Photo courtesy of Martyn Wright

Tuesday, May 15, 2012

#Uptime: Greenpeace wants #Datacenter industry to do more

Analyst says energy efficiency is great but it is not enough

15 May 2012 by Yevgeniy Sverdlik - DatacenterDynamics

 

A Greenpeace analyst commended the data center industry for gains in energy efficiency it had made over the recent years, but said the environmentalist organization wanted the industry to do more.

Gary Cook, senior IT analyst, Greenpeace.

“With all respect to the great amount of progress you’ve made in energy efficient design … we’re asking you to do more,” Gary Cook, senior IT analyst at Greenpeace, said during a keynote address at the Uptime Institute’s annual symposium in Santa Clara, California, Monday.

“You have an important role to play in changing our economy,” he said. The world is becoming increasingly reliant on data centers, and both governments and energy companies are working hard to attract them.

Greenpeace wants data center operators to prioritize clean energy sources for their power and to demand cleaner fuel mix from their energy providers.

Citing figures from a report by the non-profit Climate Group, Cook said the IT industry was responsible for about 2% of global carbon emissions. The same report concluded, however, that applying IT could reduce carbon emissions by 15%.

These applications include examples like telecommuting instead of driving or sending an email instead of delivering a physical letter.

If the data centers the world already depends on, and will depend on even more, were to run on clean energy, “this could be a huge win,” Cook said. “People in this room could be leading the charge in driving the clean-energy economy.”

To help the data center industry identify clean energy sources, Greenpeace is planning to create a Clean Energy Guide for data centers, Cook said. The guide will evaluate renewable energy choices for key data center regions.

In April, Greenpeace released a report titled “How clean is my cloud”, in which it ranked 15 companies based on their energy policies. Rating categories included the amount of coal and nuclear energy they used, the level of transparency about their energy use, infrastructure-siting policy, energy efficiency and greenhouse-gas mitigation, and the use of renewable energy and clean-energy advocacy.

This was the second such report the organization had put out.

Of the 15 well-known companies, Amazon, Apple and Microsoft were identified as companies relying on dirty fuel. Google, Facebook and Yahoo! received more positive reviews from Greenpeace.

Response from the industry was mixed. Companies that received high marks were proud of the achievement, and companies that did not either declined to comment or questioned the accuracy of the calculations Greenpeace used to arrive at its conclusions.

Cook mentioned Facebook during his keynote at the symposium, saying the company had improved in the environmentalist organization’s eyes. While its Oregon and North Carolina data centers still rely heavily on coal energy, the company’s choice to locate its newest data center in Sweden, where the energy mix is relatively clean, was a turn in the right direction.

In a statement issued in December 2011, Facebook announced a commitment to eventually power all of its operations with clean and renewable energy. Cook said the decision to build in Sweden was evidence that the company’s commitment was real.

Thursday, May 10, 2012

Hydrogen-Powered #DataCenters?

by Jeff Clark

 


Hydrogen-Powered Data Centers?

Although hydrogen generally doesn’t come up in a discussion of alternative energy sources, it is a topic relevant to cleaner energy use. So, what’s the difference, and what is hydrogen’s potential role in the data center? Apple, for instance—in addition to building a 20 megawatt solar farm—is also planning a large hydrogen fuel cell project at its Maiden, North Carolina, facility. Can hydrogen sate the data center industry’s ever-growing power appetite?

Hydrogen: A Storage Medium

To get a good idea of the basic properties of hydrogen, just think of the Hindenburg: the giant German airship (dirigible) that plunged to the Earth in flames in 1937. The airship gained its lift from hydrogen gas: a very light (i.e., not dense), inflammable gas. Although hydrogen is plentiful (think water, hydrocarbons and so on), it is seldom found in its diatomic elemental form (H2). So, unlike coal, for example, hydrogen is not a readily obtainable fuel source. It can, however, be used as a means of storing or transporting energy—and this is its primary use. As a DatacenterDynamics interview with Siemens (“Using hydrogen to store energy”) notes, “Hydrogen is a multi-purpose energy carrier… Also, hydrogen as a storage medium is a concept that has already been tested in several domains.”

Hydrogen is thus in some ways like gasoline: it is simply a chemical that certain types of equipment can convert into energy and various byproducts. But it’s the nature of these byproducts that makes hydrogen appealing.

Clean Energy

Under ideal conditions, the burning of hydrogen (think Hindenburg) produces water and heat as its only products, making its use in internal combustion engines preferable (in this sense, at least) to fossil fuels. But even more useful would be a process that converts hydrogen more directly into energy and water—enter the fuel cell. A fuel cell splits hydrogen into protons and electrons, creating an electrical current. The protons combine with oxygen in a catalytic environment to yield water. What more could a data center ask for? (For a simple animation depicting the operating principles of a fuel cell, see the YouTube video Hydrogen Fuel Cell.)
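
For reference, the chemistry sketched above reduces to a pair of well-known half-reactions (shown here for a proton-exchange-membrane cell) plus their sum:

```latex
% PEM fuel cell: hydrogen oxidized at the anode, oxygen reduced at the cathode.
\begin{align*}
\text{Anode:}   &\quad \mathrm{H_2 \rightarrow 2\,H^+ + 2\,e^-} \\
\text{Cathode:} &\quad \mathrm{\tfrac{1}{2}\,O_2 + 2\,H^+ + 2\,e^- \rightarrow H_2O} \\
\text{Overall:} &\quad \mathrm{H_2 + \tfrac{1}{2}\,O_2 \rightarrow H_2O} + \text{electrical energy} + \text{heat}
\end{align*}
```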

The fuel cell produces electricity as long as hydrogen fuel is supplied to it. Its characteristics from a physical standpoint are nearly ideal: electricity on demand with virtually no production of carbon compounds or other emissions. Because hydrogen can be stored, it represents energy that can be consumed as needed, not necessarily right away (as in the case of solar or wind power). Sounds great—but as always, there are some caveats.

Getting Your Hands on Hydrogen

As mentioned above, hydrogen does not exist naturally in a manner that makes it readily available as a fuel. Practically speaking, it must be produced from other materials, such as water or fossil fuels. The two main processes are electrolysis of water, whereby an electric current splits water molecules into elemental oxygen (O2) and hydrogen (H2), and steam reforming of hydrocarbons. In each case, energy input is required, either as electrical energy to electrolyze water or as chemical energy stored in a hydrocarbon. Electrolysis is one means of storing energy from renewable resources like solar or wind, avoiding entirely the need for mined resources like natural gas or coal. Naturally, the efficiency of these processes varies depending on the particulars of the process, the equipment used and so forth.
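
The two production routes mentioned above come down to the following overall reactions (steam reforming is shown for methane, the usual feedstock, with the follow-on water-gas shift step):

```latex
% Energy in (electrolysis) versus hydrogen from a fossil feedstock (steam reforming).
\begin{align*}
\text{Electrolysis:}    &\quad \mathrm{2\,H_2O \rightarrow 2\,H_2 + O_2} \\
\text{Steam reforming:} &\quad \mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \\
\text{Water-gas shift:} &\quad \mathrm{CO + H_2O \rightarrow CO_2 + H_2}
\end{align*}
```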

Alternative processes for generating hydrogen are under investigation—such as biomass production—but these processes do not yet generate hydrogen practically on large scales. Whatever the generation approach, however, the gas must then be stored for transport or for later use.

Storing Hydrogen—A Slight Problem

Hydrogen is an inflammable gas (again, think Hindenburg), but it is not necessarily more dangerous than, say, gasoline vapors. The main problem with storing hydrogen is that compared with other fuels—such as gasoline—it contains much less energy per unit volume (even though it contains more energy per unit mass). Practical (in terms of size) storage requires that the hydrogen be compressed, preferably into liquid form for maximum density. And herein lies the main difference relative to liquid fossil fuels: the container not only holds an inflammable material, but it is also pressurized, creating its own unique challenges and dangers. Fuel leakage into the atmosphere is more problematic, and some environmentalists even claim that this leakage, were hydrogen used on a large scale, could have harmful repercussions on the environment.

Even in liquid form, hydrogen still lags other fuels in energy stored per unit volume. Thus, when implemented in automobiles, for instance, fuel-cell-powered automobiles lack the range of conventional gasoline-powered vehicles. And then there’s the cost of fuel cell technology, which is currently prohibitive. Claims of falling fuel cell prices are dubious, given the unsustainable subsidies from the federal government and some states (like California).

Hydrogen for Data Centers

Apple’s Maiden data center is the highest-profile facility implementing fuel cell technology. According to NewsObserver.com (“Apple plans nation’s biggest private fuel cell energy project at N.C. data center”), Apple will generate hydrogen from natural gas and will employ 24 fuel cell modules. The project is slated for an output of 4.8 megawatts—much less than the data center’s total power consumption, but still a sizable output.

The use of natural gas to generate hydrogen still creates carbon emissions, so this project won’t satisfy everyone (although whether carbon dioxide is as bad as its current politicized reputation would suggest is hardly certain). Nevertheless, like Apple’s large solar farm at the same site, this hydrogen fuel cell project will be a good test of the practicability of hydrogen in the context of data centers.

Hydrogen: Will Most Companies Care?

Jumping into the power generation arena is not something most companies (particularly small and midsize companies) can afford to do—let alone have an interest in doing. So, pending availability of some affordable, prepackaged hydrogen fuel cell system, don’t expect most companies to deploy such a project at their data center sites. Currently, large companies like Apple and Google are among the few dabbling in energy in addition to their primary business. Most companies will, no doubt, prefer to simply plug their data centers into the local utility and let someone else worry about where the energy comes from—these companies wish to focus on their primary business interests.

Conclusion: What Exactly Does Hydrogen Mean for the Data Center?

Hydrogen fuel cells offer some major improvements in controlling emissions, and hydrogen delivers some benefits as a means of storing and transporting energy. But fuel cell technology lacks the economic efficiency of other, traditional power sources, so it has a ways to go before it can attain the status of coal or nuclear, or even smaller-scale sources like solar. Furthermore, the applicability of hydrogen (as such) to data centers is unclear. Power backup seems the most likely present candidate for application of hydrogen and fuel cells. In time, Apple’s project may demonstrate the practicality of electricity via natural gas as another possibility. Until then, however, the industry must wait to see whether this technology matures—and becomes economically feasible.

Photo courtesy of Zero Emission Resource Organisation

Wednesday, May 9, 2012

Open #DataCenter Alliance Announces that UBS Chief Technology Officer Andy Brown will Keynote at Forecast 2012

Rackspace CTO John Engates to Deliver Openstack Industry Perspective, New Big Data Panel Joins Forecast 2012 Agenda

PORTLAND, Ore., May 09, 2012 (BUSINESS WIRE) -- The Open Data Center Alliance (ODCA) today announced that UBS Chief Technology Officer Andy Brown will be a keynote speaker at the Open Data Center Alliance Forecast 2012 event. Brown plans to address enterprise requirements for the cloud and comment on the progress of industry delivery of solutions based on the ODCA usage models. In his role as chief technology officer, Brown is responsible for advancing the investment bank's group architecture, simplifying the application and infrastructure landscape, and improving the quality of UBS's technical solutions.

In related news, the organization also announced the addition of Rackspace Chief Technology Officer, John Engates, to the event's agenda with the delivery of an Industry Perspective session on the role of industry standard delivery of cloud solutions. Engates will focus his discussion on the role of Openstack in cloud solution delivery and its alignment with the objectives of open solutions required by ODCA. A Big Data panel was also added to the agenda featuring panelists from Cloudera, Intel, SAS and Teradata following last week's announcement of a new ODCA Data Services workgroup. The organization also announced a host of executive IT experts to be featured in panels at the event. Held on June 12, in conjunction with the 10th International Cloud Expo in New York City, ODCA Forecast 2012 will bring together hundreds of members of the Alliance, industry experts and leading technology companies to showcase how ODCA usage model adoption can accelerate the value that cloud computing represents to organizations through increased efficiency and agility of IT services.

"The Forecast agenda features a who's who of enterprise IT leaders, all of whom are assembling to share their best insights in deploying cloud services," said Marvin Wheeler, ODCA chair. "Adding CTOs on the caliber of Andy Brown and John Engates to our agenda underscores the high regard that both the organization and our first event are generating. For organizations considering cloud deployments in 2012, this is a rare opportunity to learn from their peers and see the latest in solutions advancements."

ODCA Forecast 2012 will feature sessions on top issues associated with cloud deployment including security, service transparency, and industry standard delivery of solutions. The Big Data panel complements planned panels featuring the first public discussions of charter and progress of the organization's recently formed Data Services workgroup. With leading experts from enterprise IT, service providers and the data center industry on hand to discuss the top issues and opportunities offered by cloud computing, attendees will have a rare opportunity to network with leading thinkers and gain critical knowledge to help shape their own cloud deployments. Alliance solutions provider members will also showcase products that have been developed within the guidelines of Alliance usage models.

Leading managers from some of the largest global IT shops have formed the group of panelists for the event, and today the Alliance is announcing several new panelists who will share their expertise across areas impacting the cloud, including security, management and regulation. The Cloud Security Panel will now feature Dov Yoran, a founding member of the Cloud Security Alliance. Ray Solnik, president of Appnomic Systems, a leading provider of automated Cloud IT performance management solutions, has joined the Cloud Management Panel. The Cloud Regulation Panel is pleased to welcome Gordon Haff, senior cloud strategy marketing and evangelism manager with Red Hat. Haff will also be part of the Cloud Software Panel. Jeff Deacon, chief cloud strategist with Terremark, will be part of the Service Provider Panel. The Cloud Hardware Panel will feature John Igoe, executive director, development engineering for Dell. Other new panelists can be found at www.opendatacenteralliance.org/forecast2012 .

Forecast 2012 is supported by the following sponsors: Gold Sponsors Dell, Hewlett Packard and Intel Corp, silver sponsor Red Hat, Pavilion Sponsor Citrix and McAfee and breakfast sponsor Champion Solutions Group. Media and collaborating organization sponsors include Cloud Computing Magazine, Cloud Security Alliance, CloudTimes, the Distributed Management Task Force (DMTF), the Green Grid, InformationWeek, Open Compute Project, Organization for the Advancement of Structured Information Standards (OASIS), SecurityStockWatch.com and Tabor Communications.

All Forecast attendees will also receive a complimentary pass to International Cloud Expo as part of their ODCA Forecast 2012 registration representing a tremendous value for Forecast attendees. For more information on the Alliance, or to register for ODCA Forecast 2012, please visit www.opendatacenteralliance.org .

About The Open Data Center Alliance

The Open Data Center Alliance is an independent IT consortium comprised of global IT leaders who have come together to provide a unified customer vision for long-term data center requirements. The Alliance is led by a twelve member Board of Directors which includes IT leaders BMW, Capgemini, China Life, China Unicom, Deutsche Bank, JPMorgan Chase, Lockheed Martin, Marriott International, Inc., National Australia Bank, Terremark, Disney Technology Solutions and Services and UBS. Intel serves as technical advisor to the Alliance.

In support of its mission, the Alliance has delivered the first customer requirements for cloud computing documented in eight Open Data Center Usage Models which identify member prioritized requirements to resolve the most pressing challenges facing cloud adoption. Find out more at www.opendatacenteralliance.org .

SOURCE: The Open Data Center Alliance

Tuesday, May 8, 2012

Operations-as-a-Service (or IaaS + PaaS + SMEs)

Guest Post from Richard Donaldson

 

I’d been holding out a bit on writing this as it really is a synthesis of ideas (aren’t they all) with special mention of dialogue with Jeffrey Papen of Peak Hosting (www.peakwebhosting.com)…

I’ve been collaborating and speaking extensively with Jeffrey on the next phase of “hosting” since we are now moving beyond the hype cycle of “Cloud Computing” (see previous post on “The end of the Cloud Era”).  The community at large (and people in general) love the idea of simple, bite sized “solutions” with pithy and “sexy” naming conventions (think <30sec sound bites) and that was the promise/expectation around “the cloud” as it was popularized – a magic all in one solution whereby you just add applications and the “cloud” will do the rest. Yet, the promise never quite met expectations as the “cloud” really ended up being an open standards evolution of “virtualization” – nothing wrong with that, just not the “all in one” solution that people really wanted the cloud to be (ps – all in one refers to the aforementioned of applications just being pushed thru APIs to the “cloud” and the “cloud” manages all underlying resources).

So, as the Cloud Hype dissipates (love the metaphor), we are sorta back to the same basic elements that make up Infrastructure – Datacenters, Compute (IT), Communications (switches/routers), and Software that manages it all (virtualization, cloud, etc.), all accessible thru APIs yet to be built.  Put another way, we are coming full circle and back to centralized, on-demand computing that needs one more element to make it all work – Subject Matter Experts (SMEs).

I was inspired to write this today when I saw this post from Hitachi: http://www.computerworld.com/s/article/9226920/Hitachi_launches_all_in_one_data_center_service - “Japanese conglomerate Hitachi on Monday launched a new data center business that includes everything from planning to construction to IT support.  Hitachi said its new “GNEXT Facility & IT Management Service” will cover consulting on environmental and security issues, procurement and installation of power, cooling and security systems, and ongoing hardware maintenance. It will expand to include outsourcing services for software engineers and support for clearing regulatory hurdles and certifications.”  This is the comprehensive “build to suit” solution the market has been seeking since the cloud – it includes everything needed to get your infrastructure building blocks right, and it is provided as a service – but what do we call this service????

How about “Operations-as-a-Service“!!


OaaS pulls together the elements in IaaS + PaaS + SMEs.  It outsources the “plumbing” to those that can make it far more cost effective thru economies of scale.  Sure, there are a select few companies who will do this all in house: Google, eBay, Microsoft, Amazon, Apple (trying), and of course, Zynga.  Yet, these companies are at such massive scale that it makes sense – and yet, they even have excess (at least they should) capacity which is why AWS was born in the first place and we are now seeing Zynga open up to allow gamers to use their platform (see: http://www.pocketgamer.biz/r/PG.Biz/Zynga+news/news.asp?c=38455).  Yet these are the exceptions and not the rule.

The rest of the world should be, and is, seeking comprehensive, end-to-end Operations as a Service provided by single vendors. It doesn’t preclude the marketplace from buying discrete parts of OaaS individually; however, the dominant companies that will begin to emerge in this next decade will seek to add more and more of the OaaS solution set to their product lists, thereby catalyzing a lot (I mean a lot) of consolidation.

I will be following up this blog with a more detailed look at how this concept is playing out, but in the meantime I would very much like to hear feedback on this topic – is the world looking for OaaS?

rd

Original Post: http://rhdonaldson.wordpress.com/2012/05/07/operations-as-a-service-or-iaas-p...

Friday, May 4, 2012

Patent Wars may Chill #DataCenter Innovation

May 4, 2012, by mmanos

Yahoo may have just sent a cold chill across the data center industry at large and begun a stifling of data center innovation. In a May 3, 2012 article, Forbes did a quick and dirty analysis of the patent wars between Facebook and Yahoo. It’s a quick read, but it shines an interesting light on the potential impact something like this can have across the industry. The article, found here, highlights that:

In a new disclosure, Facebook added in the latest version of the filing that on April 23 Yahoo sent a letter to Facebook indicating that Yahoo believes it holds 16 patents that “may be relevant” to open source technology Yahoo asserts is being used in Facebook’s data centers and servers.

While these types of patent infringement cases happen all the time in the corporate world, this one could have far greater ramifications on an industry that has only recently emerged into the light of sharing ideas. While details remain sketchy at the time of this writing, it’s clear that the specific call-out of data centers and servers is an allusion to more than just server technology or applications running in their facilities. In fact, there is a specific call-out of data centers and infrastructure. With this revelation, one has to wonder about its impact on the Open Compute Project, which is being led by Facebook.

It leads to some interesting questions. Has their effort to be more open in their designs and approaches to data center operations and design led them to a position of legal risk and exposure? Will this open the floodgates for design firms to become more aggressive around functionality designed into their buildings? Could companies use their patents to freeze competitors out of colocation facilities in certain markets by threatening colo providers with these types of lawsuits? Perhaps I am reaching a bit, but I never underestimate litigious fervor once the proverbial blood gets in the water. In my own estimation, there is a ton of “prior art,” to use an intellectual property term, out there to settle this down long term, but the question remains: will firms go through that lengthy process to prove it out, or opt to re-enter their shells of secrecy?

After almost a decade of fighting to open up the collective industry to share technologies, designs, and techniques, this is a very disheartening move. The general Glasnost that has descended over the industry has led to real and material change. We have seen companies shift from measuring facilities purely around “up time” to measurements that are primarily focused on efficiency as well. We have seen more willingness to share best practices and to find like-minded firms to share in innovation. One has to wonder whether this will impact the larger “greening” of data centers in general. Without that kind of pressure, will people move back to what is comfortable? Time will certainly tell.

I was going to make a joke about the fact that until time proves this out I may have to “lawyer up” just to be safe. It’s not really a joke, however, because I’m going to bet other firms do something similar, and that, my dear friends, is how the innovation will start to freeze.

Thursday, May 3, 2012

It’s not easy being green: Data center edition

By 

Facebook’s Prineville data center.

Building sustainable data centers is hard — especially if you’re trying to do it in office space in Houston. Plus, the idea of operating some kind of power-generation plant for offering renewable energy such as solar or biogas is a scary prospect for data center operators. These were among the key takeaways (along with a few less-obvious lessons) from a panel on sustainable data centers at the Open Compute Summit held today in San Antonio, Texas.

Bill Weihl, manager of energy efficiency and sustainability at Facebook and the former energy czar at Google, moderated the panel, which also featured Melissa Gray, the head of sustainability for Rackspace; Stefan Garrard, who is building an HPC cluster for oil company BP; Winston Saunders from Intel; and Jonathan Koomey, a consultant and energy-efficiency expert. While we are entering the age of 100-megawatt data centers the size of football fields, we’re also dealing with higher energy costs and concerns about how to keep our webscale infrastructure running. As part of the focus on lowering costs, the Open Compute Project spends a lot of time on sustainability.

But lowering the energy used inside a data center can only go so far. Saunders explained that chips, for example, had achieved their lowest possible power utilization without new breakthroughs. Even when idle, the chips still consume 20 percent of their maximum energy draw because they can’t fully turn themselves off. The inability to power all the way down is a function of latency (once something is turned off, it takes time to turn it back on) and of the fact that powering down the chip requires the data center to stop sending information. However, data centers rarely hit that point, which means chips are always “awake” and consuming energy.

But it’s not just the hardware. Garrard said his current high-performance computing cluster is running in office space that holds both humans and servers. He’s done a little to help make things more efficient, but because of the office location and Houston’s hot and humid climate, his servers run at a power usage effectiveness (PUE) of more than 2 (Facebook, which has heavily optimized its PUE, is at about 1.07; 1.0 is ideal). So, he is building out a new facility and hopes to get closer to a PUE of 1.5.
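
PUE is simply total facility power divided by the power delivered to the IT equipment, so the figures quoted above translate directly into overhead. A quick illustrative calculation:

```python
# PUE = total facility power / IT equipment power; everything above 1.0 is overhead.
def overhead_pct(pue):
    return (pue - 1.0) * 100

for site, pue in [("office-space HPC cluster", 2.0),
                  ("new-facility target", 1.5),
                  ("heavily optimized (Facebook)", 1.07)]:
    print(f"PUE {pue:.2f} ({site}): {overhead_pct(pue):.0f}% extra power for cooling, distribution, etc.")
# A PUE of 2.00 means every watt of IT load costs another watt of overhead; at 1.07 the overhead is just 7%.
```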

But where will the power for his and other new data centers come from? Renewables aren’t really on the list yet. When asked about using biogas systems such as those from Bloom Energy or solar, Gray said the idea of running a generation plant along with a data center was so far outside her core competency that it wasn’t really something she thought about.

Koomey, however, called the idea that a data center operator has to follow in Apple’s footsteps to operate their own generation (Apple is using Bloom’s boxes to power part of its new data center) a “canard” and said data center operators should get renewable power from their utilities. Weihl, who helped Google buy wind power from providers for its data centers, agreed.

The panel essentially outlined several areas where data center infrastructure consumes energy. In the ideal world, operators could site their data centers in places that are cool and dry, and build out the ideal facility and hardware to reduce the power draw. As Koomey said, they could think “holistically.”

Unfortunately, most data centers are built in the real world, where and when they are needed with the equipment available at the time. The standards and designs offered by the Open Compute Project will help, but the real world will take its toll.