
Sunday, April 7, 2013

Where is the open #datacenter facility API?


For some time the Datacenter Pulse Top 10 has featured an item called ‘Converged Infrastructure Intelligence’. The 2012 presentation mentioned:
  • Treat the DC infrastructure as an IT system:
    - Converge the infrastructure instrumentation and control systems;
    - Connect them into the IT systems for ultimate control.
  • Standardize connections and protocols to connect components.
With datacenter infrastructure becoming a more complex system, and with the whole datacenter stack needing to run more efficiently, the layers of the stack have to be integrated and made to ‘talk’ to each other.
This is shown in the DCP Stack framework as the need for ‘integrated control systems’ running up from the (facility) real-estate layer to the (IT) platform layer.
So if we have the ‘integrated control systems’, what would we be able to do?
We could:
  • Influence behavior (you can’t control what you don’t know); application developers, for example, can be given insight into their power usage when they write code. This is one of the steps needed for more energy-efficient application programming, and it also provides more detailed measurements and a better view of the complete energy flow.
  • Design for lower-Tier datacenters; when failure is imminent, signals from the facility equipment can trigger the IT systems to move workloads to other datacenter locations.
  • Design close-control cooling systems that trigger on real CPU and memory temperatures rather than on room-level temperature sensors. This could eliminate hot spots and focus cooling energy on the spots where it is really needed. It could even make the cooling system aware of an oncoming throttle-up in the IT systems.
  • Optimize datacenters for the smart grid. The growth of sustainable power sources like wind and solar energy increases the need for flexibility in energy consumption. Some may think this only applies when you introduce onsite sustainable power generation, but the general availability of sustainable power sources will affect the energy market as well. In the end, the ability to be flexible will lead to lower energy prices. Real supply-and-demand management in datacenters requires integrated information and control across the facility and IT layers of the stack.
The gap between IT and facility exists not only between IT and facility staff but also between their information systems. Closing the gap between people and systems will make the datacenter more efficient and more reliable, and it opens up a whole new world of possibilities.
This all leads to something that has been on my wish list for a long, long time: the datacenter facility API (application programming interface).
I’m aware that we have BMS systems supporting open protocols like BACnet, LonWorks and Modbus, and that is great. But they are not ‘IT ready’. I know some BMS systems support integration using XML and SOAP but that is not based on a generic ‘open standard framework’ for datacenter facilities.
So what does this API need to be?
First it needs to be an ‘open standard’ framework; publicly available and no rights restrictions for the usage of the API framework.
This will avoid vendor lock-in. History has shown us, especially in the area of SCADA and BMS systems, that our vendors come up with many great new proprietary technologies. While I understand that the development of new technology takes time and a great deal of money, locking me in to your specific system is not acceptable anymore.
A vendor proprietary system in the co-lo and wholesale facility will lead to the lock-in of co-lo customers. This is great for the co-lo datacenter owner, but not for its customer. Datacenter owners, operators and users need to be able to move between facilities and systems.
Every vendor that uses the API framework needs to use the same routines, data structures and object classes. Standardized. And yes, I used the word ‘standardized’. So it’s a framework we all need to agree upon.
These two sentences are the big difference between what is already available and what we actually need. It should not matter if you place your IT systems in your own datacenter or with co-lo provider X, Y, Z. The API will provide the same information structure and layout anywhere…
(While it would be good to have the BMS market disrupted by open source development, having an open standard does not mean all the surrounding software needs to be open source. Open standard does not equal open source and vice versa.)
It needs to be IT ready. An IT application developer needs to be able to talk to the API just like he would to any other IT application API; so no strange facility protocols. Talk IP. Talk SOAP or better: REST. Talk something that is easy to understand and implement for the modern day application developer.
All this openness and ease of use may be scary for vendors and even end users, because many SCADA and BMS systems are famous for relying on ‘security through obscurity’. All the facility-specific protocols are notoriously hard to understand and program against. So if you don’t want to lose this false sense of security as a vendor, give us a ‘read only’ API. I would be very happy with only this first step…
So what information should this API be able to feed?
Most of this information would be nice to have in near real time:
  • Temperature at rack level
  • Temperature outside the building
  • kWh at rack level (other energy-related metrics would be nice too)
  • Warnings/alarms at rack and facility level
  • kWh price (this can be pulled from the energy market, but that doesn’t include the full datacenter kWh price, e.g. a PUE markup)
(all if and where applicable and available)
The information owner would need features like access control for rack-level information exchange, plus the ability to tune the real-time feeds; we don’t want to create information streams that are unmanageable in security, volume or frequency.
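To make this a bit more concrete, here is a minimal sketch of what a read-only, rack-level call against such an API could look like. To be clear: the endpoint paths, field names and JSON layout below are my own assumptions for illustration; no such open standard exists today, which is exactly the point of this post.

```python
import requests

# Hypothetical base URL of a facility API; no standard endpoint like this exists yet.
FACILITY_API = "https://facility.example.com/api/v1"
API_TOKEN = "replace-with-issued-token"  # access control stays with the information owner

def get_rack_telemetry(rack_id: str) -> dict:
    """Fetch near-real-time readings for one rack (assumed JSON layout)."""
    resp = requests.get(
        f"{FACILITY_API}/racks/{rack_id}/telemetry",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    reading = get_rack_telemetry("rack-42")
    # Assumed fields, mirroring the wish list above: temperature, energy, alarms, price.
    print("Inlet temperature (C):", reading.get("inlet_temp_c"))
    print("Power draw (kW):      ", reading.get("power_kw"))
    print("Active alarms:        ", reading.get("alarms", []))
    print("kWh price (incl. PUE):", reading.get("kwh_price"))
```

A standardized, read-only schema along these lines would already let IT-side tooling consume facility data without any knowledge of BACnet, LonWorks or Modbus.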
So what do you think the API should look like? What information exchange should it provide? And more importantly: who should lead the effort to create the framework? Or… do you believe the Physical Datacenter API framework is already here?
More:

Original Article: http://datacenterpulse.org/blogs/jan.wiersma/where_open_datacenter_facility_api

Friday, May 18, 2012

#Datacenters are becoming software defined

‘Data centers are becoming software defined’ 
Data centers around the world are increasingly being virtualized, and organizations are restructuring business processes in line with infrastructure transformations. Raghu Raghuram, SVP & GM, Cloud Infrastructure and Management, VMware, tells InformationWeek about the software defined data center and how virtualized infrastructure will bring more flexibility and efficiency.

 By Brian Pereira, InformationWeek, May 18, 2012

What are the transformations that you observe in the data center? Can you update us on the state of virtualization?

Firstly, there is a transformation from physical to virtual, in all parts of the world. In terms of workloads in the data center, the percent (of workloads) running on virtualized infrastructure as against physical servers has crossed 50 percent. So, there are more applications running in the virtual environment, which means the operating system no longer sees the physical environment. This is a huge change in the data center. The virtual machine has not only become the unit of computing but also the unit of management.

Earlier, operational processes were built around a physical infrastructure but now these are built around virtualized infrastructure. The organization of the data center team is also changing. Earlier, you’d have an OS team, a server team, and teams for network, storage etc. Now virtualization forces all these things to come together. So, it is causing changes not only in the way hardware and software works, but also in the people and processes.

The second change is that data center architecture has gone from vertical to horizontal. You have the hardware and the virtualization layer with applications on top of it. Hence, you can manage the infrastructure separately from managing the applications. You can also take the application from your internal data center and put it on Amazon Web services or the external cloud. Thus, the nature of application management has changed.

How is the management of data center infrastructure changing?

The traditional and physical data center was built on the notions of vertical integration/silos. You’d put agents in the application and at the hardware and operating system levels. And then you’d pull all that information together (from the agents) and create a management console. And when the next application came into the data center, you’d create another vertical stack for it, and so on. This led to the proliferation of management tools. And there was a manager of all the management tools. As a result, you layered on more complexity instead of solving the management problem. The second problem was that operating systems did not have native manageability built into them. So, management was an afterthought. With virtualization, we were the first modern data center platform. We built manageability into the platform. For instance, the VMware distributed resource scheduler (DRS) automatically guarantees resources — you don’t need an external workload manager. We have built in high availability so you don’t need an external clustering manager. Our goal has been to eliminate management, and wherever possible turn it into automation.

We are going from a world of agents and reactive type of management to a world of statistical techniques for managing data. One of our customers has almost 100,000 virtual machines and they generate multiple million metrics every five minutes. There’s a flood of information, so you can’t use the conventional management way of solving a problem. You need to do real-time management and collect metrics all the time. We use statistical learning techniques to understand what’s going on. It is about proactive management and spotting problems before these occur. And this is a feature of VMware vCenter Operations.
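As an illustration of the statistical approach Raghuram describes (and not of how vCenter Operations itself is implemented), a baseline-and-deviation check on a single metric stream can be sketched in a few lines:

```python
from collections import deque
from statistics import mean, stdev

class MetricBaseline:
    """Rolling baseline for one metric stream; flags values that deviate
    strongly from recent behaviour instead of using a fixed threshold."""

    def __init__(self, window: int = 288, z_limit: float = 3.0):
        self.samples = deque(maxlen=window)  # e.g. 288 x 5-minute samples = 24 h
        self.z_limit = z_limit

    def observe(self, value: float) -> bool:
        """Return True if the value looks anomalous against the learned baseline."""
        anomalous = False
        if len(self.samples) >= 30:  # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.z_limit:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Usage: one baseline per (VM, metric) pair, fed every collection interval.
cpu_ready = MetricBaseline()
for v in [5, 6, 5, 7, 6, 5, 6, 5, 7, 6] * 4 + [42]:
    if cpu_ready.observe(v):
        print("anomaly detected:", v)
```

The point is that the alarm threshold is learned from the metric’s own recent history rather than configured per agent, which is what makes this style of monitoring workable at the scale of millions of metrics.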

What is the Software defined data center all about?

This is a bit forward looking. Increasingly, the data center is being standardized around x86 (architecture). And all the infrastructure functions that used to run on specialized ASICs (Application Specific Integrated Circuits) are now running on standard x86 hardware and being implemented via software as virtual machines. For instance, Cisco and CheckPoint are shipping virtual firewalls; HP is shipping virtual IDS devices; and RiverBed is shipping virtual WAN acceleration (optimization). All of it is becoming virtualized software; now the entire data center is becoming one form factor, on x86. As it is all software, they can be programmed more easily. And hence it can be automated. So, when an application comes into the data center, you automatically provision the infrastructure that it needs. You can grow/shrink that infrastructure. Scale it up or out. And configure policies or move them as the application moves.

All of this constitutes the software defined data center. It’s a new way of automating and providing the infrastructure needed for applications, as the applications themselves scale and react to end users. We see that rapidly emerging.

This is a concept in the large cloud data centers, but we want to bring it to mainstream enterprises and service providers.

VMware is a pioneer for virtualization of servers. But what are you offering for virtualized networking, security and storage?

There are smaller players such as Nicira Networks, which are actively pursuing network virtualization. Last year we announced (in collaboration with Cisco) a technology called VXLAN (Virtual eXtensible LAN). The difference between us and Nicira (network virtualization) is that we are doing it so that it works well with existing networking gear. We are creating a virtual network that is an overlay to the physical network. As the applications are set up, the networking can be done in a virtualized fashion. Load balancing and all the network services can happen in a virtualized fashion, without the need to reconfigure the physical network.
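For readers unfamiliar with how such an overlay stays invisible to the physical network: VXLAN wraps the virtual machine’s Ethernet frame in a small header carried inside an ordinary UDP packet, so the physical gear only ever sees IP/UDP traffic between hypervisors. The sketch below builds that 8-byte header as later standardized in RFC 7348; it is illustrative only and not tied to VMware’s or Cisco’s implementation.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Byte 0: flags, with the I bit (0x08) set to mark a valid VNI.
    Bytes 1-3: reserved. Bytes 4-6: the 24-bit VNI. Byte 7: reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex(), len(hdr), "bytes")
```

The 24-bit VNI gives roughly 16 million virtual segments, compared with 4,094 usable VLAN IDs, which is one reason overlays like this are attractive at cloud scale.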

But you also need virtualized security services and virtualized network services for this. We have a line of products called vShield that offers this. It has a load balancer, NAT edge and stateful firewall; application firewall etc. Then, you have the management stack that ties it all together. We call this software defined networking and security. And we are doing the same thing with storage with Virtual Storage Appliance. We also offer storage provisioning automation with vSphere and vCloud Director.

What is the key focus for VMware this year?

Our slogan is business transformation through IT transformation. We want to enable businesses to transform themselves, by transforming IT. We talk about transformations with mobility, new application experiences, and modernizing infrastructure to make it more agile and cost-effective. These are the fundamental transformations that engage us with customers. So, it starts with virtualization but it doesn’t stop there. The faster customers progress through virtualization, the faster they can progress through the remaining stages of the transformation.

Wednesday, May 16, 2012

International #DataCenter Real Estate Overview

May 16, 2012

Although the U.S. still remains in some sense the hub of the data center market, other regions around the world are exhibiting their own dynamics, particularly in Asia and Europe. Demand for data center services has yet to plateau, so companies are continually needing to expand their IT capabilities, whether through new data center construction or expansion or through outsourcing to the cloud (meaning another company somewhere must have or add data center capacity). Thus, demand for data center real estate is correspondingly strong—but, naturally, it varies around the globe depending on a variety of factors. The following are some key international areas in the data center real estate market.

Europe

Given its current fiscal straits, Europe exemplifies the present overall strength of the data center market. The continent is currently struggling to resolve its crushing debt load and to determine whether it will continue as a consolidated entity (the EU) or as separate states. A breakup of the EU may well be in the offing, as CNBC reports (“Stocks Post Loss on Greece, S&P at 3-Month Low”): “‘I think people need to prepare for the eventual removal of Greece from the EU and investors are getting ahead of that before they’re forced to,’ said Matthew McCormick, vice president and portfolio manager at Bahl & Gaynor Investment Counsel.” Greece may be the first—but not last—nation to leave or be booted from the union.

But despite these economic and political problems, the data center industry is still seeing growth in this region. In the colocation sector, service provider Interxion reported good news for the first quarter of this year, according to DatacenterDynamics (“Interxion reports strong Q1 results despite Europe’s economy”): Interxion’s CEO, David Ruberg, stated, “Recurring revenue increased by more than 4% over the quarter ended December 31, 2011, and strong bookings in the quarter reflect a continued healthy market for our services, despite sustained economic weakness in Europe.”

Furthermore, even though Europe is in the midst of a financial crisis, possibly spilling over to a political one, portions of it remain relatively low-risk locations for new data centers, according to Cushman & Wakefield and hurleypalmerflatt. The Data Centre Risk Index 2012 ranks the U.K. and Germany as second and third, respectively, for lowest-risk regions to build data centers (behind the U.S.). This report examines risks such as political instability, energy costs, potential for natural disasters and other factors that could endanger a data center operation.

Given the increasing reliance of western economies on IT services provided by data centers, the real estate market will likely withstand minor economic or even political reorganization in Europe. Of course, should the economic problems result in a more serious situation, all bets are off.

Nordic Region

Within Europe, the Nordic countries are a growing market all their own. Offering a cool climate (great for free cooling to reduce energy consumption) and (in some areas) abundant renewable energy, these nations are an increasingly attractive (and, concomitantly, less risky) option for companies looking to build new facilities. On the 2012 Data Centre Risk Index, Iceland ranked an impressive number four (even despite its recent volcanic activity that shut down many European airports); Sweden ranked eighth, followed by Finland at nine and Norway at twelve.

Nevertheless, even though the region has seen expansion of the data center market this year, not everything works in its favor: according to DatacenterDynamics (“Nordics make strong entrance in data center risk index”), “Norway…ranked as the most politically stable country and also measured a high availability of natural resources and renewable energy sources but its high cost of labour and relatively low connectivity pushed it down on the list.” Iceland was cited for political instability and a lack of bandwidth capacity as working against it. Overall, however, the Nordic countries are the rising star of the European region.

Asia

Asia represents the area of greatest expansion in the data center market, as the Data Center Journal reported (“Fastest-Growing Data Center Market in 2012”). In particular, Hong Kong, Shanghai and Singapore demonstrate the strongest growth, but other areas are also growing. Asian nations do not yet match western nations—particularly the U.S.—in overall development, but their large populations (particularly in China and India) and growing demand for IT services are driving demand for data center space. On Cushman & Wakefield and hurleypalmerflatt’s Data Centre Risk Index for 2012, Hong Kong placed highest among Asian regions, ranking seventh. South Korea ranked 13, Thailand 15 and Singapore 17. China and India, despite their growth potential, ranked near the bottom of the list: 26th place for China and 29th for India out of 30 evaluated nations.

Despite the risks, China in particular is seeing growth, partly as companies from other nations (like major corporations in the U.S., including IBM and Google) build facilities in hopes of tapping the emerging markets in the region.

South America

South America is another region with mixed conditions, like Asia. Despite its own significant growth, the region poses many risks to companies building data centers. Brazil, the only South American nation represented in the Data Centre Risk Index, scored at the bottom of the heap. DatacenterDynamics (“Report: Brazil is riskiest data center location”) notes that although “the report’s authors based their judgment on more than a dozen parameters, high energy cost and difficulty of doing business stood out as key risk factors in operating data centers in Brazil.” Other risk factors, such as political instability and high corporate taxes, also weighed the nation down to the bottom of the rankings. Nevertheless, Brazil will likely lead in growth in this region, according to the Cushman & Wakefield and hurleypalmerflatt report. In addition, Mexico will also see significant growth (the nation only ranked a few slots above Brazil according to risk). Although Mexico is technically not geographically a part of South America, it may be best lumped with that region.

Middle East and Africa

The only country outside the above-mentioned regions that ranks in the Data Centre Risk Index is the Middle Eastern nation of Qatar, which ranked a surprising sixth place, just behind Canada. Needless to say, few nations in this region represent prime data center real estate, owing to political instability, ongoing wars and other factors. Populations in these regions are demanding IT services, and opportunities are available, but pending some relief from strife (particularly in the Middle East, but also in some African nations), growth will be restrained.

Data Center Market Conclusions

Growth in the data center real estate market is still strong in North America, as businesses and consumers continue to demand more and more services. Europe, despite its economic difficulties (and the U.S. isn’t far behind), is nevertheless seeing growth as well. Asia, concomitant with its emerging markets, is the growth leader in the data center sector (meaning certain portions of it—it is a huge region). But these conditions tend to indicate that the data center market overall is simply in its growth stage. Eventually, growth will level out as rising demand meets the ceiling of resource (particularly energy) availability.

Photo courtesy of Martyn Wright

Tuesday, May 15, 2012

#Uptime: Greenpeace wants #Datacenter industry to do more

Analyst says energy efficiency is great but it is not enough

15 May 2012 by Yevgeniy Sverdlik - DatacenterDynamics

 

A Greenpeace analyst commended the data center industry for gains in energy efficiency it had made over the recent years, but said the environmentalist organization wanted the industry to do more.

Gary Cook, senior IT analyst, Greenpeace.

“With all respect to the great amount of progress you’ve made in energy efficient design … we’re asking you to do more,” Gary Cook, senior IT analyst at Greenpeace, said during a keynote address at the Uptime Institute’s annual symposium in Santa Clara, California, Monday.

“You have an important role to play in changing our economy,” he said. The world is becoming increasingly reliant on data centers, and both governments and energy companies are working hard to attract them.

Greenpeace wants data center operators to prioritize clean energy sources for their power and to demand cleaner fuel mix from their energy providers.

Citing figures from a report by the non-profit Climate Group, Cook said the IT industry was responsible for about 2% of global carbon emissions. The same report concluded, however, that applying IT could reduce carbon emissions by 15%.

These applications include examples like telecommuting instead of driving or sending an email instead of delivering a physical letter.

If the data centers the world is already so dependent on, and will become even more dependent on, ran on clean energy, “this could be a huge win,” Cook said. “People in this room could be leading the charge in driving the clean-energy economy.”

To help the data center industry identify clean energy sources, Greenpeace is planning to create a Clean Energy Guide for data centers, Cook said. The guide will evaluate renewable energy choices for key data center regions.

In April, Greenpeace released a report titled “How clean is my cloud”, in which it ranked 15 companies based on their energy policies. Rating categories included the amount of coal and nuclear energy they used, the level of transparency about their energy use, infrastructure-siting policy, energy efficiency and greenhouse-gas mitigation, and the use of renewable energy and clean-energy advocacy.

This was the second such report the organization had put out.

Of the 15 well-known companies, Amazon, Apple and Microsoft were identified as companies relying on dirty fuel. Google, Facebook and Yahoo! received more positive reviews from Greenpeace.

Response from the industry was mixed. Companies that received high marks were proud of the achievement, and companies that did not either declined to comment or questioned the accuracy of the calculations Greenpeace used to arrive at its conclusions.

Cook mentioned Facebook during his keynote at the symposium, saying the company had improved in the environmentalist organization’s eyes. While its Oregon and North Carolina data centers still rely heavily on coal energy, the company’s choice to locate its newest data center in Sweden, where the energy mix is relatively clean, was a turn in the right direction.

In a statement issued in December 2011, Facebook announced a commitment to eventually power all of its operations with clean and renewable energy. Cook said the decision to build in Sweden was evidence that the company’s commitment was real.

Thursday, May 10, 2012

Hydrogen-Powered #DataCenters?

by Jeff Clark

 



Although hydrogen generally doesn’t come up in a discussion of alternative energy sources, it is a topic relevant to cleaner energy use. So, what’s the difference, and what is hydrogen’s potential role in the data center? Apple, for instance—in addition to building a 20 megawatt solar farm—is also planning a large hydrogen fuel cell project at its Maiden, North Carolina, facility. Can hydrogen sate the data center industry’s ever growing power appetite?

Hydrogen: A Storage Medium

To get a good idea of the basic properties of hydrogen, just think of the Hindenburg: the giant German airship (dirigible) that plunged to the Earth in flames in 1937. The airship gained its lift from hydrogen gas: a very light (i.e., not dense), flammable gas. Although hydrogen is plentiful (think water, hydrocarbons and so on), it is seldom found in its diatomic elemental form (H2). So, unlike coal, for example, hydrogen is not a readily obtainable fuel source. It can, however, be used as a means of storing or transporting energy—and this is its primary use. As a DatacenterDynamics interview with Siemens (“Using hydrogen to store energy”) notes, “Hydrogen is a multi-purpose energy carrier… Also, hydrogen as a storage medium is a concept that has already been tested in several domains.”

Hydrogen is thus in some ways like gasoline: it is simply a chemical that certain types of equipment can convert into energy and various byproducts. But it’s the nature of these byproducts that makes hydrogen appealing.

Clean Energy

Under ideal conditions, the burning of hydrogen (think Hindenburg) produces water and heat as its only products, making its use in internal combustion engines preferable (in this sense, at least) to fossil fuels. But even more useful would be a process that converts hydrogen more directly into energy and water—enter the fuel cell. A fuel cell splits hydrogen into protons and electrons, creating an electrical current. The protons and electrons then recombine with oxygen in a catalytic environment to yield water. What more could a data center ask for? (For a simple animation depicting the operating principles of a fuel cell, see the YouTube video Hydrogen Fuel Cell.)
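For reference, the chemistry behind this is the standard textbook pair of half-reactions for a hydrogen fuel cell (shown here generically, not for any particular vendor’s cell):

```latex
\begin{align*}
\text{Anode:}   &\quad \mathrm{H_2 \rightarrow 2H^+ + 2e^-} \\
\text{Cathode:} &\quad \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \rightarrow H_2O} \\
\text{Overall:} &\quad \mathrm{H_2 + \tfrac{1}{2}O_2 \rightarrow H_2O} + \text{electricity} + \text{heat}
\end{align*}
```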

The fuel cell produces electricity as long as hydrogen fuel is supplied to it. Its characteristics from a physical standpoint are nearly ideal: electricity on demand with virtually no production of carbon compounds or other emissions. Because hydrogen can be stored, it represents energy that can be consumed as needed, not necessarily right away (as in the case of solar or wind power). Sounds great—but as always, there are some caveats.

Getting Your Hands on Hydrogen

As mentioned above, hydrogen does not exist naturally in a manner that makes it readily available as a fuel. Practically speaking, it must be produced from other materials, such as water or fossil fuels. The two main processes are electrolysis of water, whereby an electric current splits water molecules into elemental oxygen (O2) and hydrogen (H2), and steam reforming of hydrocarbons. In each case, energy input of some kind is required: either electrical energy to electrolyze water, or the chemical energy already stored in a hydrocarbon. Electrolysis is one means of storing energy from renewable resources like solar or wind, avoiding entirely the need for mined resources like natural gas or coal. Naturally, the efficiency of these processes varies depending on the particulars of the process, the equipment used and so forth.
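Written out, the two production routes come down to the following well-known reactions (steam reforming is normally followed by a water-gas shift step, which is where the carbon dioxide from natural-gas-derived hydrogen ultimately comes from):

```latex
\begin{align*}
\text{Electrolysis:}    &\quad \mathrm{2\,H_2O \xrightarrow{\text{electricity}} 2\,H_2 + O_2} \\
\text{Steam reforming:} &\quad \mathrm{CH_4 + H_2O \xrightarrow{\text{heat, catalyst}} CO + 3\,H_2} \\
\text{Water-gas shift:} &\quad \mathrm{CO + H_2O \rightarrow CO_2 + H_2}
\end{align*}
```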

Alternative processes for generating hydrogen are under investigation—such as biomass production—but these processes do not yet generate hydrogen practically on large scales. Whatever the generation approach, however, the gas must then be stored for transport or for later use.

Storing Hydrogen—A Slight Problem

Hydrogen is a flammable gas (again, think Hindenburg), but it is not necessarily more dangerous than, say, gasoline vapors. The main problem with storing hydrogen is that compared with other fuels—such as gasoline—it contains much less energy per unit volume (even though it contains more energy per unit mass). Practical (in terms of size) storage requires that the hydrogen be compressed, preferably into liquid form for maximum density. And herein lies the main difference relative to liquid fossil fuels: the container not only holds a flammable material, but it is also pressurized, creating its own unique challenges and dangers. Fuel leakage into the atmosphere is more problematic, and some environmentalists even claim that this leakage, were hydrogen used on a large scale, could have harmful repercussions on the environment.
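To put the volume-versus-mass point in rough numbers, the back-of-the-envelope comparison below uses commonly cited approximate heating values and densities; exact figures vary by source, so treat these as illustrative only.

```python
# Rough, commonly cited heating values and densities; exact numbers vary by source.
FUELS = {
    #                              MJ per kg, kg per litre
    "hydrogen (liquid, -253 C)":   (120.0, 0.071),
    "hydrogen (gas, 700 bar)":     (120.0, 0.040),
    "gasoline":                    (44.0,  0.75),
}

print(f"{'fuel':<28}{'MJ/kg':>8}{'MJ/L':>8}")
for name, (mj_per_kg, kg_per_l) in FUELS.items():
    # Energy per litre follows from energy per kg times density.
    print(f"{name:<28}{mj_per_kg:>8.0f}{mj_per_kg * kg_per_l:>8.1f}")
```

By these rough figures, liquid hydrogen carries roughly a quarter of gasoline’s energy per litre even though it carries almost three times the energy per kilogram—which is the storage problem in a nutshell.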

Even in liquid form, hydrogen still lags other fuels in energy stored per unit volume. Thus, when implemented in automobiles, for instance, fuel-cell-powered automobiles lack the range of conventional gasoline-powered vehicles. And then there’s the cost of fuel cell technology, which is currently prohibitive. Claims of falling fuel cell prices are dubious, given the unsustainable subsidies from the federal government and some states (like California).

Hydrogen for Data Centers

Apple’s Maiden data center is the highest-profile facility implementing fuel cell technology. According to NewsObserver.com (“Apple plans nation’s biggest private fuel cell energy project at N.C. data center”), Apple will generate hydrogen from natural gas and will employ 24 fuel cell modules. The project is slated for an output of 4.8 megawatts—much less than the data center’s total power consumption, but still a sizable output.

The use of natural gas to generate hydrogen still creates carbon emissions, so this project won’t satisfy everyone (although whether carbon dioxide is as bad as its current politicized reputation would suggest is hardly certain). Nevertheless, like Apple’s large solar farm at the same site, this hydrogen fuel cell project will be a good test of the practicability of hydrogen in the context of data centers.

Hydrogen: Will Most Companies Care?

Jumping into the power generation arena is not something most companies (particularly small and midsize companies) can afford to do—let alone have an interest in doing. So, pending availability of some affordable, prepackaged hydrogen fuel cell system, don’t expect most companies to deploy such a project at their data center sites. Currently, large companies like Apple and Google are among the few dabbling in energy in addition to their primary business. Most companies will, no doubt, prefer to simply plug their data centers into the local utility and let someone else worry about where the energy comes from—these companies wish to focus on their primary business interests.

Conclusion: What Exactly Does Hydrogen Mean for the Data Center?

Hydrogen fuel cells offer some major improvements in controlling emissions, and hydrogen delivers some benefits as a means of storing and transporting energy. But fuel cell technology lacks the economic efficiency of other, traditional power sources, so it has a ways to go before it can attain the status of coal or nuclear, or even smaller-scale sources like solar. Furthermore, the applicability of hydrogen (as such) to data centers is unclear. Power backup seems the most likely present candidate for application of hydrogen and fuel cells. In time, Apple’s project may demonstrate the practicality of electricity via natural gas as another possibility. Until then, however, the industry must wait to see whether this technology matures—and becomes economically feasible.

Photo courtesy of Zero Emission Resource Organisation

Wednesday, May 9, 2012

Open #DataCenter Alliance Announces that UBS Chief Technology Officer Andy Brown will Keynote at Forecast 2012

Rackspace CTO John Engates to Deliver Openstack Industry Perspective, New Big Data Panel Joins Forecast 2012 Agenda

 

 

 

 

 

PORTLAND, Ore., May 09, 2012 (BUSINESS WIRE) -- The Open Data Center Alliance (ODCA) today announced that UBS Chief Technology Officer Andy Brown will be a keynote speaker at the Open Data Center Alliance Forecast 2012 event. Brown plans to address enterprise requirements for the cloud and comment on the progress of industry delivery of solutions based on the ODCA usage models. In his role as chief technology officer, Brown is responsible for advancing the investment bank's group architecture, simplifying the application and infrastructure landscape, and improving the quality of UBS's technical solutions.

In related news, the organization also announced the addition of Rackspace Chief Technology Officer, John Engates, to the event's agenda with the delivery of an Industry Perspective session on the role of industry standard delivery of cloud solutions. Engates will focus his discussion on the role of Openstack in cloud solution delivery and its alignment with the objectives of open solutions required by ODCA. A Big Data panel was also added to the agenda featuring panelists from Cloudera, Intel, SAS and Teradata following last week's announcement of a new ODCA Data Services workgroup. The organization also announced a host of executive IT experts to be featured in panels at the event. Held on June 12, in conjunction with the 10th International Cloud Expo in New York City, ODCA Forecast 2012 will bring together hundreds of members of the Alliance, industry experts and leading technology companies to showcase how ODCA usage model adoption can accelerate the value that cloud computing represents to organizations through increased efficiency and agility of IT services.

"The Forecast agenda features a who's who of enterprise IT leaders, all of whom are assembling to share their best insights in deploying cloud services," said Marvin Wheeler, ODCA chair. "Adding CTOs on the caliber of Andy Brown and John Engates to our agenda underscores the high regard that both the organization and our first event are generating. For organizations considering cloud deployments in 2012, this is a rare opportunity to learn from their peers and see the latest in solutions advancements."

ODCA Forecast 2012 will feature sessions on top issues associated with cloud deployment including security, service transparency, and industry standard delivery of solutions. The Big Data panel complements planned panels featuring the first public discussions of charter and progress of the organization's recently formed Data Services workgroup. With leading experts from enterprise IT, service providers and the data center industry on hand to discuss the top issues and opportunities offered by cloud computing, attendees will have a rare opportunity to network with leading thinkers and gain critical knowledge to help shape their own cloud deployments. Alliance solutions provider members will also showcase products that have been developed within the guidelines of Alliance usage models.

Leading managers from some of the largest global IT shops have formed the group of panelists for the event, and today the Alliance is announcing several new panelists who will share their expertise across areas impacting the cloud, including security, management and regulation. The Cloud Security Panel will now feature Dov Yoran, a founding member of the Cloud Security Alliance. Ray Solnik, president of Appnomic Systems, a leading provider of automated Cloud IT performance management solutions has joined the Cloud Management Panel. The Cloud Regulation Panel is pleased to welcome Gordon Haff, senior cloud strategy marketing and evangelism manager with Red Hat. Haff will also be part of the Cloud Software Panel. Jeff Deacon, chief cloud strategist with Terramark, will be part of the Service Provider Panel. The Cloud Hardware Panel will feature John Igoe, executive director, development engineering for Dell. Other new panelists could be found at www.opendatacenteralliance.org/forecast2012 .

Forecast 2012 is supported by the following sponsors: Gold Sponsors Dell, Hewlett Packard and Intel Corp, silver sponsor Red Hat, Pavilion Sponsor Citrix and McAfee and breakfast sponsor Champion Solutions Group. Media and collaborating organization sponsors include Cloud Computing Magazine, Cloud Security Alliance, CloudTimes, the Distributed Management Task Force (DMTF), the Green Grid, InformationWeek, Open Compute Project, Organization for the Advancement of Structured Information Standards (OASIS), SecurityStockWatch.com and Tabor Communications.

All Forecast attendees will also receive a complimentary pass to International Cloud Expo as part of their ODCA Forecast 2012 registration representing a tremendous value for Forecast attendees. For more information on the Alliance, or to register for ODCA Forecast 2012, please visit www.opendatacenteralliance.org .

About The Open Data Center Alliance

The Open Data Center Alliance is an independent IT consortium comprised of global IT leaders who have come together to provide a unified customer vision for long-term data center requirements. The Alliance is led by a twelve-member Board of Directors which includes IT leaders BMW, Capgemini, China Life, China Unicom, Deutsche Bank, JPMorgan Chase, Lockheed Martin, Marriott International, Inc., National Australia Bank, Terremark, Disney Technology Solutions and Services and UBS. Intel serves as technical advisor to the Alliance.

In support of its mission, the Alliance has delivered the first customer requirements for cloud computing documented in eight Open Data Center Usage Models which identify member prioritized requirements to resolve the most pressing challenges facing cloud adoption. Find out more at www.opendatacenteralliance.org .

SOURCE: The Open Data Center Alliance

Tuesday, May 8, 2012

Operations-as-a-Service (or IaaS + PaaS + SMEs)

Guest Post from Richard Donaldson

 

I’d been holding out a bit on writing this as it really is a synthesis of ideas (aren’t they all) with special mention of dialogue with Jeffrey Papen of Peak Hosting (www.peakwebhosting.com)…

I’ve been collaborating and speaking extensively with Jeffrey on the next phase of “hosting” since we are now moving beyond the hype cycle of “Cloud Computing” (see previous post on “The end of the Cloud Era”).  The community at large (and people in general) love the idea of simple, bite sized “solutions” with pithy and “sexy” naming conventions (think <30sec sound bites) and that was the promise/expectation around “the cloud” as it was popularized – a magic all in one solution whereby you just add applications and the “cloud” will do the rest. Yet, the promise never quite met expectations as the “cloud” really ended up being an open standards evolution of “virtualization” – nothing wrong with that, just not the “all in one” solution that people really wanted the cloud to be (ps – all in one refers to the aforementioned of applications just being pushed thru APIs to the “cloud” and the “cloud” manages all underlying resources).

So, as the Cloud Hype dissipates (love the metaphor), we are sorta back to the same basic elements that make up Infrastructure – Datacenters, Compute (IT), Communications (switches/routers), and Software that manages it all (virtualization, cloud, etc), all accessible through APIs yet to be built. Put another way, we are coming full circle and back to centralized, on-demand computing that needs one more element to make it all work – Subject Matter Experts (SMEs).

I was inspired to write this today when I saw this post from Hitachi: http://www.computerworld.com/s/article/9226920/Hitachi_launches_all_in_one_data_center_service - “Japanese conglomerate Hitachi on Monday launched a new data center business that includes everything from planning to construction to IT support.  Hitachi said its new “GNEXT Facility & IT Management Service” will cover consulting on environmental and security issues, procurement and installation of power, cooling and security systems, and ongoing hardware maintenance. It will expand to include outsourcing services for software engineers and support for clearing regulatory hurdles and certifications.”  This is the comprehensive “build to suit” solution the market has been seeking since the cloud – it includes everything to get your infrastructure building blocks right and is provided as a service – but what do we call this service????

How about “Operations-as-a-Service“!!


OaaS pulls together the elements in IaaS + PaaS + SMEs.  It outsources the “plumbing” to those that can make it far more cost effective thru economies of scale.  Sure, there are a select few companies who will do this all in house: Google, eBay, Microsoft, Amazon, Apple (trying), and of course, Zynga.  Yet, these companies are at such massive scale that it makes sense – and yet, they even have excess (at least they should) capacity which is why AWS was born in the first place and we are now seeing Zynga open up to allow gamers to use their platform (see: http://www.pocketgamer.biz/r/PG.Biz/Zynga+news/news.asp?c=38455).  Yet these are the exceptions and not the rule.

The rest of the world should be, and is, seeking comprehensive, end-to-end Operations as a Service provided by single vendors. That doesn’t preclude the marketplace from buying discrete parts of OaaS individually; however, the dominant companies that will begin to emerge in this next decade will seek to add more and more of the OaaS solution set to their product lists, thereby catalyzing a lot (I mean a lot) of consolidation.

I will be following up this blog with a more detailed look at how this concept is playing out, but in the meantime I would very much like to hear feedback on this topic – is the world looking for OaaS?

rd

Original Post: http://rhdonaldson.wordpress.com/2012/05/07/operations-as-a-service-or-iaas-p...

Friday, May 4, 2012

Patent Wars may Chill #DataCenter Innovation

May 4, 2012 by mmanos

Yahoo may have just sent a cold chill across the data center industry at large and begun a stifling of data center innovation. In a May 3, 2012 article, Forbes did a quick and dirty analysis of the patent wars between Facebook and Yahoo. It’s a quick read, but it shines an interesting light on the potential impact something like this can have across the industry. The article, found here, highlights that:

In a new disclosure, Facebook added in the latest version of the filing that on April 23 Yahoo sent a letter to Facebook indicating that Yahoo believes it holds 16 patents that “may be relevant” to open source technology Yahoo asserts is being used in Facebook’s data centers and servers.

While these types of patent infringement cases happen all the time in the corporate world, this one could have far greater ramifications on an industry that has only recently emerged into the light of sharing ideas. While details remain sketchy at the time of this writing, it’s clear that the specific call-out of data centers and servers is an allusion to more than just server technology or applications running in their facilities. In fact, there is a specific call-out of data centers and infrastructure. With this revelation one has to wonder about its impact on the Open Compute Project, which is being led by Facebook.

It leads to some interesting questions. Has their effort to be more open in their designs and approaches to data center operations and design led them to a position of legal risk and exposure? Will this open the flood gates for design firms to become more aggressive around functionality designed into their buildings? Could companies use their patents to freeze competitors out of colocation facilities in certain markets by threatening colo providers with these types of lawsuits? Perhaps I am reaching a bit, but I never underestimate litigious fervor once the proverbial blood gets in the water. In my own estimation, there is a ton of “prior art”, to use an intellectual property term, out there to settle this down long term, but the question remains: will firms go through that lengthy process to prove it out, or opt to re-enter their shells of secrecy?

After almost a decade of fighting to open up the collective industry to share technologies, designs, and techniques, this is a very disheartening move. The general glasnost that has descended over the industry has led to real and material change. We have seen the mental shift of companies move from measuring facilities purely around “uptime” to measuring them around efficiency as well. We have seen more willingness to share best practices and find like-minded firms to share in innovation. One has to wonder: will this impact the larger “greening” of data centers in general? Without that kind of pressure, will people move back to what is comfortable? Time will certainly tell.

I was going to make a joke about the fact that, until time proves this out, I may have to “lawyer up” just to be safe. It’s not really a joke, however, because I’m going to bet other firms do something similar, and that, my dear friends, is how the innovation will start to freeze.

Thursday, May 3, 2012

Power Dense Data Centers Seek Thermal Controls

By Jeff Klaus, Intel. Jeff Klaus is director, Intel Data Center Manager (DCM) Solutions, and leads a global team that designs, builds, sells and supports Intel DCM.

Your data center has a maximum power capacity that must cover both server and IT device power consumption and thermal cooling requirements. Balancing these two rivaling demands has become more difficult in recent years as data center power consumption has increased from an average of 500 watts per square foot to today’s average of 1,500 watts per square foot. The thermal effect of more high-performance-density (HPD) hardware has frequently led to greater data center heat production. One way to address this increase is to achieve a more efficient thermal cooling infrastructure[1] using the following emerging best practices for thermal monitoring and control.

Build Real-time Thermal Data Center Maps

Real-time thermal sensors on every server platform enable building real-time thermal maps of the data center. Real-time monitoring of power and thermal events in individual servers and racks, in addition to sections of rooms, enables you, as the manager, to proactively identify failure situations and then take action based on the specific situation, be it over-cooling, under-cooling, hot spots and/or computer room air conditioning (CRAC) failures. These maps can then be used to create thermal profiles that record and report thermal trends and events for long-term planning as well.

By monitoring and controlling CRAC supply temperatures based on real-time data center ambient inlet temperature, you can further identify hotspots. Both over-cooling and under-cooling are frequently due to the lack of information regarding actual ambient temperatures at the rack and room level. Further, a real-time thermal profile map reports the thermal trends needed to justify how much to increase operational temperatures for a potentially significant reduction in cooling energy costs. In several pilot projects with data centers located around the world, the Intel Data Center Manager (DCM) Solutions team has witnessed that increasing energy efficiency and raising the temperature based on accurate readings can net $50,000 per year in savings for every degree the data center set point is raised.

Actual v. Theoretical Data

Much of today’s available power and thermal data is based on estimates or manufacturers’ power ratings (nameplates), not on actual consumption. This data can deviate from actual consumption by as much as 40 percent. Real-time rack- and room-level thermal mapping identifies cooling efficiency and enables you to activate the appropriate cooling action sooner rather than later. Sun Microsystems reported that data center managers can save four percent in energy costs for every degree of upward change in the temperature set point.[2] It is also reported that cooling can account for 40-50 percent of the total amount of energy used in data centers. Improving efficiencies in this area has a significant impact on overall operating costs.

Case Study

Microsoft wanted to find out how much money could be saved by raising the cooling set point in the data center. The company tested the impact of slightly higher temperatures in its Silicon Valley data center. “We raised the floor temperature two to four degrees, and saved $250,000 in annual energy costs,” said Don Denning, Critical Facilities Manager at Lee Technologies, which worked with Microsoft on the project.[3]

When CIOs and their facilities teams wrestle with their HPD data centers’ rivaling demands for more server power and greater thermal cooling efficiencies, ensure these best practices are part of your thermal controls planning.
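As a rough illustration of how the rule of thumb above translates into planning numbers, the sketch below applies the cited four-percent-per-degree figure to an assumed annual energy bill. Both the percentage and the example bill are illustrative assumptions; actual savings depend on the facility and should be validated against measured data.

```python
def setpoint_savings(annual_energy_cost: float,
                     degrees_raised: float,
                     savings_per_degree: float = 0.04) -> float:
    """Rule-of-thumb savings from raising the cooling set point.

    savings_per_degree reflects the ~4% per degree figure cited above;
    it is a planning heuristic, not a measured result for your facility.
    """
    return annual_energy_cost * savings_per_degree * degrees_raised

# Example: a $2M annual energy bill and a 3-degree increase.
print(f"Estimated annual savings: ${setpoint_savings(2_000_000, 3):,.0f}")
```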

Wednesday, April 25, 2012

Cold Aisle Containment System Performance Simulation

By: Michael Potts
April 25th, 2012

In an attempt to reduce inlet temperatures, BayCare Health System in Tampa, Florida, installed a cold aisle containment system (CACS) in a section of its data center. Results were mixed, with temperatures improving in some areas but actually increasing in others. In order to understand these results, airflow management solutions provider Eaton simulated the data center’s performance using Future Facilities’ 6SigmaDC computational fluid dynamics (CFD) software. The simulation’s results matched those of the physical data center, which established that CFD software could be used to diagnose the data center’s cooling problems.


Future Facilities Website: http://www.futurefacilities.com/

This paper from Eaton details the results of its CFD diagnosis of the BayCare facility, describing the process of analysis in depth, as well as offering solutions for the cooling infrastructure. First, the process of cold aisle containment installation is outlined, offering details from both the simulation and the study of the physical data center. Next, it explains Eaton’s performance simulation and measurement of the facility’s behavior after installation, mimicking the center’s airflow, device models and locations, as well as temperatures. Lastly, a framework for the full diagnosis is presented, offering conclusions about the unexpected temperature increases.

Learn the full process of data center diagnosis in this detailed simulation. Click here to download this paper from Eaton on the diagnosis of the incorrectly performing cold aisle containment system at BayCare Health System.


Data Center Executives Must Address Many Issues in 2012

Analyst(s): Mike Chuba


Seemingly insatiable demand for new workloads and services, at a time when most budgets are still constrained, is the central challenge for most data center executives. We look at the specific areas they identified going into 2012.

Overview

Data center executives are caught in an awkward phase of the slow economic recovery, as they try to support new initiatives from the business without a commensurate increase in their budgets. Many will need to improve the efficiency of their workloads and infrastructure to free up money to support these emerging initiatives.

Key Findings

  • Data center budgets are not growing commensurate with demand.
  • Expect an 800% growth in data over the next five years, with 80% of it being unstructured.
  • Tablets will augment desktop and laptop computers, not replace them.
  • Data centers can consume 100 times more energy than the offices they support.
  • The cost of power is on par with the cost of the equipment.

Recommendations

  • It is not the IT organization's job to arrest the creation or proliferation of data. Rather, data center managers need to focus on storage utilization and management to contain growth and minimize floor space, while improving compliance and business continuity efforts.
  • Focus short term on cooling, airflow and equipment placement to optimize data center space, while developing a long-term data center design strategy that maximizes flexibility, scalability and efficiency.
  • Put in place security, data storage and usage guidelines for tablets and other emerging form factors in the short term, while deciding on your long-term objectives for support.
  • Use a business impact analysis to determine when, where and why to adopt cloud computing.

What You Need to Know

New workloads that are key to enterprise growth, latent demand for existing workloads as the general economy recovers, increased regulatory demands and the explosion in data growth all pose challenges for data center executives at a time when the budget is not growing commensurate with demand. Storage growth continues unabated. It is not unusual to hear sustained growth rates of 40% or more per year. To fund this growth, most organizations will have to reallocate their budgets from other legacy investment buckets. At the same time, they must focus on storage optimization to manage demand, availability and efficiency.

Analysis

"Nothing endures but change" is a quote attributed to Heraclitus, who lived over 2,500 years ago. However, his words seem applicable to the data center executive today. Pervasive mobility, a business environment demanding access to anything, anytime, anywhere and the rise of alternative delivery models, such as cloud computing, have placed new pressures on the infrastructure and operations (I&O) organization for support and speed. At the same time, a fitful economic environment has not loosened the budget purse strings sufficiently to fund all the new initiatives that many I&O organizations have identified.

This challenge of supporting today's accelerated pace of change, and delivering the efficiency, agility and quality of services their business needs to succeed, was top of mind for the more than 2,600 data center professionals gathered in Las Vegas on 5 December to 8 December 2011 for the annual Gartner U.S. Data Center Conference. It was a record turnout for this annual event, now in its 30th year. Our conference theme, "Heightened Risk, Unbounded Opportunities, Managing Complexity in the Data Center," spoke to the difficult task our attendees face while addressing the new realities and emerging business opportunities at a time when the economic outlook is still uncertain. The data center is being reshaped, as the transformation of IT into a service business has begun.

Our agenda reflected the complex, interrelated challenges confronting attendees. Attendance was particularly strong for the cloud computing and data center track sessions, followed by the storage, virtualization and IT operations track. The most popular analyst-user roundtables focused on these topics, and analysts in these spaces were in high demand for one-on-one meetings. We believe that the best-attended sessions and the results of the surveys conducted at the conference represent a reasonable benchmark for the kinds of issues that organizations will be dealing with in 2012.

We added a new track this year focused on the impact of mobility on I&O. The rapid proliferation of smart devices, such as tablets and smartphones, is driving dramatic changes in business and consumer applications and positively impacting bottom-line results. Yet, I&O plays a critical role in supporting these applications rooted in real-time access to corporate data anytime and anywhere and in any context, while still providing traditional support to the existing portfolio of applications and devices. As the next billion devices wanting access to corporate infrastructure are deployed, I&O executives have an opportunity to exhibit leadership and innovation — from contributing to establishing corporate standards, to anticipating the impact on capacity planning, to minimizing risk.

Electronic interactive polling is a significant feature of the conference, allowing attendees to get instantaneous feedback on what their peers are doing. The welcome address posed a couple of questions that set the tone for the conference. Attendees were first asked how their 2012 I&O budgets compared with their previous years' budgets (see Figure 1).

Figure 1. Budget Change in Coming Year vs. Current Year Spending

Source: Gartner (January 2012)

Comparing year-over-year data, we find nearly identical proportions of respondents reporting budget growth (42%) and reduced budgets (26% vs. 25%). The most recent results reflect a gradually improving, but still challenging, economic climate. While hardly robust, it is a marked improvement over the somber mood that most end-user organizations were in at the end of 2008 and entering 2009. Subsequent track sessions focused on cost optimization strategies and best practices were well attended throughout the week.

However, modest budget changes may not be enough to sustain current modes of IT operations, let alone support emerging business initiatives. Organizations need to continue looking closely at improving efficiency and pruning legacy applications that sit on the wrong side of the cost-benefit equation, to free up budget and lay the groundwork for supporting emerging workloads and applications.

The second question in the opening session asked attendees to identify the most significant data center challenge they will face in 2012, compared with previous years' responses (see Figure 2; note that the voting options changed from year to year).

Figure 2. Most Significant Data Center Challenge in Coming Year (% of Respondents)

Source: Gartner (January 2012)

What was interesting was the more balanced distribution of responses across the options. For those chartered to manage the storage environment, managing storage growth is an extremely challenging issue.

Top Five Challenges

NO. 1: DATA GROWTH

Data growth continues unabated, leaving IT organizations struggling with how to fund the necessary storage capacity, how to manage these devices once they can afford them, and how to archive and back up the data. Managing and storing massive volumes of complex data to support real-time analytics is increasingly becoming a requirement for many organizations, driving the need for not just capacity, but also performance. New technologies, architectures and deployment models can enable significant changes in storage infrastructure and management best practices now and in the coming years, and can help address these issues. We believe that it is not the job of IT to arrest the creation or proliferation of data. Rather, IT should focus on storage utilization and management to contain growth and minimize floor space, while improving compliance and business continuity efforts.

Tactically, prioritize deleting data that has outlived its usefulness, and exploit technologies, such as deduplication and compression, that reduce redundant data.
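As a rough illustration of the second half of that recommendation, the sketch below groups files by content hash to find redundant copies. It is a toy example rather than how enterprise deduplication appliances actually work (those operate at the block or sub-file level), and the directory path is hypothetical.

```python
# Toy illustration of content-based deduplication: files with identical
# contents are grouped by SHA-256 hash; all but one copy per group are
# redundant. Real systems deduplicate at block level and must handle
# metadata, references and hash collisions.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict:
    """Group files under `root` by the SHA-256 digest of their contents."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

for digest, paths in find_duplicates("/data/archive").items():  # hypothetical path
    # Everything after the first path is a redundant copy that a
    # deduplicating store would replace with a reference.
    print(digest[:12], [str(p) for p in paths])
```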

NO. 2: DATA CENTER SPACE, POWER AND/OR COOLING

It is not surprising that data center space, power and/or cooling was identified as the second biggest challenge by our attendees. Data centers can consume 100 times more energy than the offices they support, which draws more budgetary attention in uncertain times. During the past five years, the power demands of equipment have grown significantly, imposing enormous pressure on the capacity of data centers built five or more years ago. Data center managers are grappling with cost, technology, environmental, people and location issues, and are constantly looking for ways to deliver a highly available, secure, flexible server infrastructure as the foundation for the business's mission-critical applications. On top of this is the growing pressure to create a green environment. Our keynote interview with Frank Frankovsky, director of hardware design and supply chain at Facebook, drew considerable interest because of the novel approaches that company was taking to satisfy its unusual computing requirements.

We recommend that data center executives focus short term on cooling, airflow and equipment placement to optimize their data center space, while developing a long-term data center design strategy that maximizes flexibility, scalability and efficiency. We believe that the decline in priority shown in the survey results reflects the fact that organizations have been focusing on improved efficiency of their data centers. Changes are being implemented and results are being achieved.
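One common way to quantify that kind of facility-level efficiency improvement, although it is not cited in the survey results above, is power usage effectiveness (PUE): total facility energy divided by IT equipment energy. The figures in this minimal sketch are invented for illustration.

```python
# Illustrative PUE (power usage effectiveness) calculation.
# PUE = total facility energy / IT equipment energy; values closer to 1.0
# mean less energy is spent on cooling, power distribution and other
# overhead. All numbers below are invented for the example.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

before = pue(total_facility_kwh=2_000_000, it_equipment_kwh=1_000_000)  # 2.0
after = pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000)   # 1.5
print(f"PUE before airflow and cooling work: {before:.2f}")
print(f"PUE after: {after:.2f}")
# Moving from 2.0 to 1.5 halves the overhead energy per unit of IT load.
```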

NO. 3: PRIVATE/PUBLIC CLOUD STRATEGY

Developing a private/public cloud strategy was the third most popular choice as the top priority, mirroring the results we have seen in Gartner's separate surveys of CIOs' top technology priorities. With many organizations well on their way to virtualized infrastructures, many are now either actively moving toward, or being pressured to move toward, cloud-based environments. Whether the target is public, private or some hybrid form of cloud, attendees' questions focused on where to go, how to get there, and how fast to move toward cloud computing.

We recommend that organizations develop a business impact analysis to determine when, where and why to adopt cloud computing. Ascertain where migrating or enhancing applications can deliver value, and look for the innovative applications that could benefit from unique cloud capabilities.

NO. 4 AND NO. 5: BUSINESS NEEDS

"Modernizing of our legacy applications" was fourth as the greatest challenge, and "Identifying and translating business requirements" was fifth and, in many ways, both relate to similar concerns. Meeting business priorities; aligning with shifts in the business; and bringing much-needed agility to legacy applications that might require dramatic shifts in architectures, processes and skill sets were common concerns among Data Center Conference attendees, in general.

We believe virtualization's decline as a top challenge reflects attendees' comfort level with x86 server virtualization; most of this conference's attendees are well down that path, primarily with VMware but increasingly with other vendors as well. Our clients see the private cloud as an extension of their virtualization efforts; thus, interest in virtualization isn't waning, but is evolving toward private cloud computing. Now is a good time to evaluate your virtualization "health": processes, management standards and automation readiness. For many organizations, it is an appropriate time to benchmark their current virtualization approach against competitors and alternative providers, and to broaden their virtualization initiatives beyond servers and across the portfolio: desktop, storage, applications and more.

This year promises to be one of further market disruption and rapid evolution. Vendor strategies will be challenged, and new paradigms will continue to emerge. To stay ahead of the industry curve, plan to join your peers at the 2012 U.S. Data Center Conference, 3 to 6 December in Las Vegas.