
Sunday, April 7, 2013

Where is the open #datacenter facility API ?


For some time the Datacenter Pulse top 10 has featured an item called ‘Converged Infrastructure Intelligence’. The 2012 presentation mentioned:
  • Treat the DC infrastructure as an IT system:
    - Converge the infrastructure instrumentation and control systems
    - Connect them into the IT systems for ultimate control
  • Standardize connections and protocols to connect components
As datacenter infrastructure becomes a more complex system and the whole datacenter stack needs to run more efficiently, the need arises to integrate the layers of the stack and make them ‘talk’ to each other.
This is shown in the DCP Stack framework as the need for ‘integrated control systems’, going up from the (facility) real-estate layer to the (IT) platform layer.
So if we have the ‘integrated control systems’, what would we be able to do?
We could:
  • Influence behavior (you can’t control what you don’t know); application developers can be given insight into their power usage when they write code, for example. This is one of the needed steps towards more energy-efficient application programming. It will also provide more insight into the complete energy flow, backed by more detailed measurements.
  • Design for lower-Tier datacenters; when a failure is imminent, signals from the facility equipment can trigger the IT systems to move workloads to other datacenter locations.
  • Design close-control cooling systems that trigger on real CPU and memory temperatures rather than on room-level temperature sensors. This could eliminate hot spots and focus cooling energy consumption on the spots where it is really needed. It could even make the cooling system aware of an oncoming throttle-up from the IT systems.
  • Optimize datacenters for the smart grid. The increase of sustainable power sources like wind and solar energy increases the need for more flexibility in energy consumption. Some may think this only applies when you introduce onsite sustainable power generation, but the general availability of sustainable power sources will also affect the energy market. In the end, the ability to be flexible will lead to lower energy prices. Real supply-and-demand management in the datacenter requires integrated information and control across the facility and IT layers of the stack.
The gap between IT and facility exists not only between IT and facility staff but also between their information systems. Closing the gap between people and systems will make the datacenter more efficient and more reliable, and it opens up a whole new world of possibilities.
This all leads to something that has been on my wish list for a long, long time: the datacenter facility API (application programming interface).
I’m aware that we have BMS systems supporting open protocols like BACnet, LonWorks and Modbus, and that is great. But they are not ‘IT ready’. I know some BMS systems support integration using XML and SOAP but that is not based on a generic ‘open standard framework’ for datacenter facilities.
So what does this API need to be?
First, it needs to be an ‘open standard’ framework: publicly available, with no rights restrictions on the usage of the API framework.
This will avoid vendor lock-in. History has shown us, especially in the area of SCADA and BMS systems, that our vendors come up with many great new proprietary technologies. While I understand that the development of new technology takes time and a great deal of money, locking me in to your specific system is not acceptable anymore.
A vendor proprietary system in the co-lo and wholesale facility will lead to the lock-in of co-lo customers. This is great for the co-lo datacenter owner, but not for its customer. Datacenter owners, operators and users need to be able to move between facilities and systems.
Every vendor that uses the API framework needs to use the same routines, data structures and object classes. Standardized. And yes, I used the word ‘Standardized’. So it’s a framework we all need to agree upon.
These two requirements are the big difference between what is already available and what we actually need. It should not matter if you place your IT systems in your own datacenter or with co-lo provider X, Y or Z. The API will provide the same information structure and layout anywhere…
(While it would be good to have the BMS market disrupted by open source development, having an open standard does not mean all the surrounding software needs to be open source. Open standard does not equal open source and vice versa.)
It needs to be IT ready. An IT application developer needs to be able to talk to the API just like they would to any other IT application API; so no strange facility protocols. Talk IP. Talk SOAP or, better, REST. Talk something that is easy to understand and implement for the modern-day application developer.
All this openness and ease of use may be scary for vendors and even end users, because many SCADA and BMS systems are famous for relying on ‘security through obscurity’. All the facility-specific protocols are notoriously hard to understand and program against. So if you don’t want to lose this false sense of security as a vendor, give us a ‘read-only’ API. I would be very happy with just this first step…
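To make ‘talk REST’ concrete, here is a minimal sketch of what calling such a read-only facility API could look like from ordinary IT code; the host name, path, fields and token handling are assumptions for illustration, not part of any existing standard.

```python
import requests  # plain HTTP + JSON, no facility-specific protocols

# Hypothetical read-only facility API endpoint (illustrative only).
BASE_URL = "https://facility.example.com/api/v1"

def get_rack_telemetry(rack_id: str, token: str) -> dict:
    """Fetch rack-level telemetry from the (hypothetical) facility API."""
    response = requests.get(
        f"{BASE_URL}/racks/{rack_id}/telemetry",
        headers={"Authorization": f"Bearer {token}"},  # read-only scoped token
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"temperature_c": 24.3, "power_kw": 4.1, ...}

if __name__ == "__main__":
    print(get_rack_telemetry("rack-42", token="demo-token"))
```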
So what information should this API be able to feed?
Most information would be nice to have in near real time:
  • Temperature at rack level
  • Temperature outside of the building
  • kWh at rack level; other energy-related metrics would be nice too
  • Warnings/alarms at rack and facility level
  • kWh price (this can be pulled from the energy market, but that doesn’t include the full datacenter kWh price, such as a PUE markup)
(all if and where applicable and available)
The information owner would need features like access control for rack-level information exchange, and the ability to tweak the real-time features; we don’t want to create information streams that are unmanageable in terms of security, volume and amount.
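As a purely hypothetical illustration of the data such an API could feed, the sketch below models the fields listed above as a simple data structure; every field name and unit here is an assumption, not part of any published standard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical rack-level payload for an open facility API (illustrative only).
@dataclass
class RackTelemetry:
    rack_id: str
    temperature_c: float                    # temperature at rack level
    outside_temperature_c: float            # temperature outside the building
    energy_kwh: float                       # energy consumed at rack level
    kwh_price: Optional[float] = None       # full datacenter kWh price incl. a PUE markup, if exposed
    alarms: List[str] = field(default_factory=list)  # rack/facility level warnings and alarms

example = RackTelemetry(
    rack_id="rack-42",
    temperature_c=24.3,
    outside_temperature_c=11.8,
    energy_kwh=3.9,
    kwh_price=0.21,
    alarms=["UPS-2 on battery"],
)
print(example)
```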
So what do you think the API should look like? What information exchange should it provide? And more importantly: who should lead the effort to create the framework? Or… do you believe the Physical Datacenter API framework is already here?

Original Article: http://datacenterpulse.org/blogs/jan.wiersma/where_open_datacenter_facility_api

Tuesday, August 21, 2012

A Greener Field: SDNs: Love 'em or Leave 'em?

Are Software Defined Networks (SDN) on your short list yet? Two recent surveys suggest it depends on where you work. I say that, given our recent experiences with SDN, within five years everyone will adopt or have definitive plans to adopt SDN technology. Here’s why:

SDNs, for those who’ve been sleeping with the pinecones of late, move intelligence previously locked within proprietary switching and routing hardware into open software. To put that in “networkese,” we separate the control plane from the forwarding plane using a protocol like OpenFlow. As such, SDN delivers on eight benefits:

  • Agility. OpenFlow-based SDNs create flexibility in how the network is used, operated, and sold. The software that governs it can be written by enterprises and service providers using ordinary software environments.
  • Speed. SDNs promote rapid service introduction through customization, because network operators can implement the features they want in software they control, rather than having to wait for a vendor to put it into plans for their proprietary products.
  • Cost Savings. Software Defined Networking lowers operating expenses and results in fewer errors and less network downtime because it enables automated configuration of the network and reduces manual configuration.
  • Better Management. OpenFlow-based SDN enables virtualization of the network, and therefore the integration of the network with computing and storage. This allows the entire IT operation to be governed more sleekly with a single viewpoint and toolset.
  • Planning. OpenFlow can be easily integrated with computing for resource management and maintenance.
  • Focus. OpenFlow-based SDN can better align the network with business objectives.
  • More Competition. As a standard way of conveying flow-table information to the network devices, it fosters open, multi-vendor markets.
  • Table Conversation. Instead of saying you work in networking at the next party, you can now say “I program the enterprise.” Now, how cool is that?
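To make the control-plane/forwarding-plane split concrete, here is a deliberately simplified sketch (my own toy model, not real OpenFlow and not any vendor's API) in which a central controller written in ordinary software pushes match-action rules into a switch's flow table:

```python
# Toy model of SDN control/forwarding separation (not a real OpenFlow implementation).

class FlowTable:
    """Forwarding plane: applies match-action rules installed by the controller."""
    def __init__(self):
        self.rules = []  # list of (match_fn, action) pairs

    def install(self, match_fn, action):
        self.rules.append((match_fn, action))

    def forward(self, packet: dict) -> str:
        for match_fn, action in self.rules:
            if match_fn(packet):
                return action
        return "drop"  # default action when nothing matches

class Controller:
    """Control plane: policy lives in software you control, not in switch firmware."""
    def __init__(self, switches):
        self.switches = switches

    def steer_vlan(self, vlan_id: int, out_port: str):
        for switch in self.switches:
            switch.install(lambda pkt, v=vlan_id: pkt.get("vlan") == v, f"output:{out_port}")

switch = FlowTable()
Controller([switch]).steer_vlan(vlan_id=100, out_port="port-7")
print(switch.forward({"vlan": 100, "dst": "10.0.0.5"}))  # -> output:port-7
print(switch.forward({"vlan": 200, "dst": "10.0.0.9"}))  # -> drop
```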

It’s no wonder, then, that in a recent survey of service providers, Infonetics Research found that 80 percent of respondents are including OpenFlow in their purchase considerations. Rapid delivery of new services is essential for service providers, and it’s a key benefit of SDN.

"While many uncertainties surround SDNs due to their newness, our survey confirms that the programmable network movement is real, with a good majority of service providers worldwide considering or planning purchases of SDN technologies to simplify network provisioning and to create services and virtual networks in ways not previously possible,” notes Michael Howard, co-founder and principal analyst for carrier networks at Infonetics Research.

On the more “curmudgeonly” side of life you have Mike Fratto noting that enterprises haven’t yet gone gaga over the whole SDN thing. “Enterprises aren't really ready for SDN quite yet, as the results from a recent InformationWeek survey of 250 IT professionals showed. Some 70% of respondents said they weren't even going to start testing SDN for at least a year.”

So which is it for you? Are you a service provider kind of guy or an enterprise kind of gal?

Wednesday, May 9, 2012

Open #DataCenter Alliance Announces that UBS Chief Technology Officer Andy Brown will Keynote at Forecast 2012

Rackspace CTO John Engates to Deliver Openstack Industry Perspective, New Big Data Panel Joins Forecast 2012 Agenda

PORTLAND, Ore., May 09, 2012 (BUSINESS WIRE) -- The Open Data Center Alliance (ODCA) today announced that UBS Chief Technology Officer Andy Brown will be a keynote speaker at the Open Data Center Alliance Forecast 2012 event. Brown plans to address enterprise requirements for the cloud and comment on the progress of industry delivery of solutions based on the ODCA usage models. In his role as chief technology officer, Brown is responsible for advancing the investment bank's group architecture, simplifying the application and infrastructure landscape, and improving the quality of UBS's technical solutions.

In related news, the organization also announced the addition of Rackspace Chief Technology Officer, John Engates, to the event's agenda with the delivery of an Industry Perspective session on the role of industry standard delivery of cloud solutions. Engates will focus his discussion on the role of Openstack in cloud solution delivery and its alignment with the objectives of open solutions required by ODCA. A Big Data panel was also added to the agenda featuring panelists from Cloudera, Intel, SAS and Teradata following last week's announcement of a new ODCA Data Services workgroup. The organization also announced a host of executive IT experts to be featured in panels at the event. Held on June 12, in conjunction with the 10th International Cloud Expo in New York City, ODCA Forecast 2012 will bring together hundreds of members of the Alliance, industry experts and leading technology companies to showcase how ODCA usage model adoption can accelerate the value that cloud computing represents to organizations through increased efficiency and agility of IT services.

"The Forecast agenda features a who's who of enterprise IT leaders, all of whom are assembling to share their best insights in deploying cloud services," said Marvin Wheeler, ODCA chair. "Adding CTOs on the caliber of Andy Brown and John Engates to our agenda underscores the high regard that both the organization and our first event are generating. For organizations considering cloud deployments in 2012, this is a rare opportunity to learn from their peers and see the latest in solutions advancements."

ODCA Forecast 2012 will feature sessions on top issues associated with cloud deployment including security, service transparency, and industry standard delivery of solutions. The Big Data panel complements planned panels featuring the first public discussions of charter and progress of the organization's recently formed Data Services workgroup. With leading experts from enterprise IT, service providers and the data center industry on hand to discuss the top issues and opportunities offered by cloud computing, attendees will have a rare opportunity to network with leading thinkers and gain critical knowledge to help shape their own cloud deployments. Alliance solutions provider members will also showcase products that have been developed within the guidelines of Alliance usage models.

Leading managers from some of the largest global IT shops have formed the group of panelists for the event, and today the Alliance is announcing several new panelists who will share their expertise across areas impacting the cloud, including security, management and regulation. The Cloud Security Panel will now feature Dov Yoran, a founding member of the Cloud Security Alliance. Ray Solnik, president of Appnomic Systems, a leading provider of automated Cloud IT performance management solutions, has joined the Cloud Management Panel. The Cloud Regulation Panel is pleased to welcome Gordon Haff, senior cloud strategy marketing and evangelism manager with Red Hat. Haff will also be part of the Cloud Software Panel. Jeff Deacon, chief cloud strategist with Terremark, will be part of the Service Provider Panel. The Cloud Hardware Panel will feature John Igoe, executive director, development engineering for Dell. Other new panelists can be found at www.opendatacenteralliance.org/forecast2012.

Forecast 2012 is supported by the following sponsors: Gold Sponsors Dell, Hewlett Packard and Intel Corp; Silver Sponsor Red Hat; Pavilion Sponsors Citrix and McAfee; and Breakfast Sponsor Champion Solutions Group. Media and collaborating organization sponsors include Cloud Computing Magazine, Cloud Security Alliance, CloudTimes, the Distributed Management Task Force (DMTF), the Green Grid, InformationWeek, Open Compute Project, Organization for the Advancement of Structured Information Standards (OASIS), SecurityStockWatch.com and Tabor Communications.

All Forecast attendees will also receive a complimentary pass to International Cloud Expo as part of their ODCA Forecast 2012 registration, representing a tremendous value. For more information on the Alliance, or to register for ODCA Forecast 2012, please visit www.opendatacenteralliance.org.

About The Open Data Center Alliance

The Open Data Center Alliance is an independent IT consortium comprised of global IT leaders who have come together to provide a unified customer vision for long-term data center requirements. The Alliance is led by a twelve-member Board of Directors which includes IT leaders BMW, Capgemini, China Life, China Unicom, Deutsche Bank, JPMorgan Chase, Lockheed Martin, Marriott International, Inc., National Australia Bank, Terremark, Disney Technology Solutions and Services and UBS. Intel serves as technical advisor to the Alliance.

In support of its mission, the Alliance has delivered the first customer requirements for cloud computing documented in eight Open Data Center Usage Models which identify member prioritized requirements to resolve the most pressing challenges facing cloud adoption. Find out more at www.opendatacenteralliance.org .

SOURCE: The Open Data Center Alliance

Tuesday, May 8, 2012

Operations-as-a-Service (or IaaS + PaaS + SMEs)

Guest Post from Richard Donaldson


I’d been holding out a bit on writing this as it really is a synthesis of ideas (aren’t they all) with special mention of dialogue with Jeffrey Papen of Peak Hosting (www.peakwebhosting.com)…

I’ve been collaborating and speaking extensively with Jeffrey on the next phase of “hosting”, since we are now moving beyond the hype cycle of “Cloud Computing” (see previous post on “The end of the Cloud Era”). The community at large (and people in general) love the idea of simple, bite-sized “solutions” with pithy and “sexy” naming conventions (think <30-sec sound bites), and that was the promise/expectation around “the cloud” as it was popularized – a magic all-in-one solution whereby you just add applications and the “cloud” will do the rest. Yet the promise never quite met expectations, as the “cloud” really ended up being an open-standards evolution of “virtualization” – nothing wrong with that, just not the “all-in-one” solution that people really wanted the cloud to be (ps – “all-in-one” refers to the aforementioned idea of applications just being pushed through APIs to the “cloud”, with the “cloud” managing all underlying resources).

So, as the Cloud Hype dissipates (love the metaphor), we are sorta back to the same basic elements that make up Infrastructure – Datacenters, Compute (IT), Communications (switches/routers), and Software that manages it all (virtualization, cloud, etc.), all accessible through yet-to-be-built APIs. Put another way, we are coming full circle and back to centralized, on-demand computing that needs one more element to make it all work – Subject Matter Experts (SMEs).

I was inspired to write this today when I saw this post from Hitachi: http://www.computerworld.com/s/article/9226920/Hitachi_launches_all_in_one_data_center_service - “Japanese conglomerate Hitachi on Monday launched a new data center business that includes everything from planning to construction to IT support. Hitachi said its new “GNEXT Facility & IT Management Service” will cover consulting on environmental and security issues, procurement and installation of power, cooling and security systems, and ongoing hardware maintenance. It will expand to include outsourcing services for software engineers and support for clearing regulatory hurdles and certifications.” This is the comprehensive “build to suit” solution the market has been seeking since the cloud – it includes everything to get your infrastructure building blocks right and it is provided as a service – but what do we call this service????

How about “Operations-as-a-Service”!!


OaaS pulls together the elements of IaaS + PaaS + SMEs. It outsources the “plumbing” to those who can make it far more cost-effective through economies of scale. Sure, there are a select few companies who will do this all in-house: Google, eBay, Microsoft, Amazon, Apple (trying), and of course, Zynga. Yet these companies are at such massive scale that it makes sense – and even they have excess (at least they should) capacity, which is why AWS was born in the first place and why we are now seeing Zynga open up to allow gamers to use their platform (see: http://www.pocketgamer.biz/r/PG.Biz/Zynga+news/news.asp?c=38455). Yet these are the exceptions and not the rule.

The rest of the world should be and is seeking comprehensive, end-to-end Operations-as-a-Service provided by single vendors. That doesn’t preclude the marketplace from buying discrete parts of OaaS individually; however, the dominant companies that will begin to emerge in this next decade will seek to add more and more of the OaaS solution set to their product list, thereby catalyzing a lot (I mean a lot) of consolidation.

I will be following up this blog with a more detailed look at how this concept is playing out, but in the meantime I would very much like to hear your feedback on this topic – is the world looking for OaaS?

rd

Original Post: http://rhdonaldson.wordpress.com/2012/05/07/operations-as-a-service-or-iaas-p...

Monday, April 30, 2012

Driving Under the Limit: Data Center Practices That Mitigate Power Spikes


Every server in a data center runs under an allotted power cap that is sized to withstand peak-hour power consumption. When an unexpected event causes a power spike, however, data center managers can be faced with serious problems. For example, in the summer of 2011, unusually high temperatures in Texas created havoc in data centers. The increased operation of air conditioning units affected data center servers that were already running close to capacity.

Preparedness for unexpected power events requires the ability to rapidly identify the individual servers at risk of power overload or failure. A variety of proactive energy management best practices can not only provide insights into the power patterns leading up to problematic events, but can offer remedial controls that avoid equipment failures and service disruptions.

Best Practice: Gaining Real-Time Visibility

Dealing with power surges requires a full understanding of your nominal data center power and thermal conditions. Unfortunately, many facilities and IT teams have only minimal monitoring in place, often focusing solely on return air temperature at the air-conditioning units.

The first step toward efficient energy management is to take advantage of all the power and thermal data provided by today’s hardware. This includes real-time server inlet temperatures and power consumption data from rack servers, blade servers, and the power-distribution units (PDUs) and uninterruptible power supplies (UPSs) related to those servers. Data center energy monitoring solutions are available for aggregating this hardware data and for providing views of conditions at the individual server or rack level or for user-defined groups of devices.

Unlike predictive models that are based on static data sets, real-time energy monitoring solutions can uncover hot spots and computer room air handler (CRAH) failures early, when proactive actions can still be taken.

By aggregating server inlet temperatures, an energy monitoring solution can help data center managers create real-time thermal maps of the data center. The solutions can also feed data into logs to be used for trending analysis as well as in-depth airflow studies for improving thermal profiles and for avoiding over- or undercooling. With adequate granularity and accuracy, an energy monitoring solution makes it possible to fine-tune power and cooling systems, instead of necessitating designs to accommodate the worst-case or spike conditions.
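As a simple illustration of this kind of aggregation (a toy sketch, not any specific monitoring product), the code below groups server inlet temperatures into a rack-level thermal map and flags potential hot spots; the sample readings and the threshold value are assumptions.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical inlet-temperature samples: (rack_id, server_id, temperature in °C).
readings = [
    ("rack-01", "srv-01", 23.5),
    ("rack-01", "srv-02", 27.9),
    ("rack-02", "srv-11", 22.1),
    ("rack-02", "srv-12", 31.4),  # likely hot spot
]

HOT_SPOT_THRESHOLD_C = 30.0  # assumed alert threshold, not an industry standard

def thermal_map(samples):
    """Aggregate per-server inlet temperatures into a rack-level view."""
    by_rack = defaultdict(list)
    for rack_id, _server_id, temp_c in samples:
        by_rack[rack_id].append(temp_c)
    return {rack: {"avg_c": mean(temps), "max_c": max(temps)} for rack, temps in by_rack.items()}

for rack, stats in thermal_map(readings).items():
    status = "HOT SPOT" if stats["max_c"] >= HOT_SPOT_THRESHOLD_C else "ok"
    print(f"{rack}: avg={stats['avg_c']:.1f}°C max={stats['max_c']:.1f}°C [{status}]")
```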

Best Practice: Shifting From Reactive to Proactive Energy Management

Accurate, real-time power and thermal usage data also makes it possible to set thresholds and alerts, and to introduce controls that enforce policies for optimized service and efficiency. Real-time server data provides immediate feedback about power and thermal conditions that can affect server performance and, ultimately, end-user services.

Proactively identifying hot spots before they reach critical levels allows data center managers to take preventative actions and also creates a foundation for the following:

  •  Managing and billing for services based on actual energy use
  • Automating actions relating to power management in order to minimize the impact on IT or facilities teams
  • Integrating data center energy management with other data center and facilities management consoles.

Best Practice: Non-Invasive Monitoring

To avoid affecting the servers and end-user services, data center managers should look for energy management solutions that support agentless operation. Advanced solutions facilitate integration, with full support for Web Services Description Language (WSDL) APIs, and they can coexist with other applications on the designated host server or virtual machine.

Today’s regulated data centers also require that an energy management solution offer APIs designed for secure communications with managed nodes.

Best Practice: Holistic Energy Optimization

Real-time monitoring provides a solid foundation for energy controls, and state-of-the-art energy management systems enable dynamic adjustment of the internal power states of data center servers. The control functions support the optimal balance of server performance and power, and they keep power under the cap to avoid spikes that would otherwise exceed equipment limits or energy budgets.

Intelligent aggregation of data center power and thermal data can be used to drive optimal power management policies across servers and storage area networks. In real-world use cases, intelligent energy management solutions are producing 20–40 percent reductions in energy waste.

These increases in efficiency ameliorate the conditions that may lead to power spikes, and they also enable other high-value benefits including prolonged business continuity (by up to 25 percent) when a power outage occurs. Power can also be allocated on a priority basis during an outage, giving maximum protection to business-critical services.

Intelligent power management for servers can also dramatically increase rack density without exceeding existing rack-level power caps. Some companies are also using intelligent energy management approaches to introduce power-based metering and energy cost charge-backs to motivate conservation and more fairly assign costs to organizational units.
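As a back-of-the-envelope illustration of such power-based chargeback (the formula, PUE value and tariff below are assumptions, not a standard), metered IT energy can be multiplied by a PUE markup and a tariff to assign costs to organizational units:

```python
# Hypothetical power-based chargeback; PUE and tariff values are assumed for illustration.
PUE = 1.6                # facility overhead factor (assumed)
TARIFF_PER_KWH = 0.12    # energy price per kWh (assumed)

def monthly_chargeback(it_energy_kwh: float) -> float:
    """Charge = metered IT energy * PUE markup * tariff."""
    return it_energy_kwh * PUE * TARIFF_PER_KWH

usage_by_unit = {"web-frontend": 12_400, "analytics": 8_900}  # kWh metered per organizational unit
for unit, kwh in usage_by_unit.items():
    print(f"{unit}: {monthly_chargeback(kwh):,.2f} per month")
```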

Best Practice: Decreasing Data Center Power Without Affecting Performance

A crude energy management solution might mitigate power surges by simply capping the power consumption of individual servers or groups of servers. Because performance is directly tied to power, an intelligent energy management solution instead dynamically balances power and performance in accordance with the priorities set by the particular business.

The features required for fine-tuning power in relation to server performance include real-time monitoring of actual power consumption and the ability to maintain maximum performance by dynamically adjusting the processor operating frequencies. This requires a tightly integrated solution that can interact with the server operating system or hypervisor using threshold alerts.
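A minimal sketch of such a control loop is shown below, with simulated power readings and an assumed set of frequency-ceiling steps; a real solution would read power from the BMC or OS and adjust processor P-states through the operating system or hypervisor, which is not modeled here.

```python
import random

FREQ_STEPS_MHZ = [1200, 1600, 2000, 2400, 2800]  # assumed frequency-ceiling steps

class Server:
    """Toy server model with a power cap; telemetry is simulated for this sketch."""
    def __init__(self, name: str, power_cap_w: float):
        self.name = name
        self.power_cap_w = power_cap_w
        self.freq_index = len(FREQ_STEPS_MHZ) - 1  # start at full frequency

    def read_power_w(self) -> float:
        # Simulated reading: power loosely tracks the current frequency ceiling.
        return FREQ_STEPS_MHZ[self.freq_index] * 0.1 + random.uniform(-15.0, 15.0)

    def apply_frequency_ceiling(self):
        print(f"{self.name}: ceiling set to {FREQ_STEPS_MHZ[self.freq_index]} MHz")

def control_step(server: Server, headroom_w: float = 20.0):
    """Throttle one step when over the cap; restore performance when well under it."""
    power = server.read_power_w()
    if power > server.power_cap_w and server.freq_index > 0:
        server.freq_index -= 1
    elif power < server.power_cap_w - headroom_w and server.freq_index < len(FREQ_STEPS_MHZ) - 1:
        server.freq_index += 1
    server.apply_frequency_ceiling()

srv = Server("srv-01", power_cap_w=250.0)
for _ in range(5):
    control_step(srv)
```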

Field tests of state-of-the-art energy management solutions have proven the efficacy of an intelligent approach for lowering server power consumption by as much as 20 percent without reducing performance. At BMW Group,[1] for example, a proof-of-concept exercise determined that an energy management solution could lower consumption by 18 percent and increase server efficiency by approximately 19 percent.

Similarly, by adjusting the performance levels, data center managers can more dramatically lower power to mitigate periods of power surges or to adjust server allocations on the basis of workloads and priorities.

Conclusions

Today, the motivations for avoiding power spikes include improving the reliability of data center services and curbing runaway energy costs. In the future, energy management will likely become more critical with the consumerization of IT, cloud computing and other trends that put increased service—and, correspondingly, energy—demands on the data center.

Bottom line, intelligent energy management is a critical first step to gaining control of the fastest-increasing operating cost for the data center. Plus, it puts a data center on a transition path towards more comprehensive IT asset management. Besides avoiding power spikes, energy management solutions provide in-depth knowledge for data center “right-sizing” and accurate equipment scheduling to meet workload demands.

Power data can also contribute to more-efficient cooling and air-flow designs and to space analysis for site expansion studies. Power is at the heart of optimized resource balancing in the data center; as such, the intelligent monitoring and management of power typically yields significant ROI for best-in-class energy management technology.

Monday, April 2, 2012

Blockbuster Quarter for Data Center Stocks datacenter cloud IT http://bit.ly/H9TVtY

Thursday, January 12, 2012

Emerson Network Power to unify IT and datacenter management with one device DCIM trellis datacenters http://bit.ly/xsOzxT

Wednesday, December 14, 2011

Smart Grid Technology Helps Data Centers Conserve Energy datacenter dcim it cloud
http://bit.ly/slPQ4F

Tuesday, November 15, 2011

Monitoring Energy Efficiency dcim datacenter IT datacenters http://bit.ly/szj6BS

Wednesday, November 2, 2011

The Next Generation of Data Center Infrastructure Management DCIM datacenter datacenters datacentre IT cloud oracle trellis http://ping.fm/eDKXf

Tuesday, October 11, 2011

DCIM DataCenter Infrastructure Management – IT Planning Today for the Future http://bit.ly/r2aQYs

Thursday, August 25, 2011

datacenter dcim Earthquake. Hurricane. What’s the next challenge to your IT uptime? http://bit.ly/nlLB2i

Tuesday, August 23, 2011

The IT Energy Efficiency Imperative http://ping.fm/sbrKd

Thursday, July 28, 2011

Biomass-Powered Data Centers: Next Step for Green IT? http://bit.ly/mWaUiV

Tuesday, July 26, 2011

Survey: Nearly Two-Thirds of IT Managers Will Deploy a Private Cloud in 2011 http://bit.ly/pSL7gw

Tuesday, June 21, 2011

Gulf Bank inaugurates IT data center facility in Hawally http://bit.ly/j54wdb

Monday, June 6, 2011

GM to build data center to serve as IT hub http://zd.net/keq0gN
Consolidating Federal IT Infrastructure Data Center http://bit.ly/lsQ8xb

Thursday, April 21, 2011

Clouds, consolidation, culture: making federal IT less 'horrible' http://zd.net/gtFX8Y

Friday, April 15, 2011

$3 Billion Saved by Federal IT reform http://bit.ly/hXBmeh