Thursday, November 7, 2013

The Virtual Facility: your path to Predictive DCIM™

http://www.youtube.com/v/FiJzTpLcDv4?autohide=1&version=3&attribution_tag=SJTBmALUBpKbCc7Rty7nwQ&autohide=1&showinfo=1&feature=share&autoplay=1

Dr. Jonathan Koomey speaks about Predictive Data Center Analysis

http://www.youtube.com/v/NVZNsvH2alg?autohide=1&version=3&attribution_tag=1bVi0lM5Hvhzc3h8lcnRMw&showinfo=1&autohide=1&autoplay=1&feature=share

The Virtual Facility: your path to Predictive DCIM™ (short)

http://www.youtube.com/v/Guzx3OLfQJk?version=3&autohide=1&autoplay=1&feature=share&autohide=1&showinfo=1&attribution_tag=0EdP_UO2dj4BA88X6FM-tQ

Planning and Utilizing your Data Center Capacity Part 1

http://www.youtube.com/v/F2n87y5ojKM?version=3&autohide=1&autohide=1&showinfo=1&feature=share&autoplay=1&attribution_tag=isJUQgBOXm7tLyUpNXQ4pg

Planning and Utilizing your Data Center Capacity Part 1 | DCIM Data Center Infrastructure and Critical Facility News


Tuesday, October 8, 2013

Planning for Future Capacity #datacenter | DCIM Data Center Infrastructure and Critical Facility News


Planning for Future Capacity #datacenter

10/23/2013, 1:00 PM – 2:30 PM | Room: 217D
If there is ever a place where the “bigger is better” philosophy does not apply, it’s in the realm of data centers, where the cost of overbuilding can undermine the benefits of uptime and reliability.
At $10 to $30 million per MW, data center capacity is a major capital investment for any company. After the investment has been made, each unit of capacity must be put to good use, in the same way that companies need production from each employee.
However, most data centers never fully utilize their capacity potential. In fact, on average 30% of data center capacity is lost to non-optimal management of IT resources. At the same time, underestimating the need for either additional space or equipment can be disastrous to an organization.
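As a rough, back-of-the-envelope illustration of what that means in capital terms (the facility size and per-MW cost below are assumptions for the example, not figures from the session):

```python
# Illustrative only: how much capital lost capacity could strand in one facility.
capacity_mw = 10            # assumed facility size
cost_per_mw = 20_000_000    # mid-point of the $10M-$30M per MW range
lost_fraction = 0.30        # ~30% of capacity lost to non-optimal management

stranded = capacity_mw * cost_per_mw * lost_fraction
print(f"Capital tied up in lost capacity: ${stranded:,.0f}")  # -> $60,000,000
```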
This session will explore key considerations in the decision to expand data center capacity, including ways to avoid the common pitfalls of overbuilding and to cut through perceived capacity needs to determine the true requirements of a data center, based upon an organization's specific performance, security and reliability needs.
The session will discuss the causes of lost capacity and how, once lost, it can be reclaimed. In addition, a new methodology based on standard techniques used to design data centers will be presented as a means to proactively address lost-capacity issues before they are committed to the data center.
  1. Discuss the cost of overbuilding and its potential impact on uptime and reliability
  2. Review key considerations in the decision to expand data center capacity
  3. Learn how to avoid the common pitfalls of overbuilding
  4. Understand how to determine the true capacity requirements of a data center based upon specific performance, security and reliability needs

Announcing Future Facilities ACE Performance Indicator Presentation and Server Deployment Challenge | DCIM Data Center Infrastructure and Critical Facility News


Tuesday, September 3, 2013

What is Predictive #DCIM ( #DataCenter Infrastructure Management )? | DCIM Data Center Infrastructure and Critical Facility News


Predictive DCIM – Data Center Infrastructure Management is a very complicated game of Tetris | DCIM Data Center Infrastructure and Critical Facility News


VIDEO – Dr. Jonathan Koomey speaks about Predictive Data Center analysis | DCIM Data Center Infrastructure and Critical Facility News


White Paper – Billion Dollar Drain: True cost of lost #DataCenter Capacity | DCIM Data Center Infrastructure and Critical Facility News


WEBCAST September 26th – Predictive DCIM: A Necessity to Protect Data Center Capacity and IT Service Availability | DCIM Data Center Infrastructure and Critical Facility News


What is #DCIM (Data Center Infrastructure Management)? | DCIM Data Center Infrastructure and Critical Facility News


Thursday, April 11, 2013

Converting IT pain to operational gain

Summary: Advances in technology are outpacing the adoption of new capabilities. The issue: Large companies can’t keep up with the changes and are stuck running legacy information technology architectures and old datacenter technology.


By  for Transforming the Datacenter | April 11, 2013 — 16:05 GMT (09:05 PDT)
This content is produced by our sponsor Microsoft.  It is not written by ZDNet editorial staff.
Advances in technology are outpacing the adoption of new capabilities. The issue: Large companies can’t keep up with the changes and are stuck running legacy information technology architectures and old datacenter technology. The moving parts involved include:
  • Budget
  • Technical Transformation
  • Architecture
Each of these has significant bearing on the future. We have been riding the leading edge of a technology wave that is changing how we work, the computing devices we use, and the back-end systems that support our operations, and how we adapt to that will separate winners from losers.
The changes have come quickly, and the innovations continue, often faster than big companies can respond. Much of this has happened during a slow economy and at a time when controlling costs is front and center on every IT manager’s to-do list. To further complicate things, government regulations are mandating long-term retention of data, which stresses capacity. Business managers are also demanding more sophisticated analyses of all this information to help them better market to customers. To top it off, ad hoc design changes generally lead to increased complexity over time, which increases the probability of error and triggers additional operational costs.
While these are not trivial matters, there is plenty of hope despite the challenges.
IT decision-makers can even achieve transformation to some extent with their current infrastructures for now, as long as there is some flexibility and agility baked in, and the platform is stable.
As the economy begins to improve and companies can no longer ignore the depth of technical transformation that has occurred in recent years, many executives have become impatient at the underperformance of their datacenters. While there are no quick fixes, the good news is that by taking a fresh look at IT strategy and the design of the datacenter, it becomes clear that datacenter transformation is possible.
Transformation is essential to remain competitive in today’s fast-moving world. The way I see it, the path forward is to not think in terms of patching the datacenter. Organizations instead need to assess their IT needs in the context of longer-term business objectives and act accordingly.
Let’s admit it: it is no small endeavor. Your datacenter needs to help you acquire and analyze the business intelligence you need to run smart operations. You need to be able to support a mobile workforce and communicate with mobile customers. You will also need to learn how best to manage those communications, as well as support and leverage machine-to-machine communications.
The silver lining is that addressing these needs is the first step in architecting a datacenter that meets your operational requirements; after that, you can think about what's next.

Sunday, April 7, 2013

Where is the open #datacenter facility API?


For some time, the Datacenter Pulse Top 10 has featured an item called ‘Converged Infrastructure Intelligence‘. The 2012 presentation mentioned:
  • Treat the DC infrastructure as an IT system:
    - Converge the infrastructure instrumentation and control systems
    - Connect them into the IT systems for ultimate control
  • Standardize connections and protocols to connect components
With datacenter infrastructure becoming a more complex system, and with the whole datacenter stack under pressure to become more efficient, the need arises to integrate the layers of the stack and make them ‘talk’ to each other.
This is shown in the DCP Stack framework with the need for ‘integrated control systems’; going up from the (facility) real-estate layer to the (IT) platform layer.
So if we have the ‘integrated control systems’, what would we be able to do?
We could:
  • Influence behavior (you can’t control what you don’t know); application developers, for example, can be given insight into their power usage when they write code. This is one of the steps needed for more energy-efficient application programming. It will also provide more insight into the complete energy flow, with more detailed measurements.
  • Design for lower Tier-level datacenters; when failure is imminent, IT systems can be triggered to move workloads to other datacenter locations, driven by signals from the facility equipment to the IT systems.
  • Design close-control cooling systems that trigger on real CPU and memory temperatures rather than room-level temperature sensors (see the sketch after this list). This could eliminate hot spots and focus cooling energy consumption on the spots where it is really needed. It could even make the cooling system aware of an oncoming throttle-up from IT systems.
  • Optimize datacenters for the smart grid. The rise of sustainable power sources like wind and solar increases the need for flexibility in energy consumption. Some may think this only applies when you introduce onsite sustainable power generation, but the energy market as a whole will also be affected by the general availability of sustainable power sources. In the end, the ability to be flexible will lead to lower energy prices. Real supply-and-demand management in the datacenter requires integrated information and control across the facility and IT layers of the stack.
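As a rough illustration of the close-control cooling idea above, the sketch below polls CPU temperatures exposed by the IT layer and nudges a rack-level cooling setpoint through a facility API. Everything here is hypothetical: the base URL, endpoints, field names and thresholds are assumptions for illustration, not an existing product interface.

```python
# Hypothetical sketch: drive close-control cooling from real CPU temperatures
# instead of room-level sensors. Endpoints and thresholds are assumptions only.
import time
import requests

FACILITY_API = "https://facility.example.com/api/v1"  # assumed base URL
RACK_ID = "rack-42"                                   # assumed rack identifier

def hottest_cpu_temp(rack_id: str) -> float:
    """Read per-server CPU temperatures exposed by the (hypothetical) IT layer."""
    resp = requests.get(f"{FACILITY_API}/racks/{rack_id}/it/cpu-temperatures", timeout=5)
    resp.raise_for_status()
    return max(reading["celsius"] for reading in resp.json()["readings"])

def set_cooling_setpoint(rack_id: str, celsius: float) -> None:
    """Ask the (hypothetical) facility layer to adjust the local cooling setpoint."""
    requests.put(
        f"{FACILITY_API}/racks/{rack_id}/cooling/setpoint",
        json={"celsius": celsius},
        timeout=5,
    ).raise_for_status()

while True:
    temp = hottest_cpu_temp(RACK_ID)
    # Simple hysteresis: cool harder when the hottest CPU climbs, relax when it drops.
    if temp > 80.0:
        set_cooling_setpoint(RACK_ID, 18.0)
    elif temp < 60.0:
        set_cooling_setpoint(RACK_ID, 24.0)
    time.sleep(30)
```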
The gap between IT and facility exists not only between IT and facility staff, but also between their information systems. Closing the gap between people and systems will make the datacenter more efficient and more reliable, and it opens up a whole new world of possibilities.
This all leads to something that has been on my wish list for a long, long time: the datacenter facility API (application programming interface).
I’m aware that we have BMS systems supporting open protocols like BACnet, LonWorks and Modbus, and that is great. But they are not ‘IT ready’. I know some BMS systems support integration using XML and SOAP but that is not based on a generic ‘open standard framework’ for datacenter facilities.
So what does this API need to be?
First, it needs to be an ‘open standard’ framework: publicly available, with no rights restrictions on the use of the API framework.
This will avoid vendor lock-in. History has shown us, especially in the area of SCADA and BMS systems, that our vendors come up with many great new proprietary technologies. While I understand that the development of new technology takes time and a great deal of money, locking me in to your specific system is not acceptable anymore.
A vendor proprietary system in the co-lo and wholesale facility will lead to the lock-in of co-lo customers. This is great for the co-lo datacenter owner, but not for its customer. Datacenter owners, operators and users need to be able to move between facilities and systems.
Every vendor that uses the API framework needs to use the same routines, data structures and object classes. Standardized. And yes, I used the word ‘Standardized’. So it’s a framework we all need to agree upon.
These two sentences are the big difference between what is already available and what we actually need. It should not matter if you place your IT systems in your own datacenter or with co-lo provider X, Y, Z. The API will provide the same information structure and layout anywhere…
(While it would be good to have the BMS market disrupted by open source development, having an open standard does not mean all the surrounding software needs to be open source. Open standard does not equal open source and vice versa.)
It needs to be IT ready. An IT application developer needs to be able to talk to the API just like he would to any other IT application API; so no strange facility protocols. Talk IP. Talk SOAP or better: REST. Talk something that is easy to understand and implement for the modern day application developer.
All this openness and ease of use may be scary for vendors and even end users, because many SCADA and BMS systems are famous for relying on ‘security through obscurity’. All the facility-specific protocols are notoriously hard to understand and program against. So if, as a vendor, you don’t want to lose this false sense of security, give us a ‘read only’ API. I would be very happy with just this first step…
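To make the ‘talk REST, read-only first’ point concrete, here is a minimal sketch of how an ordinary IT application could consume such a read-only API. The base URL, path and response fields are invented for illustration; no such standard framework exists today.

```python
# Minimal sketch of a read-only client for a hypothetical open facility API.
# The URL, path and JSON fields are assumptions, not an existing standard.
import requests

BASE_URL = "https://colo.example.com/facility-api/v1"  # assumed endpoint

def get_rack_readings(rack_id: str) -> dict:
    """Fetch the latest readings for one rack, exactly like any other web API."""
    resp = requests.get(f"{BASE_URL}/racks/{rack_id}/readings", timeout=5)
    resp.raise_for_status()
    return resp.json()

print(get_rack_readings("rack-42"))
```

The point is simply that an application developer should not need to know BACnet, LonWorks or Modbus to get at this data.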
So what information should this API be able to feed?
Most information would be nice to have in near real time:
  • Temperature at rack level
  • Temperature outside the building
  • kWh at rack level (other energy-related metrics would be nice too)
  • Warnings/alarms at rack and facility level
  • kWh price (this can be pulled from the energy market, but that doesn’t include the full datacenter kWh price, e.g. a PUE markup)
(all if and where applicable and available)
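A sample reading covering the items above could look something like the structure below; the field names, units and values are entirely hypothetical and only meant to show the kind of data the framework would standardize.

```python
# Hypothetical example of a near-real-time reading such an API could return.
# Field names, units and values are illustrative assumptions only.
sample_reading = {
    "rack_id": "rack-42",
    "timestamp": "2013-04-07T12:00:00Z",
    "temperature_celsius": {"rack_inlet": 23.5, "outside_air": 11.2},
    "energy": {"rack_kwh_last_hour": 4.8},
    "alarms": [
        {"level": "warning", "scope": "rack", "message": "inlet temperature rising"}
    ],
    "kwh_price": {
        "grid": 0.092,      # pulled from the energy market
        "facility": 0.151,  # grid price with an assumed PUE markup applied
    },
}
```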
The information owner would need features like access control for rack-level information exchange, and the ability to tune the real-time features; we don’t want to create information streams that are unmanageable in terms of security, volume and frequency.
So what do you think the API should look like? What information exchange should it provide? And more importantly: who should lead the effort to create the framework? Or… do you believe the Physical Datacenter API framework is already here?

Original Article: http://datacenterpulse.org/blogs/jan.wiersma/where_open_datacenter_facility_api