Measuring Power Consumption in the Data Center

by Derek Schwartz and Daniel Skutelsky

“How do I know how much power my equipment is consuming? Is there a way to provide that information so that my projects can be approved by management and they can see the ROI?”

The market is defined by two types of firms: those that have been measuring power and temperature for years and have taken aggressive action on the data they collected, analyzed and leveraged to change design, build and operational dogmas; and then there's everyone else. If you fall into the latter category, you have many decisions to make. Gathering data center power and temperature data today is not an inexpensive exercise, so options range from extrapolation based on a limited data set or vendor-provided component data, to partial measurement that adds functionality as your requirements change, to implementation of a full data center infrastructure management (DCIM) tool. The Green Data Center Alliance (GDCA) has met with hundreds of firms all along this continuum; universally, those that measure learn that their preconceived notions about their data centers are mostly wrong and that the opportunities for remediation are numerous.
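As a rough illustration of what extrapolation looks like in practice, the Python sketch below estimates total IT load from a handful of metered racks plus derated nameplate figures for the rest. The rack counts, nameplate ratings, and derating factor are hypothetical assumptions, not figures from any vendor or GDCA member.

```python
# Minimal sketch: extrapolating IT load from a partial sample of metered
# racks plus vendor nameplate data for unmetered equipment.
# Every figure below is an illustrative assumption.

# Measured racks (e.g., via smart PDUs): observed average draw in kW
measured_racks_kw = [4.2, 3.8, 5.1, 4.6]

# Unmetered racks: count and an assumed derating of nameplate power,
# since equipment rarely draws its full rated load
unmetered_rack_count = 46
nameplate_kw_per_rack = 8.0
assumed_derating = 0.55  # assumption: typical draw is ~55% of nameplate

measured_total = sum(measured_racks_kw)
estimated_unmetered = unmetered_rack_count * nameplate_kw_per_rack * assumed_derating
estimated_it_load_kw = measured_total + estimated_unmetered

print(f"Measured sample:   {measured_total:.1f} kW across {len(measured_racks_kw)} racks")
print(f"Extrapolated rest: {estimated_unmetered:.1f} kW across {unmetered_rack_count} racks")
print(f"Estimated IT load: {estimated_it_load_kw:.1f} kW")
```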

Measuring the ROI of DCIM

Tying measurement to return on investment (ROI) requires deep analysis, whether through rudimentary tools (smart power distribution units, temperature sensors) or true DCIM packages that provide holistic views of the various elements, with some level of correlation to highlight opportunities. The in-between is computational fluid dynamics (CFD) modeling, which can be useful for point projects.
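As an illustration of the kind of correlation even rudimentary instrumentation enables, here is a minimal Python sketch that flags racks whose inlet temperature is high while their power draw is low, a pattern that usually points to airflow problems rather than raw demand. The rack names, readings, and power threshold are hypothetical; only the 27 C figure reflects the ASHRAE-recommended upper inlet limit.

```python
# Minimal sketch: correlating rack-level power and inlet-temperature readings
# to flag remediation candidates. Rack names, readings, and the power
# threshold are illustrative assumptions, not data from any DCIM product.

rack_readings = {
    # rack: (average power draw in kW, inlet temperature in Celsius)
    "A01": (2.1, 29.5),
    "A02": (6.8, 24.0),
    "B07": (1.4, 31.2),
    "B08": (5.9, 22.5),
}

POWER_FLOOR_KW = 3.0   # assumption: below this, the rack is lightly loaded
INLET_LIMIT_C = 27.0   # ASHRAE-recommended upper inlet temperature limit

for rack, (power_kw, inlet_c) in rack_readings.items():
    if inlet_c > INLET_LIMIT_C and power_kw < POWER_FLOOR_KW:
        # Hot but lightly loaded: likely bypass, recirculation, or missing
        # blanking panels rather than heavy IT demand.
        print(f"{rack}: {inlet_c:.1f} C at {power_kw:.1f} kW -> airflow remediation candidate")
```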

The GDCA is currently evaluating 20 DCIM software, hardware and measurement tools that can do just that. Again, the trick becomes how far your organization wishes to go: how much you need to see to take action, what infrastructure you do or don't have, and what it will take for a given package to be implemented. If you are justifying server replacement, The Green Grid has developed a tool specifically for ROI. For virtualization, VMware has a tool ready; there is no need to spend a dime. But if you are doing airflow remediation or contemplating serious operational changes, such as establishing time-based service-level agreements with end-users based on actual utilization to enable zoning, you need to do your homework. Extrapolation in that scenario is dangerous, since so little market data exists on anything beyond server/M&E equipment replacement and virtualization.
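For a sense of the arithmetic behind such a justification, the sketch below walks through a simple server-replacement payback calculation. It is a generic illustration with assumed inputs, not The Green Grid's tool or any VMware calculator.

```python
# Minimal sketch of the arithmetic behind a server-replacement ROI case.
# Every input is an assumed placeholder.

old_servers = 100
old_watts_each = 400          # assumption: average draw of legacy servers
new_servers = 25              # assumption: consolidation ratio after refresh
new_watts_each = 350
pue = 1.8                     # assumption: facility overhead multiplier
cost_per_kwh = 0.12           # assumption: utility rate, USD
hours_per_year = 8760
capex = 150_000               # assumption: cost of the replacement project

old_kwh = old_servers * old_watts_each / 1000 * hours_per_year * pue
new_kwh = new_servers * new_watts_each / 1000 * hours_per_year * pue
annual_savings = (old_kwh - new_kwh) * cost_per_kwh
payback_years = capex / annual_savings

print(f"Annual energy savings: ${annual_savings:,.0f}")
print(f"Simple payback:        {payback_years:.1f} years")
```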

Does the Spend on DCIM Justify the ROI?

So the real questions are: how serious is your organization about optimization; what types of projects do you see yourself taking action on; what do you need to know (e.g., power only, temperature only, real-time server data, real-time mechanical and electrical data); will these actions happen during the lifespan of your facility; and does the spend on DCIM justify the ROI of your projects in the coming years?
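That last question can be framed as simple arithmetic, as in the hypothetical sketch below; every figure is an assumed placeholder meant to show the comparison, not guidance on actual DCIM pricing or savings.

```python
# Minimal sketch: weighing DCIM spend against projected project savings over
# the facility's remaining lifespan. All inputs are assumed placeholders.

dcim_license_and_install = 250_000   # assumption: upfront DCIM cost, USD
dcim_annual_support = 40_000         # assumption: yearly support/maintenance
projected_annual_savings = 120_000   # assumption: savings DCIM-enabled projects unlock
facility_years_remaining = 6

total_cost = dcim_license_and_install + dcim_annual_support * facility_years_remaining
total_savings = projected_annual_savings * facility_years_remaining
net_benefit = total_savings - total_cost

print(f"Total DCIM cost over {facility_years_remaining} years: ${total_cost:,}")
print(f"Projected savings over that period: ${total_savings:,}")
print(f"Net benefit: ${net_benefit:,} ({'justified' if net_benefit > 0 else 'not justified'})")
```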

Our experience says that if you do not intend to do many projects, extrapolation is the way to go, as more and more end-user case studies are being published and can be cited when justifying ROI to executives outside IT. Any type of airflow correction should, at a minimum, include CFD or lower-end DCIM tools that capture real-time temperature data. And for firms seeking operational excellence and market stewardship, we suggest robust packages that align with our Data Center Energy Efficiency Framework (DCEEF) and provide data about IT, facilities and engineering, process, governance, and finance under one umbrella.

The question should not be how I measure, but what I measure, where, how frequently, and how far I intend to go with data center optimization.


Bios:

Derek Schwartz, Principal

Derek has been immersed in IT operations and data center design, build, relocation and retrofit for more than 15 years. He has worked with dozens of Fortune 500 firms, solving their complex infrastructure, real estate and IT needs. In 2008, he started the Green Data Center Alliance (GDCA), which boasts more than 8,000 members globally.

The GDCA was recently awarded the New York State Energy Research and Development Authority's (NYSERDA) Innovation Grant to create a model for improved energy efficiency in data centers, regardless of size or sophistication. Derek spearheaded the collaborative effort to get more than 80 individuals and 40 firms to contribute to the Best Practices Model for Data Center Energy Efficiency. In January, two Fortune 20 firms and one Global 100 firm began assessments against the model. Derek will oversee the program and work with these participants to gather data and provide each with a comprehensive remediation plan based on its position against the DCEEF.

Derek is a veteran of the United States Air Force, holds a Bachelor of Education and is currently pursuing a Six Sigma Green Belt. He is also the owner of Technology Deployment Solutions (www.tech-deploy.com).

Daniel Skutelsky, Director of Data Center Optimization

Daniel has been optimizing technology operations for nearly a decade. As a Six Sigma Black Belt, Daniel has assessed and optimized dozens of operations.  Recently, under contract with the New York State Energy Research and Development Authority (NYSERDA), he began developing a tool to help organizations save power in the data center. Based on the energy-saving practices observed in leading organizations today, the Data Center Energy Efficiency Framework (DCEEF) can be used by anyone to create a multiyear road map to savings.

Daniel has an MBA, a Bachelor of Science (mathematics), a Six Sigma Black Belt, and numerous technical certifications. He is also a co-founder of the Green Data Center Alliance (www.greendca.org/).
