High Density Data Centers: Case Studies and Best Practices
This publication was prepared in cooperation with TC 9.9, Mission Critical Facilities,
Technology Spaces, and Electronic Equipment.
ISBN 978-1-933742-32-8
ASHRAE has compiled this publication with care, but ASHRAE has not investigated, and ASHRAE expressly
disclaims any duty to investigate, any product, service, process, procedure, design, or the like that may be
described herein. The appearance of any technical data or editorial material in this publication does not constitute
endorsement, warranty, or guaranty by ASHRAE of any product, service, process, procedure, design, or the like.
ASHRAE does not warrant that the information in the publication is free of errors, and ASHRAE does not neces-
sarily agree with any statement or opinion in this publication. The entire risk of the use of any information in
this publication is assumed by the user.
No part of this book may be reproduced without permission in writing from ASHRAE, except by a reviewer who
may quote brief passages or reproduce illustrations in a review with appropriate credit; nor may any part of this
book be reproduced, stored in a retrieval system, or transmitted in any way or by any means—electronic, photo-
copying, recording, or other—without permission in writing from ASHRAE.
____________________________________________
Library of Congress Cataloging-in-Publication Data
TH4311.H54 2008
725'.23--dc22
2008006301
Contents
Acknowledgments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .vii
Chapter 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 2 Raised-Access Floor Case Studies . . . . . . . . . . . . . . . . . . 7
2.1 Raised-Access Floor with Perimeter Modular CRACs . . . . . . . . . . 7
2.1.1 Case Study 1—National Center for
Environmental Prediction (NCEP) . . . . . . . . . . . . . . . . . . . 7
2.1.2 Case Study 2—IBM Test Facility
in Poughkeepsie (2004) . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.3 Case Study 3—San Diego
Supercomputer Center . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.1.4 Case Study 4—IBM Test Facility
in Poughkeepsie (2005) . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.2 Raised-Access Floor with AHUs on Subfloor. . . . . . . . . . . . . . . . 66
2.2.1 Case Study 5—Lawrence Livermore
National Lab Data Center . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.3 Raised-Access Floor Supply/Ceiling Return . . . . . . . . . . . . . . . . 74
2.3.1 Case Study 6—NYC Financial Services
Data Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.4 Raised-Access Floor with Heat Exchangers
Adjacent to Server Racks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.4.1 Case Study 7—Georgia Institute of Technology
Data Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.4.2 Case Study 8—Hewlett-Packard Richardson DataCool™
Data Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
Acknowledgments
The information in this book was produced with the help and support of the
corporations, academic institutions, and organizations listed below:
American Power Conversion
Bellsouth
Cedar Sinai Medical Center
Citigroup
Cushman and Wakefield
DLB Associates Consulting Engineers
Emerson
Georgia Institute of Technology
Hewlett Packard
IBM
JDA Consulting Engineers
Lawrence Berkeley National Lab
Microsoft
Minick Engineering
Opengate Data Systems
Oracle
Panduit
Rumsey Engineers
San Diego Supercomputer Center
Ted Jacob Engineering Group
In addition, TC 9.9 would like to thank Will Dahlmeier, Mike Mangan, and Don
Beaty of DLB Associates, Inc., and the following people for substantial contribu-
tions to the individual case studies in the book:
Case 1: Thanks to Bob Wasilewski and Tom Juliano of DLB Associates, Inc.
for aiding in the measurements, and thanks to Donna Upright and Duane Oetjen
of IBM for their complete support in performing these measurements while the
data center was in full operation.
Case 2: Thanks to Bob Wasilewski and Tom Juliano of DLB Associates, Inc.,
for their aid in the measurements, and thanks to Donna Upright and Duane Oetjen
for their complete support in performing these measurements while the data center
was in full operation.
Case 3: Thanks to Dr. Roger Schmidt, Dr. Hendrik Hamann, Dane Miller, and
Harald Zettl for their help with collection and interpretation of the data. The char-
acterization and paper would not have been possible without their contribution. The
author also thanks the staff of SDSC, especially Mike Datte and Jeff Filliez, for their
full cooperation in allowing IBM to study the data center and publish the results.
Case 4: Thanks to Donna Upright and Duane Oetjen for their complete support
in performing these measurements in Poughkeepsie while the data center was in full
operation.
Case 5: Thanks to Steve Holt at Livermore for helping with the data collection
at the Livermore site.
Case 6: Thanks to Gerhard Haub and Patrick Calcagno of Cushman and Wake-
field and Ryan Meadows and Ed Koplin of JDA Consulting Engineers for their assis-
tance with field measurements and analysis.
Case 7: Thanks to Dr. Bartosz Ilkowski at the Georgia Institute of Technology,
Bret Lehman of IBM, Stephen Peet of BellSouth, and Steve Battenfeld of Minick
Engineering for their contributions to both the design and documentation of this high
density case study. Thanks also to Sam Toas and Rhonda Johnson of Panduit for their
contributions in the areas of temperature measurement and results documentation.
Case 8: Thanks to Jonathan Lomas for field data collection and Scott Buell for
CFD modeling and graphics.
Case 9: Thanks to Lennart Stahl of Emerson, a great collaborator on the project,
and to the supporting executives, Paul Perez of HP and Thomas Bjarnemark of
Emerson. Thanks also to Chandrakant Patel, Cullen Bash, and Roy Zeighami for all
their technical support and contributions.
Case 10: Thanks to Dr. Mukesh Khattar, Mitch Martin, Stephen Metcalf, and
Keith Ward of Oracle for conceptual design and implementation of the hot-air
containment at the rack level, which permitted use of variable-speed drives on the
CRACs while preventing mixing of hot and cold air in the data floor; Mark Redmond
of Ted Jacob Engineering Group for system engineering and specifications; and
Mark Germagian, formerly of Wright Line and now with Opengate Data Systems,
for building server racks with hot-air containment.
Case 11: Thanks to Bill Tschudi of Lawrence Berkeley National Laboratory and
Peter Rumsey of Rumsey Engineers for contributing this case study, which was
performed as part of a broader project for the California Energy Commission.
Introduction
Data centers and telecommunications rooms that house datacom equipment are becoming increasingly difficult to cool adequately. This is a result of IT manufacturers increasing datacom performance year after year at the cost of increased heat dissipation. Even though performance has, in general, increased at a more rapid rate than power, the power required and the resulting heat dissipated by the datacom equipment have increased to a level that is putting a strain on data centers. In the effort to improve the thermal management characteristics of data centers, it is important to assess today's data center designs. The objective of this book is to provide a series of case studies of high density data centers and a range of ventilation schemes that demonstrate how such loads can be cooled using a number of different approaches.
This introductory chapter describes the various ventilation designs most often
employed within data centers. This book does not present an exhaustive resource for
existing ventilation schemes but, rather, a wide variety of schemes commonly used
in the industry. Seven primary ventilation schemes are outlined here. In the case
studies that follow, each of these will be shown with detailed measurements of
airflow, power, and temperature.
The most common ventilation design for data centers is the raised-access floor
supply, with racks arranged in a cold-aisle/hot-aisle layout (see Figure 1.1). The
chilled-air supply enters the room through perforated tiles in the raised floor, wash-
ing the fronts of the racks facing the cold aisle. The hot exhaust air from the racks
then migrates back to the inlet of the computer room air-conditioning units (CRACs)
typically located on the perimeter of the data center.
Another version of the raised-access floor supply is shown in Figure 1.2, where
the air-handling units (AHUs) are located beneath the floor containing the IT equip-
ment. One of the key advantages of this arrangement is that all the mechanical equip-
ment is located in a room separate from the IT equipment, which allows for ease of
maintenance.
To further separate the hot exhaust air from the racks and the cold air in the cold
aisle, Figure 1.6 shows a ducted hot-air exhaust back to the CRACs. The ducting is an
effective separation technique but needs to be closely integrated with the IT racks.
Figure 1.7 shows a non-raised-access floor design in which supply chilled air
enters from the ceiling, and hot-air exhaust from the racks returns to the CRACs
located on the perimeter of the data center.
The following chapters provide case studies of operational data centers with the ventilation schemes described above. The purpose of these studies is to document, as completely as possible, the measured thermal parameters of each data center. For most cases, these include inlet air temperatures to each rack; airflow rates from perforated tiles and other openings, such as cable openings; power measurements of all elements within the data center, including IT equipment, lighting, and power distribution units (PDUs); and, finally, a complete set of geometric parameters that describe the data center, including rack layouts, raised-access floor heights (if a floor is raised), ceiling heights, and any other information pertinent to the thermal management of the data center. Although thermal modeling is not the subject of this book, one could use the data from these case studies to construct a thermal model of the data center and compare the model results to the measurements.
The format for displaying the data is the same for most of the case studies so that
comparisons can be made between the various ventilation schemes as desired.
The two chapters devoted to case studies cover raised-access floors and non-raised-access floors. Since most measurements are for raised floors, that chapter is divided into several subcategories, with case studies presented for each. Chapter 4 is devoted to best practices for each of the primary categories of ventilation schemes—raised-access and non-raised-access floors. These guidelines are based on technical papers published mostly within the last five years and on the case studies presented herein.
Chapter 5 provides an expanded list of references and a bibliography with addi-
tional, related materials. Chapter 6 provides a useful glossary of common terms used
throughout this book.
2
Raised-Access Floor
Case Studies
2.1 RAISED-ACCESS FLOOR WITH PERIMETER MODULAR CRACs
The heat dissipated by large servers and switching equipment has reached
levels that make it very difficult to cool these systems. Some of the highest-powered
systems dissipate up to 4000 W/ft2 (43,600 W/m2) based on the equipment foot-
print. Systems that dissipate this amount of heat and are clustered together within
a data center present significant cooling challenges. This case study describes the
thermal profile of a 74 × 84 ft (22.4 × 25.4 m) data center and the measurement tech-
niques employed to fully capture the detailed thermal environment. In a portion of
the data center (48 × 56 ft [14.5 × 17.0 m]) that encompasses the servers, the heat
flux is 170 W/ft2 (1850 W/m2). Most racks within this area dissipated 6.8 kW, while
a couple dissipated upward of 28 kW. Detailed measurements were taken of elec-
tronic equipment power usage, perforated floor tile airflow, cable cut-out airflow,
CRAC airflow, temperatures and power usage, and electronic equipment inlet air
temperatures. In addition to these measurements, the physical features of the data
center were recorded.
The data center has a raised-access floor height of 17 in. (431.8 mm). Seven operational CRACs and
six operational PDUs are located around the perimeter of the room. Potential expan-
sion is anticipated, and additional PDUs and CRACs (shown as “Future”) are shown
in Figure 2.1. The servers are located in a cold-aisle/hot-aisle arrangement with aisle
widths of approximately 4 ft (1.2 m [two floor tiles]). The cold aisles were populated
with 25% open tiles with dampers removed on all the tiles. A cold aisle showing the
rows of racks is displayed in Figure 2.2. In addition, underfloor blockages occurred
beneath the raised-access floor. These were either insulated chilled-water pipes, as
shown in Figure 2.3, or cabling located beneath the server equipment.
When the data center was first populated with equipment, high rack inlet air
temperatures were measured at a number of rack locations. The problem was that the
perimeter between the raised-access floor and subfloor was not blocked off, and the
chilled air from the CRACs was exiting to other portions of the building (this data center was centrally located among other raised-access floor data center and office spaces). In addition, the total heat dissipation of the electronic equipment in the room exceeded the sensible cooling capacity of the CRACs. To address these problems, an additional CRAC was installed, and the entire perimeter of the region between the raised-access floor and subfloor was enclosed. (Although before-and-after results are not presented in this case study, the resulting flow increased by about 50%, and the rack inlet temperatures decreased on average by about 9°F (5°C) with these two changes.)
MEASUREMENT TOOLS
The airflow through the perforated floor tiles, cable cut-outs, and CRACs was
measured with a velometer. The unit was calibrated in a wind tunnel, and all
measurements were adjusted based on the calibration (the velometer measured
approximately 4% low for the range of airflows measured).
The temperatures were measured with a high-accuracy handheld digital ther-
mometer using a type T thermocouple. Since temperature differences and not abso-
lute temperatures were of most importance, the meter was not calibrated.
Temperature difference errors were estimated to be ±1.0°C (±1.8°F), resulting
primarily from cycling of the CRACs.
Voltage and current measurements of the CRACs were made with a handheld
voltmeter and a current clamp-on meter. Manufacturer data reported the error in
these devices as ±0.7% and ±2%, respectively.
The input power of several racks was measured by connecting a laptop with
custom software to the server.
Power Measurements
Measurements of input power to the data center were made at several levels in
order to provide a good estimate of the input power of various types of equipment.
The overall data center input power was taken from the PDUs located around the
perimeter of the room (see Figure 2.1). These provided input power only to the elec-
tronic equipment within the room, not including the CRACs or lighting. Each PDU
provided input power in kW, the results of which are shown in Table 2.1. The total
input power of all the data processing equipment was 483 kW. All electronic racks
operated with a power factor correction of nearly 1.0 with three-phase 208 V input
to the racks.
The CRACs and lighting also contribute to the overall heat load in the data
center. Power dissipation of each CRAC was estimated based on voltage and current
measurements, as shown in Table 2.1. (Since alternating current motors operate the
blowers, a power factor of 0.9 was assumed in estimating the total power dissipated.)
Of course, some of the energy put into the CRACs is devoted to fan power. With a
pressure drop of approximately 1.2 in. (30.5 mm) of water across the coil and an
average airflow rate of 10,060 cfm (279.2 m3/min) (see the “Airflow Measurements”
section below), the estimated fan power was approximately 1400 W per CRAC.
Lighting was provided by fluorescent fixtures rated at 64 W each. With 78 fixtures
in the data center, the resulting total lighting heat load was 5000 W. Therefore, the
total heat load in the data center was 520 kW. The maximum error in this total heat
load value is estimated to be ±2%.
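
The arithmetic behind the fan power and total heat load figures can be reproduced in a few lines. The sketch below is an illustrative check only; the unit-conversion constants are standard values rather than values from the text, and the CRAC electrical share is backed out of the stated 520 kW total rather than read from Table 2.1.

    # Rough reproduction of the heat-load arithmetic above (illustrative only).
    IN_H2O_TO_PA = 249.1       # 1 in. of water in pascals (standard conversion)
    CFM_TO_M3S = 0.000471947   # 1 cfm in m3/s (standard conversion)

    def fan_air_power_w(dp_in_h2o, flow_cfm):
        """Air power delivered by a CRAC blower: pressure drop times volumetric flow."""
        return dp_in_h2o * IN_H2O_TO_PA * flow_cfm * CFM_TO_M3S

    fan_w = fan_air_power_w(1.2, 10_060)        # ~1.4 kW per CRAC, as stated in the text
    lighting_kw = 78 * 64 / 1000                # ~5 kW of fluorescent lighting
    it_kw = 483                                 # total PDU reading (Table 2.1)
    crac_kw = 520 - it_kw - lighting_kw         # CRAC share implied by the 520 kW total (assumed)
    print(f"fan air power per CRAC ~ {fan_w / 1000:.1f} kW")
    print(f"room heat load ~ {it_kw + lighting_kw + crac_kw:.0f} kW")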
The more difficult determination was the distribution of power among the indi-
vidual racks. Given the time constraints (six hours to do all measurements), power
usage of each rack could not be measured. The focus of the measurements was thus
placed on key components crucial to determine the distribution of power in the data
center. The majority of the servers (51 racks of IBM model p690) were essentially the
same and dissipated similar heat; therefore, measuring a couple of these systems was
deemed acceptable. Also, there were two fully configured IBM model p655 racks that
dissipated a very high heat load. Given that there were only two of these systems, they
were both scheduled for measurement. However, since communications could not be
established with these racks, the same rack configurations in another lab were
measured. The results for the p690s were 7.2 and 6.6 kW, while two p655s were 26.4
and 27.3 kW. These power measurements were made with a power measurement tool connected
directly to the racks. The breakdown of the data-processing rack input powers is
shown in Table 2.1. For rack input powers that were not measured, estimates were
obtained from the power profile of each. The rack input powers are displayed as a bar
graph in Figures 2.4–2.8, with each rack power bar somewhat in line with the physical
location of the racks shown in the picture at the top of the figure.
Airflow Measurements
The airflow from the perforated floor tiles was measured with a velometer. This
flow tool fit exactly over one perforated tile, so it provided an excellent tool for
rapidly profiling the flow throughout the data center. Measured flows from each tile
or cable cut-out were very stable, varying <10 cfm (0.28 m3/min). The measured
flow rates from each perforated tile are shown in Figures 2.4–2.8. As in the display
of the rack powers, the airflows from the perforated tiles and cable cut-outs are
aligned with the physical layout of the perforated tiles and cable cut-outs shown in
the picture at the top of the figure.
Measuring the cable cut-out airflows could not be achieved directly since it
would have been impossible to locate the flow tool directly over the cable cut-out,
which is within the footprint of the rack at the rear. However, an alternative method
was proposed and verified to obtain an estimate of the airflow through a cable cut-
out (or other openings throughout the data center, such as within the PDU footprint,
etc.). First, a cable cut-out was completely blocked with foam materials. Next, a tile
with a cut-out of the shape of the cable was provided and placed in the opening near-
est the cable opening. To mimic the blockage contributed by the cables, a piece of
tape was used to block a portion of the simulated cut-out. The flow through the simu-
lated tile was then measured with the flow tool. Then the blockage was removed
from the cable cut-out, and the airflow measurement through the simulated tile was repeated.
Comparison of these flows, with and without blockage of the cable cut-out, showed
no discernible difference in the flow rates measured. Therefore, all cable cut-outs
were measured with this modified tile without blocking the actual cable cut-out,
which saved a significant amount of time. Some cut-outs were a different size, so the
simulated tile was adjusted to approximate the actual opening. The airflow measure-
ments from the cable cut-outs are also shown in Figures 2.4–2.8. Similar to the rack
power results, the airflows from the cable and PDU openings are somewhat aligned
with the physical layout shown at the top of each figure.
The overall airflow of the data center was estimated based on all the measure-
ments of airflow from the perforated floor tiles and cable cut-outs. The sum of all
these measurements was 67,167 cfm (1864.7 m3/min), after adjusting for the cali-
bration in the flowmeter. Again, the perimeter of the data center below the raised-
access floor was completely enclosed such that negligible air escaped the room. One
additional area of airflow not accounted for was the leakage of air that occurred
between the perforated tiles. Tate Access Floors (2004) states that a typical air leakage is 0.69
cfm/ft2 (0.21 m3/min/m2) at a static pressure of 0.05 in. (1.27 mm) of water. Since
modeling of the flow beneath the floor showed underfloor static pressures of approx-
imately 0.03 in. (0.76 mm) of water, the air leakage was estimated to be 0.50 cfm/ft2
(0.15 m3/min/m2). As the data center had an area of 6200 ft2 (568.5 m2), the total air
leakage was estimated to be 3200 cfm (88.9 m3/min). This resulted in a total estimated
data center airflow rate of 70,400 cfm (1955 m3/min). No leakage occurred at the
walls around the perimeter of the data center since the side walls rested on top of the
raised-access floor and not on the subfloor. The error in the total data center flow rate
was estimated to be 4.5% (10% for cable cut-outs and 4% for perforated floor tiles).
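
The airflow totals and the combined error quoted above can be reproduced from the values already given. The brief sketch below is illustrative only; the root-sum-square weighting, using the roughly one-third/two-thirds flow split noted later in the Thermal Profiles section, is an assumption about how the 4.5% figure was formed.

    import math

    # Illustrative airflow budget for the raised floor, using the values quoted above.
    measured_openings_cfm = 67_167    # perforated tiles plus cable cut-outs, after calibration
    leakage_cfm = 0.50 * 6_200        # ~0.50 cfm/ft2 of tile-gap leakage over the 6,200 ft2 floor
    total_cfm = measured_openings_cfm + leakage_cfm    # ~70,400 cfm, as quoted

    # Assumed root-sum-square combination of the stated component errors, weighting
    # the cable cut-outs (~1/3 of the flow, +/-10%) and the tiles (~2/3, +/-4%).
    combined_error = math.hypot(0.10 * (1 / 3), 0.04 * (2 / 3))
    print(f"total airflow ~ {total_cfm:,.0f} cfm, combined error ~ {combined_error:.1%}")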
The final airflow measurement focused on the flow through each CRAC. These
flows are difficult to measure, as there is no easy way to obtain the airflow exhaust-
ing the CRACs beneath the floor since it is nonuniform and highly turbulent. Nor
is it easy to obtain the total flow entering the CRAC at its inlet. The estimated flow
from the CRACs is 10,060 cfm (279.3 m3/min) (70,400 cfm per seven CRACs or
279.3 m3/min per CRAC). However, each CRAC displayed some differences in
flow due to backpressures under the floor, varying filter cleanliness, variations in
the unit, etc., so the velometer was employed to obtain an estimate of the variation
in airflow between the units. Basically, a grid of flow measurements was taken
across the face of the CRAC and used to tabulate the average velocity. From the
average velocity and area, the total airflow into the CRAC was computed. First, the
velometer was placed above the CRAC at the inlet to measure a portion of the flow
entering the unit. Obviously the flow is not the same, since the opening into the top
of the tool (14 × 14 in.) (355.6 × 355.6 mm) is not the same as the opening in the
bottom (23 × 23 in.) (584.2 × 584.2 mm). Therefore, the measured airflow will be
less than the actual airflow. Measurements at six locations were averaged and multi-
plied by the area of the inlet of the CRAC, and the measured flow was about two-
thirds of the actual flow (computational fluid dynamics [CFD] modeling of the tool
at the inlet proved this to be correct). These measurements were then used to
proportion the flow among CRACs, the estimates of which are shown in Figure 2.1.
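
The proportioning step described above amounts to scaling each CRAC's partial inlet reading so that the set sums to the known total floor airflow, which also cancels the systematic undermeasurement of the tool. A sketch of that bookkeeping follows; the unit labels and face velocities are hypothetical placeholders, since only the method, not the raw grid data, is reported here.

    # Proportion the total floor airflow among the CRACs from partial inlet readings.
    # Unit labels and velocities below are hypothetical; only the procedure is from the text.
    TOTAL_CFM = 70_400
    inlet_area_ft2 = (23 / 12) ** 2    # 23 x 23 in. opening at the CRAC inlet

    avg_face_velocity_fpm = {          # average of six velometer readings per unit (assumed values)
        "CRAC-1": 1850, "CRAC-2": 1790, "CRAC-3": 1900, "CRAC-4": 1730,
        "CRAC-5": 1820, "CRAC-6": 1800, "CRAC-7": 1870,
    }

    partial_cfm = {k: v * inlet_area_ft2 for k, v in avg_face_velocity_fpm.items()}
    scale = TOTAL_CFM / sum(partial_cfm.values())   # ~1.5, removing the ~2/3 undermeasurement bias
    crac_cfm = {k: round(v * scale) for k, v in partial_cfm.items()}
    print(crac_cfm)                                  # per-unit flows summing to ~70,400 cfm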
These CRAC airflows do not agree with the value provided in the manufacturer's catalog for this model unit (12,400 cfm). There are two possible reasons for this: first, each CRAC had a turning vane and side baffle to direct airflow beneath the raised-access floor and, second, each CRAC had slightly dirty filters. A flow rate 19% below the catalog value is therefore not unreasonable.
Temperature Measurements
Temperatures were measured for the supply air flowing from the perforated floor
tiles, the air inlet into the racks at a height of 68.9 in. (1750 mm), and the return air
to the CRACs. The temperature differences between the raised-access floor supply air
temperature and the temperature of the air entering the rack at a height of 68.9 in.
(1750 mm) are shown in Figures 2.4–2.8. Temperatures were taken in accordance with
ASHRAE (2004) guidelines—2 in. (50 mm) in front of the covers. The graph of the
rise in inlet air temperatures (air temperature at a height of 68.9 in. [1750 mm] minus
temperature exhaust from the perforated tiles) for each rack is shown at the bottom
of each figure. The temperature bars in the graphs are somewhat in line with the rack
positions shown at the top of the figure. This provides the reader an interpretation of
the spatial temperature distribution, given the flow distribution and the physical
layout of the equipment. Return air temperatures at each CRAC were also measured
(see Figure 2.1).
Thermal Profiles
Several correlations between airflow and air temperature rise at the racks were
attempted. None seemed to show any promise; however, several observations can be
made. First, airflow through the cable cut-outs is significant—approximately one-
third of the total flow is from the cable cut-outs and other openings on the floor.
Although the flow from the cable cut-outs can provide some cooling, the analysis by
Schmidt and Cruz (2002a) shows this is not the best use of the supply air from the
raised-access floor. If the hot exhaust air exiting the racks is drawn back into the inlet
of the rack, then the chilled air exhausting the cable cut-outs cools this exhaust air
before it enters into the front of the racks. Second, the rack inlet air temperatures for
the racks in rows 5 and 9 and located at columns DD, EE, and FF in Figures 2.4–2.8
show relatively high temperatures. This may be due to air from the hot aisles returning to the nearby CACU-43 and 45 units, causing these racks to draw in some of this returning exhaust air.
The supply air from the perforated floor tiles adjacent to the IBM p690 racks,
along with the corresponding temperature rise to the inlet of the racks, is depicted in
Figure 2.9. The average airflow rate from perforated tiles adjacent to the p690 racks
is 342 cfm (9.68 m3/min). The average temperature rise to the inlet of the racks at a
height of 68.9 in. (1750 mm) from the raised-access floor is 18.9°F (10.5°C). In addi-
tion, the average airflow rate from the cable cut-outs for the p690s is 210 cfm (5.94
m3/min). Finally, the chilled airflow rates stated here should be compared to the
airflow rate through the rack, which is approximately 1100 cfm (31.1 m3/min). Since
each rack extends 1.25 tiles in width, one can assume the total flow to the face of a
rack is an average of 427 cfm (12.09 m3/min) (1.25 × 342 cfm or 1.25 × 9.7 m3/min).
Adding this flow to the cable cut-out flow of 210 cfm (5.94 m3/min), the total chilled
airflow devoted to one rack is approximately 637 cfm (18.0 m3/min). It is obvious that
the airflow rate from the floor in the region of the rack is much less than the flow rate
through the rack; however, the air inlet temperatures to the racks are still well within
the rack temperature specs (50°F–89.6°F [10°C–32°C]). If the cable cut-out and
perforated tile adjacent to the rack were combined to provide the chilled airflow to
the rack, then the temperature rise with this flow and a rack heat load of 6.8 kW would
be 34.2°F (19°C). Since the temperature rise to the top of the rack averages 18.9°F
(10.5°C), the temperature plume exhausting the rear of the racks must mix with the
air in a larger region, thereby damping the temperature rise. This suggests there is
enough mixing in the room to bring the exhaust temperatures down, even though the
local chilled airflow rates are much lower than what might be considered adequate.
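
The 34.2°F (19°C) figure quoted above follows from a simple sensible-heat balance on the local chilled airflow. A brief sketch of that calculation is given below; the air density and specific heat are assumed property values chosen for illustration and are not stated in the text.

    # Sensible temperature rise if a 6.8 kW rack were cooled only by the local 637 cfm
    # of chilled air (tile share plus cable cut-out). Air properties are assumed values.
    RHO = 1.19                 # kg/m3, air near typical supply conditions (assumed)
    CP = 1005.0                # J/(kg*K) (assumed)
    CFM_TO_M3S = 0.000471947

    def temperature_rise_c(load_w, flow_cfm):
        return load_w / (RHO * CP * flow_cfm * CFM_TO_M3S)

    dt_c = temperature_rise_c(6_800, 637)
    print(f"{dt_c:.1f} C ({dt_c * 1.8:.1f} F) rise")   # ~19 C (~34 F), versus the ~10.5 C measured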
Energy Balance
To confirm the accuracy of the data via an energy balance, a calculation of
airflow using measured power input and temperature difference was compared to
the actual measured airflow to see if they matched. The overall data center airflow,
as measured from the perforated floor tiles and other openings, was 70,400 cfm
(1993 m3/min), with an estimated accuracy of 4.5%. Also, the error in the temper-
ature difference of 22.4°F (12.45°C), as measured between the underfloor supply and the return to the CRACs, was estimated to be 10%. Finally, the error in the overall heat dissipation of 534.4 kW in the data center was estimated to be 2%. Using the CRAC average
temperature difference and the overall heat load for the data center (534.4 kW), the
expected data center flow rate is 77,262 cfm (2187 m3/min) ±10.2% (69,381–
85,142 cfm [1964–2410 m3/min]). This calculation compares favorably to the
measured value of 70,400 cfm (1993 m3/min) ±4.5% (67,230–73,570 cfm [1903–
2083 m3/min]). From this examination of measured data center flow rate compared
to calculated flow rate based on an energy balance, measurements are found to be
within reason, and closure on the energy balance is obtained.
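
The closure check can be written compactly: the airflow implied by the heat load and the CRAC temperature difference is compared against the measured airflow, each with its stated uncertainty band. The sketch below is illustrative; the air density and specific heat are assumed values chosen so that the result lands near the 77,262 cfm quoted above.

    # Energy-balance closure: airflow implied by Q = rho * cp * V * dT versus measured airflow.
    RHO, CP = 1.17, 1005.0     # assumed air properties; the exact values used in the text are not stated
    M3MIN_TO_CFM = 35.3147

    q_w = 534_400              # overall heat load, +/-2%
    dt_c = 12.45               # CRAC return minus underfloor supply temperature, +/-10%
    calc_cfm = q_w / (RHO * CP * dt_c) * 60 * M3MIN_TO_CFM
    meas_cfm = 70_400          # from tiles, cut-outs, and leakage, +/-4.5%

    print(f"calculated ~ {calc_cfm:,.0f} cfm ({calc_cfm * 0.898:,.0f}-{calc_cfm * 1.102:,.0f})")
    print(f"measured   ~ {meas_cfm:,.0f} cfm ({meas_cfm * 0.955:,.0f}-{meas_cfm * 1.045:,.0f})")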
SUMMARY
A methodology was outlined and described with the aid of measurements
collected from a high density data center. The components of the measurements
include the following:
• Power
PDU
Racks
Lighting
CRAC
• Temperatures
Rack inlet air
CRAC return air
Supply air from perforated floor tiles
• Airflows
CRAC
Perforated floor tiles
Cable cut-outs
PDU openings
These measurements allowed for a detailed thermal profile of the data center.
The airflow through the perforated tiles was approximately one-quarter to one-half of the airflow through the rack, yet system inlet air temperatures were maintained in accordance with the specifications. The perforated tile airflow rate plus the cable cut-out flow rate was about one-half to two-thirds of the airflow through the rack. Even though the local flow rate adjacent to the racks did not appear adequate, the room flow rate was adequate to cool the overall heat load in the data center, and the convection currents that occurred at room level were sufficient to bring local air temperatures for the high-powered racks within the temperature specifications.
Airflows through the CRACs were quite low compared to published values
from the manufacturer. The average airflow through a CRAC was 10,060 cfm (279.3 m3/min), compared to the manufacturer's catalog value of 12,400 cfm (351.1
m3/min), a 19% reduction. Although unverified, it is the authors’ opinion that this
reduction was a result of turning vanes and side baffles installed on the CRAC units.
This case study describes a specific set of measurements from a high density data
center in order to provide details of the thermal profile. In addition, the data collection
techniques described can be used as a basis for collecting data from other data
centers or telecom rooms and provide a presentation format in which to display the
information.
REFERENCES
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers,
Inc.
Schmidt, R., and E. Cruz. 2002a. Raised floor computer data center: Effect on
rack inlet temperatures of exiting both the hot and cold aisle. Proceedings of
Itherm Conference 2002, San Diego, CA, pp. 580–94.
Tate Access Floors. 2004. Controlling air leakage from raised access floor cavities.
Technical Bulletin #216, Tate Access Floors, Inc., Jessup, MD.
2.1.2 Case Study 2—IBM Test Facility in Poughkeepsie (2004)
The IBM development lab is located in Poughkeepsie, NY. The data center is
used to configure and test large clusters of systems before shipping to a customer.
From time to time, test clusters afford an opportunity for measurements. The
systems in this data center are located on a raised-access floor in an area 76 × 98 ft
(23.2 × 29.9 m). A plan view of the data center indicating the location of the elec-
tronic equipment, CRACs, and perforated floor tiles is shown in Figure 2.10.
The area is part of a larger data center not shown in the figure. In order to thermally
isolate this area from other parts of the data center for the purposes of this study, several
temporary partitions were installed. Plastic sheets were draped from the ceiling tiles
to the floor along row 49, shown in Figure 2.10. This concentrated the heat load from
the systems to the CRACs located in the area of interest. To further separate the area,
the underfloor plenum was examined for openings around the perimeter of the portion
of the room. Openings underneath the floor (cable and pipe openings, etc.) were also
closed off. The above-floor and below-floor blockages are shown in Figure 2.11.
Two primary server racks populated this data center: the IBM model 7040
(p690) accounted for 79 systems and the IBM model 7039 (p655) accounted for 23
systems. The other systems were a mix of switching, communication, and storage
equipment. The key classes of equipment are highlighted in Figure 2.12. The ceiling
height, as measured from the raised-access floor to the ceiling, was 108 in. (2.74 m),
with a raised-access floor height of 28 in. (0.7 m). Twelve operational CRACs
(Liebert model FH740C) were located around the perimeter of the room. The servers
were located in a cold-aisle/hot-aisle arrangement, with aisle widths of approxi-
mately 4 ft (1.2 m). The cold aisles were populated with 40% open tiles (Macaccess
model AL series). A hot aisle displaying the rows of racks is shown in Figure 2.13.
Since this was a test environment, no covers were installed on any of the server racks.
MEASUREMENT TOOLS
The airflow through the perforated floor tiles, cable cut-outs, and CRACs was
measured with a velometer. The unit was calibrated in a wind tunnel, and all
measurements were adjusted based on the calibration (the velometer measured
approximately 4% and 7% low for the range of airflows measured on the 500 and
1000 cfm scales, respectively). In addition to this correction, the reading of the
velometer was also corrected for the reduction in airflow caused by the unit’s flow
impedance. The unit was modeled using a CFD software package to determine the resulting correction. The results presented in the remainder of this case study include the above corrections.

Figure 2.11 Isolation of the measurement area: underfloor partitions (top) and above-floor partitions (bottom).
The temperatures were measured with a handheld Omega HH23 meter using a
type T thermocouple. Since temperature differences and not absolute temperatures
were most important, the meter was not calibrated, although the error in the ther-
mocouple and instrument was estimated to be ±1.8°F (±1.0°C). Temperature differ-
ence errors were estimated to be ±1.8°F (±1.0°C), resulting primarily from cycling
of the CRACs.
The input power of most of the racks was measured by connecting a laptop with
custom software to the server. For comparison, another tool was connected inline to
the cable used to power the server. The comparisons between the two power
measurement tools were within 3% for the two systems measured. For ease of
measurement, the tool that was connected to the input power of the rack was used
throughout the data center.
Power Measurements
A good estimate of the total power dissipated in the data center was determined
since the power of most of the racks was measured. All of the power measurements are summarized in Table 2.2. As shown, the electronic equipment dissipated 1088 kW. All electronic racks operated with a power factor correction of nearly 1.0 with three-
phase 208 V input to the racks. For those few racks that could not be measured, the
power profile of each piece of equipment was used to estimate power dissipation.
These racks only contributed 8% to the overall power dissipated by the electronic
equipment. The error in the estimated rack powers was considered to be ±15%, while
that of the measured rack powers was ±3%, giving a combined error of ±3%. The
rack input powers are displayed as bar graphs in Figures 2.14–2.18. Due to space limi-
tations, not all regions are shown, but the ones shown are representative.
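
The combined ±3% error noted above is consistent with weighting each component error by its share of the total power and combining the contributions in root-sum-square fashion; the short sketch below is an assumed reconstruction of that combination, not a method stated in the text.

    import math

    # Assumed root-sum-square combination of rack-power errors, weighted by power share.
    estimated_share, estimated_err = 0.08, 0.15    # unmeasured racks: 8% of the power, +/-15%
    measured_share, measured_err = 0.92, 0.03      # measured racks: 92% of the power, +/-3%
    combined = math.hypot(estimated_share * estimated_err, measured_share * measured_err)
    print(f"combined error ~ +/-{combined:.1%}")    # ~3%, as stated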
The CRACs and lighting also contribute to the overall heat load in the data center.
Since none of the CRACs had humidification or reheat capabilities, the only power
expenditure was that of a 10 hp blower. This power value was used for all the CRACs
except three that exhausted significantly less airflow. For these, the pumping power
(based on measured CRAC flow described in the next section) was used to estimate
the power dissipated by these units. These data are also summarized in Table 2.2.
Lighting was provided by T12 Mark III-Energy Saver fluorescent fixtures rated at 101
W each. With 94 fixtures in this portion of the data center, the resulting total lighting
heat load was 9.5 kW. Therefore, the total heat load in the data center was 1170 kW.
The maximum error in the total heat load value was estimated to be ±3%.
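
The total heat load quoted above is the sum of the measured IT power, the CRAC blower power, and the lighting. The sketch below reproduces that sum; the split of blower power across the twelve CRACs is an assumption, since the per-unit values appear only in Table 2.2.

    # Components of the room heat load (values from the text; the blower-power split is assumed).
    HP_TO_KW = 0.746

    it_kw = 1_088.0                          # measured electronic equipment power
    lighting_kw = 94 * 101 / 1000            # ~9.5 kW of fluorescent fixtures
    full_flow_crac_kw = 9 * 10 * HP_TO_KW    # assumed: nine units at the full 10 hp blower power
    low_flow_crac_kw = 5.3                   # assumed total for the three low-airflow units

    total_kw = it_kw + lighting_kw + full_flow_crac_kw + low_flow_crac_kw
    print(f"total heat load ~ {total_kw:.0f} kW")   # ~1,170 kW, as quoted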
Airflow Measurements
The airflow from the perforated floor tiles was measured with a velometer. This
flow tool fits exactly over one perforated tile and provides an excellent means of
rapidly profiling the flow throughout the data center. Measured flows from each tile
or cable cut-out were very stable, varying by <10 cfm (0.28 m3/min). The measured
flow rates from each perforated tile are also shown in Figures 2.14–2.18.
Measuring the cable cut-out airflows could not be achieved directly since it was
impossible to position the flow tool directly over the cable cut-out due to cables within the footprint of the rack.
Temperature Measurements
Temperatures were measured for the supply air exhausting the perforated floor tiles, the air inlet into the racks at a height of 68.9 in. (1750 mm), and the return air to the CRACs. The temperature differences between the raised-access floor supply air and the air entering the rack at a height of 68.9 in. (1750 mm) are shown in Figures 2.14–2.18. These temperatures were taken 2 in. (50 mm) in front of the covers in accordance with ASHRAE (2004) guidelines. The graph of the rise in inlet air temperatures for each rack is shown at the bottom of each figure. Return air temperatures at each CRAC were also measured (see Table 2.3).
Thermal Profiles
As in Section 2.1.1, the airflow through the cable cut-outs is significant—
approximately half of the total flow was from cable cut-outs and leakage through the
tiles. Although the flow from the cable cut-outs provided some cooling, the analysis
by Schmidt and Cruz (2002) shows this is not the best use of the supply air from the
raised-access floor. If the hot exhaust air exiting the racks is drawn back into the inlet
of the rack, then the chilled air exhausting the cable cut-outs cools this exhaust air
before it enters the front of the racks. It is more efficient if no air exhausts from the cable openings and all of the supply air is instead delivered through the perforated tiles in the cold aisle.
The supply air from the perforated floor tiles adjacent to the IBM p690 racks,
along with the corresponding temperature rise to the inlet of the racks, is depicted
in Figure 2.19. The average airflow rate from perforated tiles adjacent to the p690
racks was 516 cfm (14.6 m3/min) for the data center. The average temperature rise
to the inlet of the racks at a height of 68.9 in. (1750 mm) from the raised-access floor
was 15.8°F (8.8°C). The tile flow rate for the p690s in Schmidt (2004) was 342 cfm
(9.5 m3/min), and the temperature rise was 18.9°F (10.5°C). Finally, the chilled
airflow rates stated here should be compared to the airflow rate through the rack,
which was approximately 1050 cfm (29.7 m3/min). Since each rack extended 1.25
tiles in width, the total flow to the face of a rack was assumed to be an average of
645 cfm (18.3 m3/min) (1.25 × 516 cfm or 1.25 × 14.6 m3/min). Since the airflow
rate through each p690 rack was approximately 1050 cfm (29.7 m3/min), it is obvi-
ous that the airflow rate from the perforated tile adjacent to the rack was less than
the flow through the rack. Since the temperature rise to the top of the rack averaged
15.8°F (8.8°C), the temperature plume exhausting the rear of the racks mixed with
the air in a larger region, thereby damping the temperature rise. This suggests that
there was enough mixing in the room to bring the exhaust temperatures down even
though the local airflow rates were much lower than what might be considered
adequate. This was similar to the results obtained from Schmidt (2004). The average
temperature rise for the p655 racks (average power 19 kW) was 20.3°F (11.3°C), as
displayed in Figure 2.19.
The entire data center had a heat flux of 157 W/ft2 (1690 W/m2), based on the
total area. However, in one area (portions of regions 1 and 2), as highlighted in
Figure 2.20, the heat flux was 512 W/ft2 (5500 W/m2). Since the racks were approx-
imately 1.25 tiles wide, the total perforated tile airflow associated with the rack in
this area was approximately 750 cfm (21.2 m3/min), which is about one-third of the
airflow rate through the rack. The airflow rate through the p655 racks was approx-
imately 2400 cfm (68 m3/min). Schmidt (2004) noted that if the perforated tile
airflow was in the range of one-quarter to one-half of the rack airflow, with under-
floor air temperatures between 50°F–59°F (10°C–15°C), then the air inlet temper-
ature could be met. The cable cut-out airflow rate in this area associated with each
rack was approximately 800 cfm (22.6 m3/min). This meant that the total airflow
in the region of a rack in this area was approximately 1600 cfm (45.3 m3/min), compared to 2400
cfm (68.0 m3/min) through the rack. Again, one-half to two-thirds of the airflow
was needed to satisfy the rack inlet temperature requirements as long as the total
heat load at the facility level was accommodated.
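The flow-fraction comparison described above can be reproduced with simple arithmetic. The following is a minimal sketch, written in Python purely for illustration; the tile, cut-out, and rack airflow values are those quoted above for the p655 area, and the fraction bands are the rules of thumb cited from Schmidt (2004).

    # Rough check of the tile-flow rule of thumb (illustrative sketch only).
    tile_flow_cfm = 750      # perforated tile airflow associated with one rack
    cutout_flow_cfm = 800    # cable cut-out airflow associated with one rack
    rack_flow_cfm = 2400     # airflow through one p655 rack

    tile_fraction = tile_flow_cfm / rack_flow_cfm
    combined_fraction = (tile_flow_cfm + cutout_flow_cfm) / rack_flow_cfm

    print(f"Tile flow is {tile_fraction:.0%} of rack flow")              # about 31%
    print(f"Tile + cut-out flow is {combined_fraction:.0%} of rack flow")  # about 65%

    # The tile fraction falls in the one-quarter to one-half band; the combined
    # fraction falls in the one-half to two-thirds band noted in the text.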
Energy Balance
To confirm the accuracy of the data, two different comparisons of the power were
made. One was based on the mass flow rate through the CRACs and the associated
temperature difference across each unit. The second was the measured total power of
all the systems within the data center. These comparisons are shown in Tables 2.2–
2.3. The supply temperatures from the CRACs were based on an average of the ten
lowest exit air temperatures from the perforated tiles with the assumption that these
were representative of the supply temperature from the CRAC units. The measured
power shown in Table 2.3 is 1169.6 kW ±3% (1134–1204 kW). The calculated
power based on the mass flow rate and the temperature difference across the CRAC
units is 1053.2 kW ±9.8% (950–1156 kW). The two uncertainty ranges overlap,
indicating reasonable closure on the energy balance.
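For readers who wish to reproduce this kind of check, the sketch below shows one common way to estimate the heat removed by a CRAC from its airflow and air-side temperature difference, using the standard-air sensible heat relation q [Btu/h] ≈ 1.08 × cfm × ΔT [°F]. This is an illustrative sketch only; the constant 1.08 assumes standard air, and the example flow and temperature values are hypothetical rather than taken from Tables 2.2–2.3.

    def crac_heat_kw(airflow_cfm: float, delta_t_f: float) -> float:
        """Sensible heat removed by a CRAC, assuming standard air.

        q [Btu/h] ~= 1.08 * cfm * dT [F]; divide by 3412 to convert to kW.
        """
        return 1.08 * airflow_cfm * delta_t_f / 3412.0

    # Hypothetical example: sum over several CRACs, then compare with metered power.
    crac_data = [(12000, 18.0), (11500, 17.2), (12400, 16.5)]  # (cfm, deltaT in F)
    calculated_kw = sum(crac_heat_kw(cfm, dt) for cfm, dt in crac_data)
    measured_kw = 230.0  # hypothetical metered IT + lighting + CRAC load
    print(f"calculated {calculated_kw:.0f} kW vs measured {measured_kw:.0f} kW")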
SUMMARY
A methodology similar to Schmidt’s (2004) was outlined and described with the
aid of measurements collected from a high density data center. The components of
the measurements included the following:
• Power
Racks
Lighting
CRAC
• Temperatures
Rack inlet air
CRAC return air
Supply air from perforated floor tiles
• Airflows
CRAC
Perforated floor tiles
Cable cut-outs
These measurements allowed a detailed thermal profile of the data center to be constructed; one simple way to organize such survey records is sketched below.
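The sketch below is illustrative only; the record structure and field names are the editor's, not part of the methodology, and simply mirror the measurement categories listed above.

    from dataclasses import dataclass, field

    @dataclass
    class DataCenterSurvey:
        """One snapshot of the measurement categories listed above (illustrative)."""
        rack_power_kw: dict = field(default_factory=dict)       # rack id -> kW
        lighting_kw: float = 0.0
        crac_power_kw: dict = field(default_factory=dict)       # CRAC id -> kW
        rack_inlet_temp_f: dict = field(default_factory=dict)   # rack id -> deg F
        crac_return_temp_f: dict = field(default_factory=dict)  # CRAC id -> deg F
        tile_supply_temp_f: dict = field(default_factory=dict)  # tile id -> deg F
        crac_flow_cfm: dict = field(default_factory=dict)       # CRAC id -> cfm
        tile_flow_cfm: dict = field(default_factory=dict)       # tile id -> cfm
        cutout_flow_cfm: dict = field(default_factory=dict)     # cut-out id -> cfm

        def total_heat_load_kw(self) -> float:
            return (sum(self.rack_power_kw.values()) + self.lighting_kw
                    + sum(self.crac_power_kw.values()))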
As in Schmidt (2004), the flow rate from the perforated tile in front of the rack is
less than that flowing through the rack. In most cases, this is one-quarter to one-half
of the flow rate through the rack. This was the case for both the p690 racks, which dissipated
approximately 7 kW (with a rack flow rate of 1100 cfm [31.1 m3/min]), and the
p655 racks, which dissipated approximately 19 kW (with a rack flow rate of 2400 cfm [68
m3/min]). If the cable cut-outs are included, the combined flow was in the range of one-
half to two-thirds of the rack flow rate, similar to that reported in Schmidt (2004). Even
though the local flow rate adjacent to the racks did not appear adequate, the convection
currents that occurred at the room level were sufficient to bring the local air temper-
atures for the high-powered racks within the temperature specifications.
An energy balance was performed using two different methods, and both
showed excellent agreement.
This case study describes a specific set of measurements from a high density data
center in order to provide details of the thermal profile. In addition, the data collection
techniques described can be used as a basis for collecting data from other data
centers or telecom rooms and provide a presentation format in which to display the
information.
REFERENCES
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers,
Inc.
Schmidt, R.R. 2004. Thermal profile of a high density data center—Methodology
to thermally characterize a data center. ASHRAE Transactions 110(2):635–42.
Schmidt, R., and E. Cruz. 2002. Raised floor computer data center: Effect on
rack inlet temperatures of exiting both the hot and cold aisle. Proceedings of
Itherm Conference 2002, San Diego, CA, pp. 580–94.
Tate Access Floors. 2004. Controlling air leakage from raised access floor cavities.
Technical Bulletin #216, Tate Access Floors, Inc., Jessup, MD.
This case study characterizes an 11,490 ft2 (1067 m2) high density data center,
focusing on a zone with heat dissipation greater than 8 kW, up to 26 kW per frame.
To gain insight into the operational health of the data center, a survey is conducted
that measures power consumption, airflow, and temperature. The results are
analyzed using a CFD model. The methodology used here is similar to that used in
the NCEP data center case study (Schmidt 2004).
MEASUREMENT TOOLS
The airflow through the perforated panels is measured with an Alnor Balometer
capture hood. Because the capture hood obstructs the airflow from the perforated
panel, the cfm readings must be adjusted by a correction factor. However, the Alnor
capture hood has a built-in back pressure compensation feature via a flap that
accounts for the flow impedance of the capture hood. The feature is used for every
measured perforated panel. Measurement accuracy based on the manufacturer’s
specification sheet is ±3% of the airflow reading. Cable cut-outs are measured with
a wind-vane anemometer air-velocity meter. Measurement accuracy, based on
the manufacturer’s data sheet, is ±2% of the velocity reading. Temperature measure-
ments of the IT equipment air intakes are made with an Omega model HH23 digital
thermometer with a type T thermocouple. Measurement accuracy is based on the manufacturer's specification sheet.
Power Measurements
The heat load of the data center, including the IT equipment, CRACs, and lighting,
is collected on-site with the help of SDSC personnel. The IT equipment in the data
center is supplied at 208 V alternating current (VAC), either three-phase or line-
to-line, and 120 VAC line-to-neutral. The CRACs are supplied with 480 VAC line-to-
line. Table 2.4 shows a breakdown of the power dissipation. The CRACs have a range
of heat output that varies significantly. The heat output of the IT equipment is calculated
from the sum of the amperages multiplied by the associated mains voltage. The
result is volt-amps (VA); a power factor of 0.95, provided by SDSC, is applied to determine
the watts, as most IT equipment has active mitigation to comply with the harmonics
emissions standard 61000-3-2 (IEC 2005). However, it is important to note that all the
IBM p690 and p655 three-phase servers, which are a significant load in the data center
and are not within the scope of 61000-3-2, include active mitigation and have a power
factor very close to one. Table 2.5 shows a breakdown of the zone with the high density
servers. The numbers in parentheses are the quantity of IBM server racks. The maxi-
mum error in the total heat load is estimated to be ±5%.
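As a concrete illustration of the watt calculation described above, the sketch below converts branch-circuit current readings to heat load using the supply voltage, the phase configuration, and the 0.95 power factor supplied by SDSC. The circuit list is hypothetical; only the voltages and power factor come from the text, and the p690/p655 loads would use a power factor near 1.0 as noted above.

    import math

    def circuit_watts(amps: float, volts: float, three_phase: bool,
                      pf: float = 0.95) -> float:
        """Convert one measured branch-circuit current to watts (illustrative)."""
        va = volts * amps * (math.sqrt(3) if three_phase else 1.0)
        return va * pf

    # Hypothetical circuit readings: (amps, volts, three-phase?)
    circuits = [(24.0, 208, True), (18.5, 208, False), (12.0, 120, False)]
    total_w = sum(circuit_watts(a, v, tp) for a, v, tp in circuits)
    print(f"Estimated IT heat load: {total_w / 1000:.1f} kW")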
Several p655s are measured directly at the frame through a communications
interface on the bulk power system. The measurements compare favorably to the
data from the power panels, although the power panel readings are consistently higher,
by less than 5%. This result is most likely due to measurement error from uncalibrated
instruments and to distribution losses.
The raised-access floor is also spot checked for perimeter openings in the cavity. None are
revealed, although an exhaustive review was not done. Finally, the distributed leakage
area between perforated panels is estimated to be 0.2%, based on the average
width of the gaps between floor panels. Typical percent leakage area values range
from 0.1% to 0.2% but can be as high as 0.35% (Radmehr et al. 2005). The model is
run several times with different distributed leakage values to arrive at an acceptable
comparison between measured and predicted values. However, the leakage may be
larger than assumed, since the data center has been in existence for some time and
there are areas of static pressure as high as 0.094 in. wg (23 Pa). All in all, there is only
a 1.7% difference in total cfm between measured and simulated airflows. Figure 2.25
shows the measured versus modeled results on a perforated panel-by-panel basis. The
average static pressure for the entire raised floor is 0.048 in. wg (12 Pa).
Initial runs of the model showed some wide excursions because of dampers that
are used on some of the perforated panels. Since the measured airflow rates are avail-
able and the pressure drop usually varies as the square of the airflow rate, it is possi-
ble to refine the model. The additional airflow resistance is entered in the polynomial
expression for a particular perforated panel, and the model is rerun. The airflow from
the perforated panels in row AA of Figure 2.21, closest to CRACs CCU8–12, shows
the largest discrepancy between measured and modeled values, but perforated
panels in adjacent rows show good correlation.
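The refinement described above can be illustrated with a small calculation. Assuming the usual square-law relation ΔP ≈ K·Q² for a damped panel, a loss coefficient K can be backed out from the measured flow and the local plenum static pressure and then added to that panel's resistance polynomial in the model. The following is a minimal sketch; the function names and the 350 cfm measured flow are hypothetical, while the two static pressures are the values quoted above.

    def loss_coefficient(static_pressure_inwg: float, measured_flow_cfm: float) -> float:
        """Back out K in dP = K * Q**2 from one measured operating point (illustrative)."""
        return static_pressure_inwg / measured_flow_cfm ** 2

    def flow_from_pressure(static_pressure_inwg: float, k: float) -> float:
        """Invert dP = K * Q**2 to estimate flow at another plenum pressure."""
        return (static_pressure_inwg / k) ** 0.5

    # Hypothetical damped panel: 0.048 in. wg plenum pressure, 350 cfm measured.
    k = loss_coefficient(0.048, 350.0)
    print(f"K = {k:.2e} in. wg per cfm^2")
    print(f"Predicted flow at 0.094 in. wg: {flow_from_pressure(0.094, k):.0f} cfm")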
A comparison of the cut-outs shows good correlation between measured and
modeled data for openings less than 25 in.2 (161 cm2); the percent difference is less than
15%. For the larger cut-outs, the predicted volumetric airflow is
significantly higher than the measured airflow. The reason for the discrepancy most
likely has to do with the measurement technique. Regardless of the size of the cut-
out, three measurements are taken to obtain data in the shortest amount of time.
Because the wind vane area is only 5 in.2 (32 cm2), it is difficult to obtain a representative
average velocity with just three readings over a large opening, for example 100 in.2
(645 cm2), before integrating over the area.
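The cut-out flow calculation implied above is straightforward: the velocity readings are averaged to a face velocity and multiplied by the opening area. The sketch below illustrates it with hypothetical readings; with only three readings over a large opening, the average may not be representative, which is the source of the discrepancy noted.

    def cutout_flow_cfm(velocity_readings_fpm, opening_area_in2: float) -> float:
        """Estimate cut-out flow: average face velocity (ft/min) times area (ft^2)."""
        avg_velocity_fpm = sum(velocity_readings_fpm) / len(velocity_readings_fpm)
        area_ft2 = opening_area_in2 / 144.0
        return avg_velocity_fpm * area_ft2

    # Hypothetical example: three wind-vane readings over a 100 in.^2 cut-out.
    print(f"{cutout_flow_cfm([850, 620, 410], 100.0):.0f} cfm")  # about 435 cfm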
Temperature Measurements
Temperatures are logged at the air intakes of the frames in the outlined zone
shown in Figure 2.21. A single temperature reading is captured for each frame at a
height of 68.9 in. (1750 mm), taken 2 in. (50 mm) in front of the covers in accordance
with ASHRAE (2004) guidelines. The temperature of the supply air exiting each
perforated panel is measured by the capture hood and recorded. The
return air to the CRACs could not be easily measured because of the return duct
extensions. Instead, the CRAC sensor data are recorded.
High-Density Analysis. The gross density of air-conditioning capacity is 138 W/
ft2 (1485 W/m2) and the current heat load density is 132 W/ft2 (1421 W/m2). Although
this information is not particularly useful in the overall operational health evaluation,
as it does not indicate airflow distribution problems, hot spots, etc., gross W/ft2 is
commonly used by real-estate operations personnel in figuring data center costs. Table
2.7 gives a view of the total airflow in the data center with estimated accuracy.
Table 2.7 shows that approximately half of the airflow in the data center is from
cut-outs and leakage. Although this air may provide some cooling benefit to the IT equipment,
prior studies (Schmidt and Cruz 2002a) show that the cut-out air is heated by the IT
exhaust air before returning to the IT air intakes.
The frame power, airflow rates, and inlet temperatures for the zone outlined in
Figure 2.21 are examined similarly to those in case study 1. The zone is divided into
three sections for further study. The environmental characteristics are shown in
Figures 2.26–2.30.
Figures 2.26–2.30 show that the air intake temperatures are within or below the
ASHRAE Class 1 recommended dry-bulb temperature range of 68°F–77°F (20°C–
25°C), regardless of frame power consumption and airflow rate through the IT
equipment.
Table 2.8 shows a correlation between average perforated panel supply airflow,
cut-out airflow, frame airflow, and frame temperature rise for a section and subsec-
tion. The average perforated panel supply airflow comes from the panels directly in
front of the frames for this analysis, even though a cold aisle may be 4 ft (1.2 m)
wide. For comparison to the average frame airflow, the average perforated panel
airflow is adjusted for the frame width of 1.25 panels. The frame airflows are as
follows: 2960 cfm (83.8 m3/min) for the p655, 1100 cfm (31.1 m3/min) for the p690,
and 800 cfm (22.7 m3/min) for the switch frames. The frame temperature rise is
calculated based on the difference between the average temperature of the air exiting
a given row of perforated panels and the aggregate return air temperature of the
CRAC sensors within the vicinity of that row.
Table 2.8 shows that the airflow rates from the perforated panels in front of the
frames are much less than the frame airflow rates. Despite the imbalance in airflow,
air-intake temperatures are within or below ASHRAE Class 1 recommendations, as
shown in Figures 2.26–2.30. If the perforated panel and cut-out adjacent to a frame
are combined to provide the airflow, the calculated temperature rise based on aver-
age frame power is shown in Table 2.9.
The calculated temperature rise in Table 2.9 is higher than the actual tempera-
ture rise in Table 2.8 for each section. Therefore, the conclusion is the same as that
for case study 1: chilled air within the data center migrates from low-density to high-
density zones, as the local chilled airflow rates for the high-density frames are much
lower than what might be considered adequate.
Airflow Comparison. An airflow comparison is made between the actual airflow
and a calculated airflow derived from the measured data center heat load and temperature
difference. The actual airflow, based on perforated panel measurements, limited cable cut-out
measurements, and modeling, is 185,700 cfm (5258 m3/min), with an estimated accuracy
of ±10% derived from various simulation results. The actual airflow range is therefore
167,130–204,270 cfm (4732–5784 m3/min). The average temperature difference from
perforated panel temperature measurements and CRAC sensors is 27°F (15°C), with an
estimated error of ±10%. The total heat load accuracy is estimated to be ±5%. The calculated
airflow is 177,664 cfm (5031 m3/min), with a range of 153,437–207,205 cfm
(4345–5867 m3/min). There is good overlap between the actual and calculated airflow ranges;
therefore, the comparison validates the data.
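The comparison above reduces to the same standard-air sensible heat relation used earlier, rearranged to give airflow [cfm] ≈ 3412 × q [kW] / (1.08 × ΔT [°F]). The sketch below reproduces the calculation and the uncertainty ranges using the figures quoted in this case study, assuming a total heat load of roughly 1517 kW (the 132 W/ft2 density applied to the 11,490 ft2 area); the exact constants and load used by the authors may differ slightly.

    def airflow_cfm(heat_load_kw: float, delta_t_f: float) -> float:
        """Airflow implied by a heat load and air-side temperature difference (standard air)."""
        return heat_load_kw * 3412.0 / (1.08 * delta_t_f)

    heat_kw, dt_f = 1517.0, 27.0            # approximate heat load and measured deltaT
    nominal = airflow_cfm(heat_kw, dt_f)
    low = airflow_cfm(heat_kw * 0.95, dt_f * 1.10)   # -5% load, +10% deltaT
    high = airflow_cfm(heat_kw * 1.05, dt_f * 0.90)  # +5% load, -10% deltaT

    measured_range = (167_130, 204_270)     # 185,700 cfm +/- 10%
    print(f"calculated: {nominal:,.0f} cfm ({low:,.0f}-{high:,.0f})")
    print("ranges overlap:", low <= measured_range[1] and measured_range[0] <= high)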
SUMMARY
This case study presents a detailed characterization of a high density data center.
On-site measurements of heat load, airflow, and temperature are
collected to study the data center. These parameters are used to build a CFD model
and run simulations to provide detail on parameters, such as cut-out and leakage
airflow, which could not be captured during the study either because of time or phys-
ical constraints. The model is validated based on the comparison between perforated
panel measurements and the results of the model, as the total airflow percent differ-
ence is only 2%. An airflow comparison also confirms that the actual airflow and
calculated airflow are in agreement.
The high density area of the data center with IBM equipment is studied. The key
IT equipment health indicator is the inlet temperature, which is within or below the
ASHRAE Class 1 guideline, even though the sum of the perforated panel and cut-out
airflow rates is as much as two-thirds less than the frame airflow rates. While the local
conditions do not seem adequate to satisfy and maintain the air intake temperatures
of the high-powered frames, the overall data center flow rate can handle the total data
center heat load. However, as more high-density equipment is installed, there is a risk
of locally elevated inlet temperatures, even though there may be sufficient cooling
capacity. The conclusions for case study 3 and case study 1 are similar.
REFERENCES
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
IEC. 2005. Electromagnetic compatibility (EMC)—Part 3-2: Limits—Limits for
harmonic current emissions (equipment input current ≤16A per phase). Inter-
national Electrotechnical Commission, Geneva, Switzerland.
Radmehr, A., R. Schmidt, K. Karki, and S. Patankar. 2005. Distributed leakage
flow in raised-floor data centers. Proceedings of InterPACK 2005, San Fran-
cisco, California, pp. 401–08.
Richardson, G. 2001. Traversing for accuracy in a rectangular duct. Associated Air
Balance Council Tab Journal Summer 2001:20–27.
Schmidt, R.R. 2004. Thermal profile of a high density data center—Methodology
to thermally characterize a data center. ASHRAE Transactions 110(2):635–42.
Schmidt, R., and E. Cruz. 2002a. Raised floor computer data center: Effect on
rack inlet temperatures of exiting both the hot and cold aisle. Proceedings of
Itherm Conference 2002, San Diego, CA, pp. 580–94.
TileFlow, trademark of Innovative Research, Inc., Plymouth, Minnesota.
This case study and case study 5 describe the thermal profile of a high-perfor-
mance computing cluster located, at different times, in two different data centers with
different thermal profiles. The high-performance Advanced Simulation and Comput-
ing (ASC) cluster, developed and manufactured by IBM, is code named ASC Purple.
At the time of this study, it was the world's third-fastest supercomputer, operating at a peak performance of 77.8
TFlop/s. ASC Purple, which employs IBM pSeries p575, model 9118, contains more
than 12,000 processors, 50 terabytes of memory, and 2 petabytes of globally acces-
sible disk space. The cluster was first tested in the IBM development lab in Pough-
keepsie, New York, and then shipped to Lawrence Livermore National Labs in
Livermore, California, where it was installed to support the national security mission.
Detailed measurements were taken in both data centers of electronic equipment
power usage, perforated floor tile airflow, cable cut-out airflow, CRAC airflow, and
electronic equipment inlet air temperatures. In addition to these measurements, the
physical features of the data center were recorded. Results showed that heat fluxes of
700 W/ft2 (7535 W/m2) could be achieved while still maintaining rack inlet air
temperatures within specifications. However, in some areas of the Poughkeepsie data
center, there were zones that did exceed the equipment inlet air temperature specifi-
cations by a significant amount. These areas will be highlighted and reasons given
why these areas failed to meet the criteria. Those areas of the cluster in Poughkeepsie
that did not meet the temperature criteria were well within the temperature limits at
the Livermore installation. Based on the results from these two data centers, neces-
sary and sufficient criteria are outlined for IT racks to achieve inlet air temperatures
that meet the manufacturers’ temperature specifications.
The raised-access floor was examined for openings around the perimeter of this portion of the room; where
open areas existed, cardboard was used to block the airflow path. This transferred the
entire heat load from the systems located in the area of interest to the CRACs.
The cluster was populated primarily with a high-powered server rack, p575
model 9118. The other systems were a mix of switching, communication, and stor-
age equipment. The key classes of equipment are highlighted in Figures 2.32–2.33.
The ceiling height as measured from the raised-access floor to the ceiling is 108 in.
(2.74 m), with a raised-access floor height of 28 in. (0.7 m). Twenty-six operational
Liebert model FH740C CRACs were primarily located around the perimeter of the
room, as shown in Figure 2.31. The servers are located in a cold-aisle/hot-aisle
arrangement with aisle widths of approximately 4 ft (1.2 m) and 6 ft (1.8 m). The cold
aisles were populated with 40% open perforated tiles. Since this was a test environment, no covers
were installed on any of the server racks.
MEASUREMENT TOOLS
The airflow through the perforated floor tiles, cable cut-outs, and CRACs was
measured with a commercial velometer. The unit was calibrated on a wind tunnel
and all measurements were adjusted accordingly (the velometer measured approx-
imately 4% and 7% low for the range of airflows measured with the 500 and 1000
cfm scales, respectively). In addition to this correction, the reading of the velometer
also needed to be corrected for the reduction in airflow caused by the flow imped-
ance of the unit. The unit was modeled using a CFD software package; the resulting
correction for the unit is given by the following:
Corrected flow for 500 cfm scale (cfm) = 1.11 × measured flow rate (cfm) – 16.6
Corrected flow for 1000 cfm scale (cfm) = 1.14 × measured flow rate (cfm) – 16.6
The results presented in the remainder of this case study include the above corrections.
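A direct implementation of the two corrections above is shown in the sketch below (illustrative only; the two linear expressions are the only values taken from the text).

    def corrected_flow_cfm(measured_cfm: float, scale: int) -> float:
        """Apply the CFD-derived velometer corrections given above."""
        if scale == 500:
            return 1.11 * measured_cfm - 16.6
        if scale == 1000:
            return 1.14 * measured_cfm - 16.6
        raise ValueError("scale must be 500 or 1000 (cfm)")

    print(corrected_flow_cfm(450.0, 500))   # 482.9 cfm
    print(corrected_flow_cfm(850.0, 1000))  # 952.4 cfm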
The temperatures were measured with a handheld meter using a type T ther-
mocouple. Since temperature differences and not absolute temperatures were of
most importance, the meter was not calibrated. The error in the thermocouple and
instrument was estimated to be ±1.8°F (±1.0°C). Temperature difference errors were
estimated to be ±1.8°F (±1.0°C), resulting primarily from cycling of the CRACs.
The input power of most of the racks was measured by connecting a laptop with
custom software to the server to monitor the input power to the rack. For comparison,
another tool was connected inline to the machine’s power cable. The comparisons
between the two power measurement tools were within 3% for the two systems
measured. For ease of measurement, the tool that could be connected to the input
power of the rack was used throughout the data center.
Power Measurements
Since the power of all the server racks (p575) was measured, a good estimate
of the total power dissipated by the cluster could be achieved. The rack powers,
grouped by region, are summarized in Table 2.10. The electronic
equipment dissipated 2.9 MW (with 1180 nodes operating). One node was a 2U
server with up to 12 nodes installed in a rack. All electronic racks operated with a
power factor correction of nearly 1.0 with three-phase 208 V input to the racks. For
those few racks that could not be measured, the power profile of each piece of equip-
ment was used to estimate the power dissipation.
The CRACs and lighting also contributed to the overall heat load in the data
center. Since none of the CRACs had humidification or reheat capabilities, the only
power was that for a 10 hp (7457 W) blower. This power was used for all the CRACs,
which resulted in a total heat load of 154,700 W. Lighting was provided by fluores-
cent fixtures rated at 101 W each. With 194 fixtures in this portion of the data center,
the resulting total lighting heat load was 19,600 W. Therefore, the total heat load in
this portion of the data center was 3.1 MW. The maximum error in this total heat load
value was estimated to be ±3%.
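The arithmetic behind the 3.1 MW figure can be laid out explicitly, as in the short sketch below, which simply sums the subtotals reported in the text.

    it_load_w = 2_900_000          # measured electronic equipment load (1180 nodes)
    crac_blower_load_w = 154_700   # reported total blower heat for the CRACs
    lighting_load_w = 194 * 101    # 194 fluorescent fixtures at 101 W each

    total_w = it_load_w + crac_blower_load_w + lighting_load_w
    print(f"Total heat load: {total_w / 1e6:.2f} MW")  # about 3.07 MW, i.e., ~3.1 MW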
Airflow Measurements
The airflow from the perforated floor tiles was measured with a velometer. This
flow tool fit exactly over one perforated tile and provided an excellent means of
rapidly profiling the flow throughout the data center. Measured flows from each tile
or cable cut-out were very stable, varying by less than 10 cfm (0.28 m3/min).
Measuring the cable cut-out airflows could not be done directly since it was
impossible to position the flow tool exactly over the cable cut-out due to cable inter-
ference. To eliminate this interference, a short, rectangular, open-ended box with a
small cut-out on the side for the cables to enter was installed on the bottom of the
instrument. Measurements were made of a select number of cable cut-outs that
included all the various sizes distributed throughout the raised-access floor.
The overall flow of the data center was based on all the measurements, the sum
of which was 267,305 cfm (7569 m3/min) (after adjusting for the calibration in the
flowmeter). Two additional areas of flow not accounted for are the leakage of air that
occurs between the perforated tiles and the leakage through openings around the
perimeter between the subfloor and raised-access floor. Based on an energy balance,
this leakage flow could be as high as 20% of the total flow from the CRACs.
Attempts to seal the perimeter with cardboard and tape were made, but many cables
penetrated the perimeter, allowing for air to escape into other parts of the data center.
Temperature Measurements
Air temperature measurements were made at the perforated floor tiles, at the
inlet to the racks at a height of 68.9 in. (1750 mm), and at the inlet to the CRACs.
The temperature differences between the raised-access floor exhaust air temperature
and the temperature of the air entering the rack at a height of 68.9 in. (1750 mm) are
shown in Table 2.10. Temperatures were taken in accordance with ASHRAE guide-
lines (ASHRAE 2004)—2 in. (50 mm) in front of the rack covers.
The measurement point near the top of the rack (1750 mm) was selected for
several reasons. Both modeling and experimental data suggest that any hot spots can
be captured with measurement of this point near the top of the rack. Recirculation
of hot air over the top of the rack will definitely be captured by measuring this point.
Secondly, only one point of measurement was selected in order to minimize the time
needed to collect data and still capture key thermal characteristics of the data center.
Thermal Profiles
Table 2.10 shows the thermal profile of each region displayed in Figures 2.32
and 2.33. The heat fluxes in each of these regions were very high, ranging from 327–
720 W/ft2 (3520–7750 W/m2), corresponding to the very high rack powers shown
in column 5 of Table 2.10. The average rack heat load varied from approximately
24,000–27,000 W. These rack powers resulted in rack heat densities (based on rack
footprint) of approximately 1800–2000 W/ft2 (19,000–22,000 W/m2).
In Figure 2.32, regions 2–4 and 6–8 (shown between the vertical dotted lines)
have the most racks that exceed the inlet air temperature specification. In these
regions, 73% of the racks exceeded the inlet air temperature specification (50°F–89.6°F
or 10°C–32°C for the p575 racks). If one considers that the racks in these regions are
cooled by the CRACs adjacent to these regions (CRACs A–D and I–L), then one can
perform a simple energy balance to determine if adequate cooling exists. The power
dissipated by the racks in these six regions is 382 tons (1,342,000 W), while the air-
conditioning capability in this region is only 240 tons (844,000 W)— quite a heat load
imbalance. Regions 11–14 and half of region 10 in Figure 2.33 (shown between the
vertical dotted lines) displayed similar conditions, where 67% of the racks in these
regions exceeded the rack inlet air temperature specifications. The power dissipated
by the racks in these regions was 230 tons (809,000 W), while the air-conditioning
capability of CRACs P, Q, T, S, W, and V was only 180 tons (633,000 W).
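The regional comparisons above amount to converting rack power to refrigeration tons (1 ton of refrigeration is approximately 3517 W) and comparing it with the capacity of the nearby CRACs. The sketch below reproduces the check for regions 2–4 and 6–8 using the figures quoted in the text; it is illustrative only.

    TON_W = 3517.0  # 1 ton of refrigeration in watts

    rack_load_w = 1_342_000      # racks in regions 2-4 and 6-8
    crac_capacity_w = 844_000    # CRACs A-D and I-L serving those regions

    print(f"Rack load:     {rack_load_w / TON_W:.0f} tons")      # ~382 tons
    print(f"CRAC capacity: {crac_capacity_w / TON_W:.0f} tons")  # ~240 tons
    print(f"Shortfall:     {(rack_load_w - crac_capacity_w) / 1000:.0f} kW")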
As stated in Schmidt and Iyengar (2005a), gaps between racks that dissipate
high heat loads and have high flow rates may cause the hot exhaust air to travel into
the next cold aisle and be ingested into those racks. For almost the entire ASC Purple
layout in the Poughkeepsie system, there were gaps between the racks that allowed
hot air from the exhaust of the racks to blow into the adjacent cold aisle and be
ingested into the air inlets of the racks. In order to verify this effect, the racks with
gaps between them in region 13 were tested with and without blockages. To create
the blockages, foam core sheets were cut to size and placed between each of the
racks. Air inlet temperatures of the top nodes in four racks were measured before and
after the blockages. These results are displayed in Figure 2.34. Each of the racks
showed lower air inlet temperatures by as much as 10.8°F (6°C).
Schmidt (2004) and Schmidt and Iyengar (2005a, 2005b) stated that if the flow
rate from tiles directly in front of a rack was one-quarter to one-half (0.25–0.5) of the
rack flow rate, and the underfloor exhaust air temperature was 59°F (15°C) or less,
then conditions would support the manufacturers’ air temperature specification.
From a review of all the regions reported in this study, the tile flow rates did fall within
this range and, in many cases, the underfloor exhaust temperature was 59°F (15°C)
or below, but the rack inlet air temperature was not within the temperature specifi-
cation of 50°F–89.6°F (10°C–32°C). The two reasons for regions exceeding rack
temperature specifications, even though the perforated tile flow rate appeared to be
adequate, were that the heat load capabilities of the nearby CRACs were not suffi-
cient, and gaps existed between racks, allowing hot air to infiltrate into the cold aisle.
The remaining regions (1, 5, 9, and the half of region 10 adjacent to region 9) have racks that
are mostly within the rack temperature specifications. Unlike the regions that exceed
the specifications, these regions have adequate CRAC cooling capacity.
SUMMARY
Based on the test results of the ASC Purple cluster in the IBM data center, a set of
necessary and sufficient conditions for achieving rack inlet air temperatures within
specifications can be identified; these conditions are enumerated, together with the
Livermore results, in the summary of case study 5.
REFERENCES
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
Schmidt, R.R. 2004. Thermal profile of a high density data center—Methodology
to thermally characterize a data center. ASHRAE Transactions 110(2):635–42.
Schmidt, R.R., and M. Iyengar. 2005a. Effect of data center layout on rack inlet air
temperatures. Proceedings of InterPACK 2005, San Francisco, California.
Schmidt, R.R., and M. Iyengar. 2005b. Thermal profile of a high density data cen-
ter. ASHRAE Transactions 111(2):765–77.
This case study reports the results of thermal profile testing of IBM’s ASC
Purple, installed at Lawrence Livermore National Lab (see case study 4 for thermal
profile results for the Poughkeepsie installation). Those areas of the cluster in
Poughkeepsie that did not meet the temperature criteria were well within the temper-
ature limits at the Livermore installation. Based on the results from these two data
centers, necessary and sufficient criteria are outlined for IT racks to achieve inlet air
temperatures that meet the manufacturers’ temperature specifications.
After ASC Purple was tested in the Poughkeepsie facility and all components
were verified as working properly to the customer’s specifications, the cluster of
racks was disassembled, packed, and shipped to Lawrence Livermore National Labs
in Livermore, California, where it was reassembled. The layout of the data center is
shown in Figure 2.35. Once the cluster was installed, approximately half the data
center was populated with IT equipment, as shown in the figure. The computer room
is 126 × 193 ft (38.4 × 58.8 m) and the ASC Purple cluster occupied a space 84 ×
88 ft (25.6 × 26.8 m). The computer room has a capability of supporting 7.5 MW of
IT equipment, while the ASC Purple system generates approximately 3.2 MW (max
of 1280 nodes operating) of electrical load when fully powered (not all the racks
were powered in the Poughkeepsie lab because of power/temperature constraints).
The racks were arranged in cold-aisle/hot-aisle formation, as shown in the figure,
and installed on a raised-access floor. The IT equipment is installed on the second
floor of a two-story data center. The lower level is 15 ft (4.6 m) high, and all the air-handling
equipment is installed in this area, as shown in Figure 2.36. This area is slightly pres-
surized and supplies air through multiple openings at the top of the floor. Directly
above these large (10 × 16 ft or 3.05 × 4.9 m) multiple openings is a traditional raised-
access floor, 4 ft (1.2 m) high, where air is distributed to the perforated tiles arranged
throughout the data center. The chilled air is distributed to the perforated tiles, and
the hot exhaust air is then returned to the AHUs through openings in the ceiling and
in the side walls. The height of the ceiling in the computer room is 10 ft (3.05 m).
There were three types of perforated tiles—25% open, 56% open with dampers, and
56% open without dampers. The general airflow path in the data center is depicted
in Figure 2.36, along with a picture of the mechanical utility room and a picture of
ASC Purple installed on the raised-access floor.
MEASUREMENT TOOLS
A flow measurement tool similar to that used at the Poughkeepsie data center
(case study 4) was used at Livermore. The tool included calibration features, so no
adjustments were needed to correct the measured results.
The temperatures were measured with a handheld thermocouple meter using a type
T thermocouple. Since temperature differences and not absolute temperatures were of
most importance, the meter was not calibrated, although the error in the thermocouple and instrument was estimated to be about ±1.8°F (±1.0°C), as in case study 4.
Figure 2.38 Livermore data center ASC Purple perforated tile flow.
was chosen, then facility operators would push their chilled-air temperatures even
lower, thereby wasting energy and not really improving IT equipment reliability.
Although measurements of flows from all the perforated tiles were not possible,
measurements were taken for as many perforated tiles as time allowed. The perforated
tile locations for ASC Purple’s area are shown in Figure 2.35. All the perforated tiles
directly in front of the racks were measured. Those in the middle or at the ends of rows
were not measured. The perforated tile flow rates for the initial and final layouts are
shown in Figure 2.38. Although the tile layout remained almost the same, the perforated
tile openings were modified. Some 25% open tiles were replaced with 56% open tiles
with dampers, and some tiles with dampers were removed to provide even higher flows.
In reviewing Figure 2.38, more tiles had flow at the upper range for the final layout, even
though two AHUs were turned off, which indicates that the floor provided a high imped-
ance to the flow, and opening up the tiles allowed more flow through. The total flow
through the tiles measured in the initial layout was 145,000 cfm (4,106 m3/min), while
in the final layout it was 177,000 cfm (5,012 m3/min). More of the air was directed
where it was needed, even though the total flow from the AHUs decreased. Obviously
the flow from the perforated tiles measured is much less than the total flow delivered
by the operating AHUs. Airflow from the AHUs passes through cable openings, cracks
between tiles, and to areas of another data center space with IT equipment and associ-
ated perforated tiles.
The airflow from the perforated tiles in front of the p575 racks was examined,
and the average of all these flows was 690 cfm (19.5 m3/min). However, the tiles in
front of the p575 racks took up an area of more than one tile. In most of the areas
measured, the middle tile in the cold aisle, which was three tiles wide, also contrib-
uted to the air ingested into the rack. Taking this middle tile into consideration, the
total flow at the front of the p575 rack was approximately 1290 cfm (36.5 m3/min).
As in prior measurements of data centers, a rule of thumb was established for what
amount of flow was required from a floor tile compared to the flow through a rack.
The nominal flow through the rack was approximately 2800 cfm (79.3 m3/min). The
flow through the tiles immediately in front of the rack (1290 cfm or 36.5 m3/min)
was slightly less than half the flow through the rack (2800 cfm or 79.3 m3/min),
which again confirmed this rule of thumb. The remaining flow is provided elsewhere
in the data center through the mixing of hot and cold air from cable openings, leakage
from the small cracks between tiles, etc.
Examination of the Livermore data center finds good agreement with the crite-
ria established in case study 4 for how to achieve rack inlet temperatures within the
air temperature inlet specification:
1. The flow rate from the tiles in front of the racks falls within the range of one-
quarter to one-half (0.25–0.5) the flow rate through the rack.
2. The exhaust chilled-air temperature is below 59°F (15°C).
3. The air-conditioning capability is more than adequate given the heat load. The
eight operating AHUs (final state after the optimization of the raised-access
floor) had a combined capacity of 1160 tons (4,079,000 W), while the total heat load of the cluster was approximately 3.2 MW.
Finally, measurements were taken of the acoustical levels within the data center.
The measurements were taken with a Quest Technologies Noise Pro series DLX dosim-
eter. Sound pressure levels were measured and are shown in the layout of Figure 2.35.
In the middle of the rows, the sound pressure levels ranged from 85–87.1 dB, while on
the ends of the rows they ranged from 75–83.1 dB. The AHUs located in the mechanical
utility room on the lower level displayed values of 72–74.5 dB. If the sound pressure
levels exceed 90 dB, then hearing protection is required (USDOL 2006). Similar
requirements are enforced by the European Union (ERA 2003).
SUMMARY
From the results of both the Livermore production environment and the Pough-
keepsie test environment (see case study 4), a clearer picture evolves of the condi-
tions necessary to ensure that the rack inlet temperature will meet the manufacturers'
temperature specifications. Based on prior studies (Schmidt 2004; Schmidt and Iyengar
2005a, 2005b) and the results at Livermore, it is necessary that the flow exiting the
floor tiles in front of the rack be one-quarter to one-half (0.25–0.5) of the rack flow
rate at a supply temperature of 59°F (15°C) or less in order to meet the inlet air
temperature for the rack. However, as the results on the test floor in Poughkeepsie
indicate, these are not the only required conditions. Even though the one-quarter to
one-half rule was met in Poughkeepsie and the supply chilled-air temperature was
less than 59°F (15°C) in most cases, many rack inlet air temperatures were much
higher than the rack specification. Two conditions existed that contributed to these
rack inlet air temperatures exceeding specifications. One was the gaps between the
racks that allowed high-velocity hot exhaust air to be blown into the cold aisles,
upsetting the cold-aisle/hot-aisle arrangement. The second was that the CRACs in
the region of the high-powered racks were not sufficient to handle the high heat load.
Based on these two data centers’ results and the results of Schmidt (2004) and
Schmidt and Iyengar (2005a, 2005b), the necessary and sufficient conditions
required to maintain the inlet air temperature into a rack within the manufacturers'
specifications are fourfold (a simple checking sketch follows the list):
1. One-quarter to one-half (0.25–0.5) of the rack flow rate exhausting from the
perforated tiles directly in front of the rack
2. Supply chilled-air temperature below 59°F (15°C) (or higher if the chilled air
exhausting from the tiles is higher)
3. No gaps between racks that allow high-powered/high-flow hot exhaust air to be
blown into the next cold aisle
4. Equal or greater air-conditioning capability in the region of the high-powered racks
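The four conditions can be combined into a single check, as in the minimal sketch below. It is illustrative only: the thresholds are the values quoted in this chapter, the tile flow, rack flow, AHU capacity, and heat load in the example call are the Livermore figures reported above, and the 58°F supply temperature is simply an assumed value below the 59°F (15°C) limit.

    def rack_inlet_ok(tile_flow_cfm: float, rack_flow_cfm: float,
                      supply_temp_f: float, gaps_between_racks: bool,
                      local_crac_capacity_w: float, local_heat_load_w: float) -> bool:
        """Check the four conditions listed above (illustrative sketch)."""
        # Condition 1: tile flow of one-quarter to one-half of rack flow
        # (checked here as at least one-quarter).
        flow_fraction_ok = tile_flow_cfm / rack_flow_cfm >= 0.25
        supply_temp_ok = supply_temp_f <= 59.0          # condition 2
        no_gaps = not gaps_between_racks                # condition 3
        capacity_ok = local_crac_capacity_w >= local_heat_load_w  # condition 4
        return flow_fraction_ok and supply_temp_ok and no_gaps and capacity_ok

    # Livermore example: 1290 cfm tile flow vs 2800 cfm rack flow, no gaps,
    # 4,079,000 W of AHU capacity against roughly 3,200,000 W of load.
    print(rack_inlet_ok(1290, 2800, 58.0, False, 4_079_000, 3_200_000))  # True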
This case study describes a specific set of measurements from a high density data
center in order to provide details of the thermal profile. In addition, the data collection
techniques described can be used as a basis for collecting data from other data centers
or telecom rooms and provide a presentation format in which to display the information.
REFERENCES
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
ERA. 2003. European workplace noise directive. Directive 2003/10/EC, European
Rotogravure Association, Munich, Germany.
Schmidt, R.R. 2004. Thermal profile of a high density data center—Methodology
to thermally characterize a data center. ASHRAE Transactions 110(2):635–42.
Schmidt, R.R., and M. Iyengar. 2005a. Effect of data center layout on rack inlet air
temperatures. Proceedings of InterPACK 2005, San Francisco, California.
Schmidt, R.R., and M. Iyengar. 2005b. Thermal profile of a high density data cen-
ter. ASHRAE Transactions 111(2):765–77.
USDOL. 2006. Occupational noise exposure. Hearing Conservation Standard 29
CFR 1910.95, United States Department of Labor, Occupational Safety and
Health Administration, Washington, DC.
All data centers are designed to an average heat density expressed in W/ft2 (W/m2). This
represents the total power dissipated in the room divided by the total area of the
space. In practice, data centers are not populated according to this average heat
density but are actually pockets of various heat densities above and below the aver-
age. The data center cooling system is designed for the average heat density but has
the ability to cool areas of the data center at levels higher than average. Understand-
ing how the air-conditioning system performs gives the data center operator the
opportunity to coordinate placement of the highest-power hardware loads in areas
where maximum cooling can be achieved. This case study looks at a data center
where the hardware placement was coordinated so that the highest power loads were
placed where the highest static pressure in the raised-access floor was expected. The
cooling design is a raised-access floor supply with a ceiling plenum return.
This case study describes the thermal performance of a 4770 ft2 (445 m2) data
center that measures approximately 53 × 90 ft (16.2 × 27.5 m). The data center houses
a variety of servers from small rack-mounted servers to large stand-alone partitioned
servers and blade servers. The space includes the network cable frame and storage
area network (SAN) infrastructure required for the production applications residing
on the servers. The space, formerly a tape library, was converted to a data center in
2003. The load has grown from an initial level of 45 W/ft2 (481 W/m2) to the current
level of 105 W/ft2 (1123 W/m2). The heat density in the room is not uniform; some
cabinet loads are less than 2 kW, while some large servers exceed 20 kW. More details
are given later in the “Measurement and Results” section.
To quantify the thermal performance of the data center, temperature measure-
ments were taken at various locations. Most important, readings were taken at the
hardware inlet as recommended in Thermal Guidelines for Data Processing Environ-
ments (ASHRAE 2004). In addition, supply and return air temperature and airflow
measurements were recorded.
The network cable frames were positioned near the CRACs in an area expected to have low static pressure. The design intent was to posi-
tion the low heat density equipment where the low static pressure was expected and
the high heat density equipment where the highest static pressure was expected. The
cabinets are installed in a traditional hot-aisle/cold-aisle arrangement with a front-
to-front and back-to-back placement. The computer room air-handling units
(CRAHs) are arranged parallel to the server cabinets and the rows of freestanding
hardware. All CRAHs are down-flow type, supplying air to a 24 in. plenum, and are
ducted directly to the ceiling for the return air path. The depth of the ceiling plenum
is approximately 5 ft. Two units are placed against the wall at the west end of the data
center, two more units are placed approximately 34 ft away facing the same direc-
tion, and two more units are placed an additional 46 ft away at the east end of the
data center facing the direction opposite the other four units. The total of six CRAHs
includes one for redundancy. The electrical equipment in the data center consists of
eight PDUs and two remote power panels (RPPs). The RPPs are fed from the PDUs
and are required to provide additional circuit positions. Control panels for pre-action
sprinklers, a smoke-detection system, and building management system (BMS)
monitoring are located on the perimeter walls. Lighting is pendant hung, which
minimizes the impact to the ceiling grid and is coordinated with the hardware layout.
The ceiling consists of standard tiles and return-air grilles. The position of the return-
air grilles can be changed to accommodate changing heat loads.
MEASUREMENT TOOLS
The instruments in Table 2.11 were used to record data in this case study.
Power Measurements
Measurements of the input power to the data center were made at several levels
to provide an accurate representation of the power of the various types of equipment
distributed throughout the space. The power to all the hardware is provided through
eight PDUs located in the room. Each PDU has a digital display indicating the current,
voltage, and power in kW. These readings are listed in Table 2.12. The
total power provided by the PDUs is 504 kW.
The server cabinets each have two three-phase power strips to provide power to
the servers. Each power strip is connected to a different uninterruptible power
supply (UPS) source, providing a redundant supply for all dual-cord loads. The
power strips also have digital displays of the current on each phase. A survey of the
power strips provides the load in each of the cabinets (see Table 2.13). The cabinets
are separated into north and south groups for this analysis, corresponding to their
relationship to the air-conditioning placement. The loads range from 1.7–5.52 kW
per cabinet. Table 2.14 lists the loads in each of the 84 cabinets.
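For readers who want to reproduce the cabinet figures, the conversion from power strip current readings to kW follows the standard three-phase relation. The sketch below is illustrative only; the 208 V supply voltage and 0.95 power factor are assumptions, not values taken from the study.

```python
import math

def strip_load_kw(phase_amps, v_line_line=208.0, power_factor=0.95):
    """Approximate three-phase load (kW) from per-phase current readings.

    Uses the standard relation P = sqrt(3) * V_LL * I * PF with the average
    phase current. The 208 V supply and 0.95 power factor are illustrative
    assumptions, not values reported in the case study.
    """
    i_avg = sum(phase_amps) / len(phase_amps)
    return math.sqrt(3) * v_line_line * i_avg * power_factor / 1000.0

# Example: two strips per cabinet, one on each UPS source.
cabinet_kw = strip_load_kw([6.1, 5.8, 6.0]) + strip_load_kw([5.9, 6.2, 6.0])
print(f"Estimated cabinet load: {cabinet_kw:.2f} kW")
```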
The freestanding servers and blades do not have digital displays of the power
used by each device. Individual circuit measurements were taken by using a Fluke
87-3 multimeter at the PDUs and RPPs. The freestanding devices are also grouped
as north and south for this analysis. The loads range from 2.0–23.1 kW. Table 2.15
lists the loads of the freestanding devices.
The server cabinets employ fans in the top to assist in removing heat from the servers,
discharging it vertically toward the ceiling return plenum. The power for the fans
is supplied by the power strips in the cabinets; therefore, it is included in the total power
indicated for each cabinet. The fans are variable speed, controlled to a preselected,
adjustable discharge temperature. The power draw for each fan is 0.5
amps at full speed. The maximum power used for fans in each cabinet is 0.35 kW.
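As a rough cross-check of the fan power figure, six fans at 0.5 A each correspond to about 0.36 kW if a nominal 120 V supply is assumed (the fan voltage is not stated in the study):

```python
# Rough check of the cabinet fan power figure. The 120 V fan supply is an
# assumption for illustration; the fan voltage is not stated in the study.
fans_per_cabinet = 6        # six fans on top of each cabinet
amps_per_fan = 0.5          # A per fan at full speed (from the case study)
assumed_voltage = 120.0     # V, illustrative assumption

max_fan_power_kw = fans_per_cabinet * amps_per_fan * assumed_voltage / 1000.0
print(f"Estimated maximum fan power per cabinet: {max_fan_power_kw:.2f} kW")
# ~0.36 kW, in line with the 0.35 kW cited in the text.
```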
The total power read at the devices and at the power strips in the server cabinets
was 483.6 kW, or 95.9% of the power readings at the PDUs. Figure 2.40 illustrates
the total power in each row of equipment.
Airflow Measurements
Airflow was measured at the CRAHs and around the data center space. In addi-
tion, static pressure was measured across the data center floor to confirm the design
intent of locating the lowest loads in the area of lowest static pressure and the highest
loads in the area of highest static pressure.
Airflow in the data center begins with the CRAH. In this case, the supply air is
discharged directly into the raised-access floor plenum from the downflow design
of the CRAHs. The actual airflow was measured at 105,000 cfm and compared to
the published specification for the units. Table 2.16 lists the results.
The cooling air leaves the underfloor plenum in one of two ways: through
controlled openings and through uncontrolled openings. In this data center, the
controlled openings are the bases of the server cabinets with the adjustable grom-
mets and the high volume perforated tiles in the areas of the freestanding servers. The
uncontrolled openings include cable openings at the hardware, power cable open-
ings under PDUs, and leakage around raised-access floor tiles.
The supply of air through the controlled openings is dependent on the static
pressure present. Static pressure was measured across the data center floor in a grid
pattern, and the results are shown in Figure 2.41.
With the static pressure known, the flow through the controlled openings was
predictable.
Performance curves for the high volumetric airflow perforated tiles used near
the large freestanding servers showed an expected volume of 945 cfm (26.8 m3/min)
for each tile for the range of static pressure present.
The adjustable grommets in the bases of the server cabinets were tested at various
opening positions and various static pressures by the cabinet manufacturer. The results
showed an average plenum supply volumetric airflow of 540 cfm (15.3 m3/min) in
each cabinet at the static pressures present in this data center.
The total airflow through all controlled openings was then calculated by adding
up all the high volume perforated tiles and all the cabinet base openings. The results
showed a total controlled volume of 81,286 cfm (2301.8 m3/min) compared to a total
supplied volume of 105,000 cfm (2973 m3/min); 77.4% of the cooling was deliv-
ered through controlled openings, and 22.6% entered the space through other means.
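The controlled-opening total can be reproduced by summing the per-opening flows quoted above. In the sketch below, the 84 cabinets and per-opening flows come from the study, while the tile count is an assumed value used only to illustrate the tally:

```python
# Tally of airflow through controlled openings. The 84 cabinets and the
# per-opening flows come from the case study; the tile count is an assumed
# value chosen only to illustrate the calculation.
cabinets = 84
cfm_per_cabinet_base = 540       # cfm through the adjustable grommets
high_volume_tiles = 38           # assumed number of high-volume perforated tiles
cfm_per_tile = 945               # cfm per tile at the measured static pressure

controlled_cfm = cabinets * cfm_per_cabinet_base + high_volume_tiles * cfm_per_tile
supplied_cfm = 105_000           # measured at the CRAHs

controlled_pct = 100.0 * controlled_cfm / supplied_cfm
print(f"Controlled openings: {controlled_cfm:,} cfm ({controlled_pct:.1f}% of supply)")
print(f"Uncontrolled paths:  {supplied_cfm - controlled_cfm:,} cfm "
      f"({100.0 - controlled_pct:.1f}%)")
```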
Airflow through the server cabinets is produced by the six fans on top. Each fan
is rated for 225 cfm (6.4 m3/min) at full speed, or 1350 cfm (38.2 m3/min) total for
the cabinet. The total flow into the cabinet is, therefore, a combination of the cooling
air from the raised-access floor plenum supplied through the grommets in the cabinet
base and the room air pulled into the cabinet through the openings in the front and
rear doors. When the fans run at full speed, this results in 540 cfm (15.3 m3/min) of
plenum air mixed with 810 cfm (22.9 m3/min) of room air. The air entering through
the front door of the cabinet raises the temperature of the plenum air to produce a
hardware-entering temperature in the recommended range, and the air entering
through the rear door lowers the hardware discharge temperature before it is
exhausted from the cabinet. The volume of air entering through the cabinet doors
varies as the speed of the cabinet fans changes to maintain the discharge air setpoint.
The amount of plenum air entering through the cabinet base can be reduced by clos-
ing the adjustable grommets.
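Because the plenum air and the room air mix ahead of the hardware inlets, the resulting inlet temperature can be approximated with a flow-weighted average. Only the flow split below is taken from the study; the two temperatures are assumptions for illustration:

```python
def mixed_temperature(flow1_cfm, temp1, flow2_cfm, temp2):
    """Flow-weighted mixed-air temperature (assumes equal air density)."""
    return (flow1_cfm * temp1 + flow2_cfm * temp2) / (flow1_cfm + flow2_cfm)

# Flow split from the case study (fans at full speed); temperatures assumed.
plenum_cfm, plenum_temp_f = 540, 55.0   # supply air from the raised-access floor
room_cfm, room_temp_f = 810, 72.0       # room air pulled through the front door

inlet_f = mixed_temperature(plenum_cfm, plenum_temp_f, room_cfm, room_temp_f)
print(f"Approximate hardware inlet temperature: {inlet_f:.1f} F")
```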
All return air flows back to the CRAHs through the ceiling grilles positioned
over the hardware. To verify that the return air is balanced, airflow measurements
were taken at the return grilles (see Figure 2.42). The results showed that there were
no areas of low velocity or stagnant flow. The presence of good velocity across all the
return grille areas is an indication that the hot discharge air is being returned to the
CRAHs without recirculation to the cabinet or large server intakes. The lowest return
air volume was measured in an area directly above three PDUs. If the data center
operator wanted to change the distribution of the return air within the space, the ceil-
ing grilles could be repositioned.
Temperature Measurements
The most important temperature in a data center is the air inlet temperature
to the hardware. This confirms that adequate cooling is being provided accord-
ing to ASHRAE recommendations (ASHRAE 2004). The second most impor-
tant temperature in the data center is the return air temperature to the CRAHs.
Heat Densities
This data center exhibited a wide range of heat densities across a relatively small
area. The server cabinets contained loads ranging from 1.7–5.52 kW. Based on the
cabinet footprint, this produced a heat density range of 208–675 W/ft2. The free-
standing devices produced a wide range of loads from 2.0–23.1 kW in a single enclo-
sure. Based on the footprint of the device, this resulted in a maximum power density
of 1604 W/ft2.
The power density across the room is not evenly distributed. The east end of
the room containing the freestanding large servers, blades, and some cabinets
occupies 40% of the space but consumes 60% of the power. This produces a heat
flux of 150 W/ft2 (1600 W/m2). The west end of the room containing the majority
of the server cabinets and network frames occupies 60% of the space but consumes
40% of the power. This produces a heat flux of 73.2 W/ft2 (788 W/m2).
The overall room area of 4770 ft2 (443 m2) with the total PDU load of 504 kW
resulted in an average density of 105.6 W/ft2 (1136.7 W/m2).
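The densities above follow directly from dividing the measured load by the relevant floor area. A minimal sketch of the arithmetic; the cabinet footprint used is an assumption back-calculated from the densities quoted earlier, not a dimension reported in the study:

```python
FT2_PER_M2 = 10.7639

def density_w_ft2(load_kw, area_ft2):
    """Heat density in W/ft2 from a load in kW over a floor area in ft2."""
    return load_kw * 1000.0 / area_ft2

# Room average: 504 kW of PDU load over 4770 ft2 of floor area.
room_avg = density_w_ft2(504, 4770)
print(f"Room average: {room_avg:.1f} W/ft2 ({room_avg * FT2_PER_M2:.0f} W/m2)")

# Heaviest cabinet, using an assumed footprint of about 8.2 ft2.
cabinet = density_w_ft2(5.52, 8.2)
print(f"Heaviest cabinet: {cabinet:.0f} W/ft2 over its footprint")
```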
Temperatures
The air temperature leaving the server cabinets is controlled at each cabinet by
a setpoint that controls the speed of the fans on top of the cabinets. Most of the cabi-
nets in this data center maintain a discharge temperature of 81°F (27.2°C). The large
free-standing devices exhibited higher discharge temperatures, with most in the
88°F–95°F range (31.1°C–35°C). However, one blade cabinet had a discharge
temperature of 112.2°F (45.5°C).
For the server cabinets, comparing the graphs of the inlet temperatures
(Figures 2.43–2.44) to the cabinet discharge temperatures (Figure 2.45) shows a
difference of less than 20°F (11°C). This results from tempering the hardware
discharge temperature with room air brought in through the rear door.
For the free-standing devices, comparing the graphs of the inlet temperatures to
the discharge temperatures shows a difference of over 40°F (22.2°C). Despite the
high discharge temperatures, the return air temperatures measured at the ceiling
return grilles were below 80°F (26.7°C), with the exception of the grilles above
column line DX of Figure 2.39, where it reached 83°F (28.3°C). In all cases, supply
air from the raised-access floor entering the room through uncontrolled openings
mixed with the discharge air from both the cabinets and the free-standing devices to
produce a return temperature well within the capacity of the CRAHs.
The server inlet temperatures were recorded at the ends and middle of each cabi-
net row and at the ends and middle of each row of free-standing devices. In almost
all cases, the low readings were slightly below the range recommended by ASHRAE
(2004), and the high readings were well below the high limit. This indicates that
adjustments can be made to raise these temperatures. In the case of the server cabi-
nets, the grommet openings can be reduced to introduce less supply air from the
raised-access floor to each cabinet. In the case of the free-standing devices, the
adjustable perforated tiles can be closed down incrementally to supply less cold air
to the server intake. If necessary, the supply air temperature can also be raised.
Static Pressure
The static pressure survey confirmed that the area of highest static pressure
was at the east end of the data center. This corresponded to the area of highest
installed power load. With this information, the data center operator can feel confi-
dent in being able to cool power densities higher than the average design of the
room. The static pressure increases from the west end of the room to the east end
of the room, with a slight dip in front of the CRAHs in the middle of the room. This
is the result of the increase in velocity directly in front of the middle units. The
lowest static pressure was measured at the west end of the room near the CRAHs
and near the low power density network cable frames. The static pressure readings
at the west end were not consistent from the north side to the south side. Upon
inspection of the underfloor plenum, it was discovered that a large concentration
of network cable existed directly downstream from the north CRAH. This
produced blockage sufficient to cause the unexpectedly high static pressure in
column line CZ of Figure 2.39.
In general, the static pressure in the data center is quite high. This is the result
of all CRAHs operating, including the redundant unit. A further study would survey
the static pressure across the floor in six different modes, each with a different CRAH
turned off. This would allow the operator to know if the data center cooling is suffi-
cient in all redundant modes.
Return Temperature
The graphs of the return air temperatures show an increase from the west end
of the room to the east end of the room. This follows the profile of the increasing
power at the east end of the room. With the return air temperatures well within the
range of the CRAHs, this is a perfectly acceptable situation. If the power density
increases further at the east end of the room, the high static pressure at that end would
allow the data center operator to introduce more cooling air from the raised-access
floor to maintain the return air temperatures in an acceptable range.
ENERGY BALANCE
To confirm the accuracy of the data presented, an energy balance was performed: the
airflow implied by the measured power input and temperature difference was compared
to the airflow calculated from the floor openings to see if they match. The overall data center airflow through
the cabinet bases, the high-volume perforated tiles, and cable openings was calculated
to be 97,641 cfm (2765 m3/min). Using the CRAH average temperature difference
(52.8°F [11.6°C] supply; 72.8°F [22.7°C] return) and the overall heat load for the
space (504 kW), the expected airflow is 92,888 cfm (2630 m3/min). This is within
4.9% of the calculated value. Both of these values are less than the measured airflow
taken at the CRAH returns of 105,000 cfm (2973 m3/min). Upon inspection of the
underfloor condition, some openings were observed in the perimeter wall. Although
measurements were not taken, the size and presence of these openings could account
for the 7359 cfm (208 m3/min) difference between the calculated value and that
measured at the CRAHs.
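The energy-balance check can be written out explicitly. The sketch below uses assumed sea-level air properties; because the study does not state the constants it used, the resulting airflow differs somewhat from the 92,888 cfm reported, but the method is the same:

```python
# Energy-balance estimate of the airflow implied by the heat load and the
# CRAH temperature difference. Air properties are assumed (sea-level values);
# the study does not state the constants it used, so the result differs
# somewhat from the 92,888 cfm reported.
heat_load_w = 504_000              # total PDU load
supply_f, return_f = 52.8, 72.8    # CRAH average supply and return temperatures
delta_t_k = (return_f - supply_f) * 5.0 / 9.0

rho = 1.2      # kg/m3, assumed air density
cp = 1006.0    # J/(kg*K), assumed specific heat of air

flow_m3_s = heat_load_w / (rho * cp * delta_t_k)
flow_cfm = flow_m3_s * 2118.88     # m3/s to cfm

print(f"Airflow implied by the energy balance: {flow_cfm:,.0f} cfm")
# Compare with 97,641 cfm from the opening tally and 105,000 cfm measured
# at the CRAH returns.
```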
SUMMARY
This case study provides a review of a high density data center with a cooling
design using a raised-access floor supply plenum and a return air ceiling plenum.
The study makes use of information available to most data center operators, includ-
ing detailed power and temperature readings at the cabinet and device level, static
pressure measurements, and airflow readings. By understanding where the highest
cooling capacity will be in a data center, the operator can design his or her hardware
layout to take advantage of high density installations.
The case study of the Georgia Institute of Technology Razor HPC cluster at the
Center for the Study of Systems Biology demonstrates a solution for two challenges:
space utilization and cooling. A water-cooled, rack-level rear-door heat exchanger (RDHx)
was deployed to help create a very high density (300 W/ft2 [3.2 kW/m2]) cooling solu-
tion within an existing facility where significant cooling limitations existed. In effect,
the RDHx solution allowed for the creation of an area with cooling density ten times
greater than the capabilities of the rest of the facility.
Figure 2.46 Original cooling plan (traditional raised floor cooling; non-high
density).
cabinet. In this manner, the square footage required to house and cool the cluster was
reduced to an optimal 1000 ft2 (93 m2). Removal of such a large amount of heat from
the room airstream significantly reduced the amount of air movement necessary for the
cooling solution, thereby reducing noise and discomfort and mitigating the second
challenge. Finally, the facility had four surplus 20 ton (240 kBtu/h)
CRACs that could provide exactly the amount of sensible air-side cooling required
with N+1 redundancy. This helped alleviate the final concern regarding the implemen-
tation schedule. Figure 2.47 shows the final floor layout, which requires only about
1000 ft2 (93 m2). The blade racks in Figure 2.47 are the six exterior racks on either side
of the four support hardware racks.
The entire high density cluster area was completely segregated from the remain-
der of the data center below the raised-access floor. This, along with the general
layout of the key components of the cooling solution, further optimized the cooling
solution in two ways. First, a very high static pressure was generated at the perfo-
rated tile locations, shown as a series of quartered tiles at the bottom of Figure 2.47.
Air was directed below the raised-access floor in the direction indicated by the
arrows on the four air-conditioning units shown at the top of the figure. By parti-
tioning the entire subfloor area, a dead-head situation was created in the perforated
tile area that maximized static pressure and airflow rates. Second, because the
CRACs were located in such close proximity to the rack exhausts, direct return of
warm air to the unit intakes was ensured to optimize unit efficiency. Finally, the hot-
aisle/cold-aisle principle was taken to the extreme: a wall completely separating
the warm and cold sides of the cluster, shown as the thick black line in Figure 2.47,
guaranteed an absolute minimum of warm air recirculation, a problem that plagues
many modern-day data centers. Transfer air ducts were included in the ceiling
between the cold aisle and hot aisle to prevent excessive pressurization. The CRAHs
can supply more air than the blade servers will typically require (especially when
four units are operating); therefore, bypass ducts were created to keep the CRAHs
operating at maximum airflow.
Table 2.18 presents a comparison of key parameters between the original
planned cooling solution and the hybrid solution that was ultimately implemented.
It is clear that the introduction of a water-based rack option helped to create the
desired showcase facility, with minimal floor space and air movement. The savings
are quantified in the form of air-conditioning hardware savings and space savings
(assuming that build-out of additional raised-access floor space would be required).
A fringe benefit of this solution was additional savings in the form of reduced operating
costs. The overall efficiency of transferring heat with water is higher than with air, and
the annual savings indicated assume an energy cost of $0.08 per kWh.
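The operational savings follow from a simple annual energy calculation. In the sketch below, the power reduction is a hypothetical placeholder; only the $0.08 per kWh rate comes from the text:

```python
# Annual operating-cost savings from a reduction in continuous cooling power.
# The 25 kW reduction is a hypothetical placeholder; only the $0.08/kWh
# energy rate comes from the case study.
power_reduction_kw = 25.0
hours_per_year = 8760
rate_per_kwh = 0.08

annual_savings = power_reduction_kw * hours_per_year * rate_per_kwh
print(f"Annual savings: ${annual_savings:,.0f}")   # about $17,500 per year
```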
middle, and right at six different heights). The total number of temperature measure-
ments for each support rack was 36.
For the CRAHs, the inlet air temperatures were measured at 18 points across the
intake and then averaged. The discharge temperatures were recorded from the
CRAH control panel displays. The airflow rates were measured using a flowhood
positioned atop the inlet.
The average temperature for each region of each blade server rack is shown in
Table 2.19. Equipment inlet temperatures were very cool at 57.5°F to 60.4°F (14.2°C
to 15.8°C). Note that each number in Table 2.19 is an average of 18 measurements.
At the equipment inlets, the 18 measurements were typically very uniform (i.e., 97%
of the measurements were within 0.5° of the average). The furthest outlier was 3.8°F
(2.1°C) above the average, which indicates that this cooling method was effective at
preventing hot-air recirculation over the tops and around the sides of the racks.
As the air exited the server blade chassis, it was significantly hotter at 86.9°F
to 92.1°F (30.5°C to 33.4°C). As this heated air passed through the RDHXs, a
portion of the heat was transferred to chilled water, allowing the air to exit the heat
exchangers at more moderate temperatures in the range of 70.5°F to 72.9°F (21.4°C
to 22.7°C).
Calculations for temperature change and energy are shown in Table 2.20. The
temperature rise across the server blades ranged from 27.4°F to 32.9°F (15.2°C to
18.3°C), and the temperature drop across the RDHXs ranged from 16.2°F to 20.7°F
(9.0°C to 11.5°C). The airflow through each rack was estimated to be 1530 cfm
(43.3 m3/min), based on the blade server manufacturer’s reported value of 255 cfm
(7.2 m3/min) per blade chassis. If temperature rise and airflow are known, then the
heat, Q, produced by the blade servers can be calculated in watts as
Q = cfm ⋅ ρ ⋅ Cp ⋅ ΔT (with the airflow converted from cfm to m3/s), where ρ = 1.25 kg/m3 and Cp = 1003 J/kg·K. Likewise, the
heat removed by the RDHXs can be calculated using the same formula, with ΔT
being the temperature drop across the rear door. The RDHXs removed 57% to 63%
of the blade heat load.
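A worked version of that calculation for one blade rack, using the rack airflow and the midpoints of the temperature ranges quoted above, lands within the 57% to 63% range:

```python
# Heat produced by one blade rack and heat removed by its rear-door heat
# exchanger, per Q = airflow * rho * cp * delta-T. The midpoints of the
# measured temperature ranges are used, so the result is representative
# rather than exact.
rho = 1.25             # kg/m3 (value used in the case study)
cp = 1003.0            # J/(kg*K) (value used in the case study)

rack_cfm = 1530                        # 6 chassis x 255 cfm per chassis
airflow_m3_s = rack_cfm * 0.000471947  # cfm to m3/s

blade_rise_k = 16.8    # midpoint of the 15.2-18.3 C rise across the blades
door_drop_k = 10.25    # midpoint of the 9.0-11.5 C drop across the RDHx

q_blades_kw = airflow_m3_s * rho * cp * blade_rise_k / 1000.0
q_rdhx_kw = airflow_m3_s * rho * cp * door_drop_k / 1000.0

print(f"Blade rack heat load: {q_blades_kw:.1f} kW")
print(f"RDHx heat removal:    {q_rdhx_kw:.1f} kW "
      f"({100.0 * q_rdhx_kw / q_blades_kw:.0f}% of the rack load)")
```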
Temperature data for the support racks are listed in Table 2.21. Equipment inlet
temperatures were similar to those measured for the blade racks, which is expected.
The temperature rise across the support equipment was low compared to the blades,
which confirms the decision to leave the heat exchangers off the support racks.
Information for the CRAHs is provided in Table 2.22. Here, ΔT is the difference
between inlet and discharge. Heat removal is calculated using the same formula as
before. The total heat removal using this analysis is 89.8 kW. For reference, the design
airflow for the three nominal 20 ton units was 30,000 cfm (849.5 m3/min).
The equipment inlet temperatures, as shown in Tables 2.19–2.21, were below
the ASHRAE recommended range of 68°F–77°F (20°C–25°C). This suggests that
the setpoint for the CRAH discharge temperature could be raised a few degrees. This
would raise the equipment inlet temperatures and, in turn, raise the equipment outlet/
heat exchanger inlet temperatures. The result would be a higher percentage of heat
removed by the RDHXs because that mode of heat transfer is strongly influenced by
the air/water temperature differential. Recall that it is more efficient to transfer heat
with water than with air.
A summary of the rack heat loads and the heat removal calculations is provided
in Table 2.23. The actual heat loads at the time of the study were 67% of the planned
loads. The heat removed by the CRAHs is calculated by subtracting the heat
removed by the RDHXs from the actual heat loads. The resulting total for CRAH
heat removal is in close agreement with the figure calculated in Table 2.22, which
was based on mass flow rate and ΔT.
SUMMARY
Increasing heat densities and the desire to pack more computing power into
smaller spaces created a number of challenges for deploying a powerful supercom-
puter at the Georgia Institute of Technology Center for the Study of Systems Biol-
ogy. The facility was required to be of showcase quality, with fully utilized floor
space, as well as minimal discomfort from noise and air movement. A hybrid cooling
solution featuring a water-based rear-door heat exchanger proved to be the most
effective way to create an optimal solution within the parameters given. The device
is capable of removing 50% to 60% of the heat load within a rack, allowing for maxi-
mum packing density for the blades in the cluster and an optimal floor space require-
ment of 1000 ft2 (93 m2). The total requirement for air conditioning was cut roughly
in half, minimizing cooling hardware and air-moving requirements. This solution
will serve as an effective model for how end users can achieve high density cooling
solutions as they transition from today’s data center facilities to future designs.
Table 2.23 Summary of Rack Heat Loads and Heat Removal Calculations

                                      Blade Racks,   Support Racks,   Total,
                                      kW             kW               kW
Planned heat load (from Table 1)      278.0          21.0             299.0
Actual heat load                      186.4*         14.0**           200.4
Actual heat removed by RDHX           112.4*         0.0              112.4
Heat removed by CRACs                 74.0           14.0             88.0

* From Table 2.20
** Reported by equipment
the horizontal heat exchanger and then sucked back down by the fans, providing
a two-pass cooling solution. The idea was further analyzed in detail and opti-
mized using CFD. For example, the ratio between the open coil area and the area
covered by the fan trays, as well as the optimum distance between the top of the
coil and the top of the enclosure, were optimized on prototypes using both theo-
retical calculations and practical measurements.
The fan-coil prototype, with dimensions of 5.9 ft (1.8 m) wide, 5.9 ft (1.8 m)
deep, and 1.6 ft (0.5 m) high, consisted of an enclosure, an air-to-fluid coil, fan trays
with axial fans, and an electrical module. The air passed through the coil twice, first
upward and then down and through the fans. Since the two fan trays can be moved
horizontally, the cold air can be directed toward the air inlet of the equipment,
depending on the location of the cold aisle (see Figure 2.51). For testing, DC fans
with a total maximum capacity of approximately 7000 ft3/min (12000 m3/h) were
chosen because they were easy to control; AC fans were available as an alternative.
In the room, nine prototype fan-coils were installed and hydraulically connected
(see Figure 2.55). The figure shows one of the three fluid circuits and the interleaved
layout of the fan-coils to aid in redundancy. Each coolant distribution unit (CDU)
was connected to three fan-coils. If there was a failure of one CDU, the remaining
two provided the cooling. The room had typical office walls on the intake side of the
racks, with a surrounding temperature of approximately 68°F (20°C). The floor
in the test room was a solid raised-access floor with underfloor air at approximately
55°F (13°C). However, during the hybrid testing, perforated floor tiles were installed
to study the impact of running DataCool™ together with traditional raised-access
floor cooling.
System Construction
The CDU houses the heat exchanger between the system fluid and the building
chilled fluid, the control valve, the dual redundant pumps, and the system controls.
The CDU was typically housed in the facility mechanical room and controlled all
aspects of the system so that the room temperature was kept constant. Because
the fluid temperature was never allowed to go below the dew point, the fan-coils
performed only sensible cooling. Each of the fluid circuits also had a fluid
management system to detect any fluid flow disturbance and immediately evacuate
the fluid in the circulation loop, if necessary.
MEASUREMENT TOOLS
In order to evaluate the cooling capacity of the room, temperature probes were
located at four different heights (1.57 ft [0.48 m], 3.1 ft [0.94 m], 4.6 ft [1.4 m] and
6.1 ft [1.85 m] above the floor) at the air intake for each rack monitored. In addition,
room temperatures, fluid temperatures, and air humidity were monitored during the
test/verification.
Capacity Testing
For a certain airflow through each fan-coil, the electric heat loads in the racks
were sequentially turned on until temperatures reached a specified maximum air
temperature at any of the 72 temperature probes. At that point, the heat capacity was
determined by measuring the amp draw to the electric heaters in the racks. The heat
load capacity of the room was defined as the load at which one of the inlet temperatures
exceeded the specified 86°F (30°C) limit.
The capacity testing was performed with several basic equipment configura-
tions (see Figure 2.56).
Brief Test Results. Table 2.24 shows the maximum heat load cooled by the
system when the maximum inlet temperature to any of the 18 racks, at any height,
reached 86°F (30°C).
Failure Modes
To understand the resiliency of the technology, several failure modes were
simulated during the test.
Figure 2.57 shows which units were turned off as a function of location during each test.
Brief Test Results. Table 2.25 shows the temperature difference between the
maximum inlet temperature to any of the 18 racks, at any height, at normal condi-
tions, and the maximum inlet temperature to any of the 18 racks, at any height, after
one hour of failure mode. The heat load was 286 W/ft2 (3080 W/m2), and the approximate
airflow for each fan-coil in operation was 6474 ft3/min (11,000 m3/h).
Note that the loss of one unit had minimal impact on the temperatures in the
room, while a loss of 33% of the cooling had a much larger impact.
Figure 2.57 Top view indication of fan-coils turned off during failure mode testing.

Transient testing was done for different heat loads to collect empirical data for
comparison with theoretical calculations. Since the transient values are logged with
the cooling system off, the data are more dependent on the test room and the equip-
ment in the room than on the cooling system itself. A summary of the results can be seen
in Figure 2.58.
Gradients
Gradients were logged during some of the tests in order to verify the thermal
behavior of the cooling system in the room.
Brief Test Results. Figure 2.59 shows the maximum difference in temperature
for four vertical points in the room for a hot-aisle/cold-aisle equipment configuration
and 3080 W/m2 heat load. The approximate airflow for each fan-coil was 4708 ft3/
min (8000 m3/h). Temperatures in bold are for points 1.57 ft (0.48 m), 3.1 ft (0.94
m), 4.6 ft (1.4 m) and 6.1 ft (1.85 m) above the floor. Temperatures in Roman type
are for points 0.16 ft (0.05 m), 3.0 ft (0.9 m), 6.0 ft (1.83 m), and 8.0 ft (2.44 m) above
the floor.
Hybrid System
The capacity of the existing DataCool system together with the raised-access floor
was tested in two steps. First, the capacity of the existing raised-access floor was tested
with as many perforated floor tiles as could practically be installed. Next, the combined
capacity of the existing raised-access floor and DataCool system was tested.
Brief Test Results. Table 2.26 shows the heat load when the maximum inlet
temperature to any of the 18 racks at any height reached 86°F (30°C) for raised-
access floor and for a hybrid of raised-access floor and DataCool together. The
approximate airflow for each fan-coil was 4708 ft3/min (8000 m3/h).
Note that the hybrid test shows that the two cooling schemes can coexist to boost
capacity even further, in this case by 317 Btu/h·ft2 (1 kW/m2).
as shown by the arrows labeled 1–4 in Figure 2.60. Each recirculating opening pair
was assigned a flow of 600 cfm (0.28 m3/s). Alternating pairs were assigned a heat
load of either 0 or 12,284 Btu/h (3600 W) such that only two compartments within
a rack were at full power, and each rack was dissipating 24,567 Btu/h (7200 W). The
rack, thus defined, was arrayed across the room with geometry as defined in the proto-
type data center.
The DataCool heat exchangers were modeled as shown in Figure 2.61. The
appropriate heat transfer attributes were assigned to simulate the performance of the
real heat exchangers based on earlier characterization tests. The heat transfer attri-
butes are identified in terms of the effectiveness, ε, of the cooling coil. The following
are the key attributes:
Q_hex = ε ⋅ (mcp)_min ⋅ (T_h,in – T_c,in)

where ε is the effectiveness of the heat exchanger, mcp is the capacity rate of the fluid
(mass flow rate times specific heat), the subscript min refers to the fluid (hot air from the
room or cooling fluid) with the minimum capacity rate, T_h,in is the inlet temperature of
the hot air, and T_c,in is the inlet temperature of the cooling fluid. In this example, the hot
air from the room, drawn through each heat exchanger, is the stream with the minimum
capacity rate.
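A brief numerical illustration of the effectiveness relation is given below. All of the inputs are plausible placeholders rather than measured values from the prototype room; the airflow is simply on the order of the roughly 8000 m3/h per fan-coil used in the tests.

```python
# Effectiveness-based estimate of the heat removed by one fan-coil.
# All inputs are illustrative assumptions, not measured values.
effectiveness = 0.7            # coil effectiveness, assumed
air_flow_m3_s = 2.2            # hot air drawn through the coil (~8000 m3/h)
rho_air, cp_air = 1.2, 1006.0  # assumed air properties
t_hot_in_c = 35.0              # hot air entering the coil, assumed
t_cold_in_c = 13.0             # cooling fluid entering the coil, assumed

# The hot-air stream is the minimum-capacity fluid in this example.
c_min_w_per_k = air_flow_m3_s * rho_air * cp_air
q_hex_kw = effectiveness * c_min_w_per_k * (t_hot_in_c - t_cold_in_c) / 1000.0

print(f"Heat removed by the coil: {q_hex_kw:.1f} kW")
```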
Figure 2.62 is an image of the DataCool CFD model, and Figure 2.63 compares
the DataCool model to the actual deployment in the prototype data center. The heat
exchanger’s three-dimensional geometry is created using the Enclosure, Cuboid,
and Volume Resistance object types in Flovent (Flometrics 1999). A recirculating
opening is applied with the following heat exchanger characteristics:
The racks and heat exchangers, thus defined, are arrayed across the room to
form the room model as shown in Figure 2.64. The room is modeled as adiabatic with
no-slip boundary conditions (Flometrics 1999). A grid is applied across the model
as a last step in preprocessing.
Figure 2.63 Image of the heat exchanger modules in the prototype data
center.
by all the heat exchangers should be equal to the heat dissipated in the room. Such
an energy balance was used as a preliminary check of the modeling.
Results
The simulation results are compared with measurements obtained from the proto-
type data center. Figure 2.65 is a plan view of the room. Locations in which compar-
isons were made are given numerical designations (circled). In addition, racks are
labeled for a subsequent comparison. Heights of 0.16 ft (0.05 m), 3.3 ft (1.0 m), and
6.6 ft (2.0 m) from the floor are compared between measured and modeled data.
Figures 2.66–2.68 display the results of the plan view comparisons at the indi-
cated heights. At Y = 0.16 ft (0.05 m), both the experimental and numerical results
show hot spots in the room in areas 1–5 and 10. Thermal gradients within the room
are also in general agreement, with absolute values showing less agreement. Loca-
tions 7 and 11, in particular, exhibit disagreement in absolute value as well as
trends. Similar results are observed at Y = 3.3 ft (1.0 m) with discrepancies in abso-
lute values instead occurring at points 2 and 6. Similar agreement is shown at Y =
6.6 ft (2.0 m). The primary areas of disagreement between the numerical and
experimental results are most likely a result of simplifications made to the model
in combination with the removal of incidental physical detail, such as tubing and
ceiling members. A major result of the analysis, which is also in agreement with
the experiment, is that the portion of the room opposite the door (locations 1–6 and
10 in Figure 2.65) is hotter than that near the door, especially near the floor. This
is due to the asymmetric spacing of the rack rows in the Z direction and may not
be obvious without the aid of analysis by the individual responsible for designing
the room layout and cooling infrastructure.
Figure 2.69 compares selected experimental and numerical inlet temperatures.
Rather than report inlet data for all 72 compartments, potential problem areas were
identified (Figures 2.66–2.68) by looking for hot spots in the plan views. Correspond-
ingly, inlet temperatures were examined in regions where hot spots were found. In
Figure 2.69, components within a rack are numbered from bottom to top (position 1
is the bottom-most component, position 4 the top-most). Results indicate that the
simulation adequately captures the experimental pattern. Both inlet temperatures and
rack-level thermal gradients correlate well with the experiment and accurately indi-
cate where improvements in the thermal management of the room can be made.
SUMMARY
This case study outlines the work done in Hewlett-Packard’s Richardson data
center for high density cooling. The work demonstrates some important points:
1. Over 557 W/ft2 (6000 W/m2) was achievable with racks at 49,000 Btu/h (14.4 kW)
using overhead cooling with some underfloor makeup air. At the time, this was
unprecedented density and proved that high density cooling is viable.
2. The apparent capacity of the data center can be improved even with overhead
cooling by transitioning from all servers facing the same way to a hot-aisle/
cold-aisle configuration. The apparent cooling capacity went from 286 W/ft2
(3080 W/m2) to 470 W/ft2 (5060 W/m2). This was a 64% improvement by
using industry hot-aisle/cold-aisle best practices.
3. Failure of cooling in high density applications will result in a rapid increase in
temperature, which must be accounted for in the backup strategy.
4. This work also validated the use of computational fluid dynamics in data center
environments.
REFERENCES
Bash, C.B. 2000. A hybrid approach to plate fin-tube heat exchanger analysis.
Proceedings of the International Conference and Exhibition on High Density
Interconnect and Systems Packaging, Denver, Colorado, pp. 40–8.
Flometrics. 1999. Flovent version 2.1. Flometrics Ltd., Surrey, England.
Patel, C., C. Bash, C. Belady, L. Stahl, and D. Sullivan. 2001. Computational fluid
dynamics modeling of high density data centers to assure systems inlet air
specifications. Proceedings of InterPACK 2001 Conference, Kauai, Hawaii.
Stahl, L. 1993. Switch room cooling—A system concept with switch room located
cooling equipment. Proceedings of INTELEC 1993, Paris, France.
Stahl, L., and C. Belady. 2001. Designing an alternative to conventional room
cooling. Proceedings of the 2001 International Telecommunications Energy
Conference, Edinburgh, Scotland.
Stahl, L., and H. Zirath. 1992. TELECOOL, A new generation of cooling systems
for switching equipment. Ericsson Review 4:124–92.
This case study highlights the importance of creating a physical barrier between
cold supply (see Figure 2.70) and hot return air on the data center floor. A conven-
tional hot-aisle/cold-aisle rack configuration in data centers has worked well when
rack power loads are low—typically less than 4 kW per rack. However, with increas-
ing rack loads, excess cold air must be supplied to the cold aisle to reduce hot spots
near the top of the racks that result from hot air diffusing into the cold aisle. A large
fraction of the excess cold air also bypasses the electronic equipment and returns directly
to the air-conditioning units. This practice is energy inefficient; it increases fan energy
use and requires more energy to produce colder air. A physical barrier between hot
and cold air streams within a data center is needed to avoid mixing of cold air with
hot air. A new approach to physically separate the cold and hot air streams within a
rack was selected and implemented in a high power density section of a large data
center. The selection was based on energy as well as other practical considerations.
This case study discusses the high-density rack hot-air containment approach—the
rationale for its design and its advantages and limitations—and presents data on its
energy performance.
The field performance of the high power density section of a large enterprise-
level data center is reported here. This data center was constructed in phases. The
first phase used the then state-of-the-art practice of hot-aisle/cold-aisle rack arrange-
ment on the data center floor. The chilled-water CRACs are located on both ends of
the rack aisles and supplied cold air under the raised floor. Supply air is delivered
to cold aisles through perforated tiles in front of the equipment racks. Hot air from
the equipment racks is discharged into the hot aisle, from which it is drawn back into
the CRAC units. Our field observations confirmed the increased supply air temper-
atures to equipment near the top of the racks since the discharged air from hot aisles
was being drawn back into the cold aisles. We also noticed that some of the cold air
did not go through the electronic equipment but was instead drawn directly back to
the CRACs without providing any cooling. A further review of data for total airflow
from the CRACs and total airflow required for the electronic-equipment cooling
indicated the CRACs supplied far more air than was required.
The data center design and operations team decided to address the above issues
during the design and construction of the second-phase expansion of the data center
in 2003. The expansion included a 12,000 ft2 (1115 m2) high power density area
with an average rack power of 6.8 kW for 640 racks, or an average power
load of 170 W/ft2 (1830 W/m2). The team decided to increase the cooling energy effi-
ciency by reducing unnecessary airflow on the data center floor; by supplying
enough cold air to match the airflow requirements of the electronic equipment, the
airflow rates would be reduced to less than one-half those of the existing system.
In order for us to adjust airflow demand to electronic equipment, which would vary
as new equipment was brought in or older equipment was removed, we decided to
install variable-speed drives on the CRAC fans. However, CFD modeling showed
that slowing the airflow increased the risk of hot-air infiltration in the cold aisle,
causing high supply air temperatures, especially near the top of the rack and racks
on the end of the aisle. The high temperatures, in turn, required even colder supply
air temperatures, thus negating some of the energy efficiency gains from the airflow
reduction. The team decided to create a physical barrier between the hot and cold
air on the data floor to prevent mixing. This also allowed us to raise our supply
temperature without concern for the unacceptably high temperatures near the top of
the rack that hot-air infiltration would cause without physical separation. The
team considered different arrangements of barriers between hot and cold air, such
as cold-aisle containment and hot-aisle containment, but elected to use a rack enclo-
sure with a discharge duct to ceiling return air plenum.
The system worked very well and provided excellent energy savings with
measured simple payback of less than six months. The fact that the CRAC fan speeds
could be adjusted to meet the equipment load requirement, and that the equipment in the
data center was loaded gradually over a period of time, meant that the fans could run
more slowly. This reduced the direct energy use and indirect cooling energy use
required to remove heat generated by fans.
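The fan savings follow the familiar fan affinity laws, in which airflow scales roughly with fan speed and fan power with roughly the cube of speed. A minimal sketch, with a hypothetical nameplate fan power, illustrates why slower fans cut energy use so sharply:

```python
# Fan affinity-law estimate of the savings from slowing the CRAC fans:
# airflow scales roughly with speed, fan power roughly with speed cubed.
# The nameplate fan power is a hypothetical value, not one reported here.
full_speed_fan_kw = 7.5
speed_fraction = 0.6           # fans slowed to 60% of full speed

reduced_fan_kw = full_speed_fan_kw * speed_fraction ** 3
print(f"Airflow delivered: {speed_fraction:.0%} of full flow")
print(f"Fan power: {reduced_fan_kw:.1f} kW "
      f"({reduced_fan_kw / full_speed_fan_kw:.0%} of full-speed power)")
```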
The system utilizes a raised-access floor air supply and hot-air containment
through a rack exhaust duct to a ceiling return plenum. A supplemental fan in the
exhaust duct aids in overcoming additional pressure loss. The floor layout is divided
into five 2400 ft2 (233 m2) sections. For this study, the environmental attributes—
rack power loads, floor-tile flow rates, IT equipment flow rates, and IT equipment intake
and exhaust air temperatures—for one of the five 2400 ft2 (233 m2) sections were
collected and analyzed. Spot measurements were also taken from the four other
2,400 ft2 (233 m2) sections, which reinforced the reported data for the first section.
Figure 2.71 One of five high-power density areas showing rack layout,
CRAC locations, and floor tile locations. Tiles shown in the
darker shade represent floor grate locations.
the heat containment exhaust duct. The rack widths are 2 ft (0.61 m), rack heights
are 7.33 ft (2.24 m), and rack depths are 42 in. (1.07 m). The 2 ft (0.61 m) floor pitch
is maintained with a 3 ft (0.91 m) aisle for air delivery and a 4 ft (1.22 m) aisle
between the rears of the racks. With some variation, each rack has a typical config-
uration of servers and a single small network switch. The racks have solid rear doors
with solid bottom and side panels and cable seals to eliminate cool air bypass into
the warm exhaust/return airstream. A 20 in. (0.51 m) round duct on top of each rack
connects to the return air plenum. It contains a 16 in. (0.406 m) low-power variable-
speed fan (maximum 90 W) with a maximum airflow rate of approximately 1600
cfm (45.3 m3/min).
The raised-access floor height is 30 in. (0.76 m), and there are no significant
underfloor blockages to consider for this study. Measured flow rates from the open
floor tiles and grates confirm good underfloor pressure distribution.
The return air plenum is located 12 ft (3.66 m) above the raised-access floor and
is constructed using standard drop-ceiling materials with a 24 in. (0.61 m) grid
pattern to match the floor tiles. The roof structure is 25 ft (7.62 m) above the raised-
access floor. There is an additional fire-rated drop ceiling 5 ft (1.52 m) below the roof
structure, leaving a return air plenum height of 8 ft (2.44 m), as shown in Figure 2.72.
During a utility failure, balancing vents installed in the vertical wall of the ceiling
plenum allow air to exit the plenum into the open area of the data center before
returning to the CRACs. This longer return path extends cooling through increased
thermal mass and thus extends critical operation during a utility failure.

Figure 2.72 Rack layout with exhaust duct from racks to ceiling plenum.
The 40 ton (140 kW) variable-speed chilled-water CRACs are connected to the
return air plenum using extensions, as shown in Figure 2.72. The CRAC temperature/
humidity sensor was removed from the typical return air location in the CRAC and
moved to the underfloor supply air plenum approximately 10 ft (3.05 m) in front of the
CRAC. The CRAC control was changed from return air temperature setpoint control
to supply air temperature setpoint control. The setpoint for the CRAC supply temper-
ature is currently 68°F (20°C). The CRAC RH sensing mode was changed from relative
(direct) to absolute (predictive) to allow more stable control of RH. The CRAC variable-
frequency drives (VFDs) in each section maintain the underfloor pressure at a setpoint of
0.04 in. (1.016 mm) w.c. The CRACs are controlled by a BMS using differential pres-
sure sensors located under the floor and operated on a proportional-integral-derivative
(PID) control loop for steady operation. A separate central air-handling system provides
makeup air, facility pressurization, and humidity control.
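The control sequence described above, a BMS loop that modulates CRAC fan speed through the VFD to hold an underfloor differential-pressure setpoint, can be sketched in a few lines. The fragment below is illustrative only, not the case-study BMS logic; the gains, bias, and speed limits are assumed values chosen to show the structure of such a loop.

```python
# Illustrative PID loop holding underfloor static pressure by modulating CRAC VFD
# speed. Setpoint, gains, bias, and limits are assumptions, not the BMS values.

SETPOINT_IN_WC = 0.04            # underfloor pressure setpoint, in. w.c.
KP, KI, KD = 400.0, 40.0, 5.0    # assumed PID gains, % speed per in. w.c.
MIN_SPEED, MAX_SPEED = 40.0, 100.0   # VFD speed limits, % of full speed
BIAS = 85.0                      # assumed nominal operating speed, %

def pid_step(measured_in_wc, state, dt=5.0):
    """One control step; 'state' carries the integral and the last error."""
    error = SETPOINT_IN_WC - measured_in_wc
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    output = BIAS + KP * error + KI * state["integral"] + KD * derivative
    return max(MIN_SPEED, min(MAX_SPEED, output))

state = {"integral": 0.0, "last_error": 0.0}
for pressure in [0.045, 0.042, 0.040, 0.038, 0.040]:   # sample readings, in. w.c.
    speed = pid_step(pressure, state)
    print(f"pressure {pressure:.3f} in. w.c. -> fan speed {speed:.1f}%")
```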
MEASUREMENT INSTRUMENTS
Power measurements were taken using a Fluke 434 three-phase power quality
analyzer. Airflow measurements were collected using a Lutron AM-4204A hot-wire
anemometer and a 0–0.25 in. (0–6.35 mm) WC Magnahelic® differential pressure
gauge. Temperature measurements were collected using an Extech model 421501
hand-held thermometer utilizing a type K thermocouple.
MEASUREMENT METHODOLOGY
Power
Rack power was measured at the 208 V three-phase branch circuit breakers.
Rack power is dual fed, so a simple addition of the A and B feed power results in the
total rack power. Incidentally, power measurements collected were, on average, 60%
of the power reported by the hardware nameplate data.
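As a simple illustration of this methodology, the sketch below sums the A and B feed powers for a dual-fed rack and compares the total to the nameplate sum. The current readings, power factor, and nameplate value are hypothetical and are not taken from the measured data set.

```python
import math

def feed_power_kw(volts_ll, amps_per_phase, power_factor=0.95):
    """Three-phase power from a 208 V branch-circuit reading (assumed power factor)."""
    return math.sqrt(3) * volts_ll * amps_per_phase * power_factor / 1000.0

# Hypothetical spot readings for one dual-fed rack
p_a = feed_power_kw(208, 9.5)     # A feed
p_b = feed_power_kw(208, 8.8)     # B feed
rack_kw = p_a + p_b

nameplate_kw = 11.2               # hypothetical sum of nameplate ratings
print(f"measured rack power: {rack_kw:.2f} kW")
print(f"fraction of nameplate: {rack_kw / nameplate_kw:.0%}")
```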
Temperature
Temperature measurements were collected at three vertical locations 2 in. (51 mm) away
from the IT equipment intake grills. IT equipment exhaust temperatures were spot
checked and compared to the averaged rack exhaust temperature. In most cases, the
average rack exhaust temperature was a few degrees lower than the hottest IT equip-
ment exhaust temperature. Understanding that the IT equipment exhaust grills vary
in location and size for the different functions of the equipment, resulting in a variety
of temperatures and airflow rates, the slight drop in rack exhaust temperature as an
aggregate made sense and was accepted. Comparing supply air temperatures with rack
inlet temperatures gives a good indication of hot-air recirculation, and comparing IT
hardware exhaust temperatures with CRAC return temperatures gives good insight into
bypass air conditions.
Airflow
Floor tile supply air measurements were collected using a hot-wire anemometer
across a traverse plane and were compared to a measured underfloor pressure at the
tile and the corresponding flow rate as supplied by the tile manufacturer. The hot-
wire readings were collected and averaged using the equal-area method of traversing
a 2 by 2 by 4 ft (0.61 by 0.61 by 1.22 m) tall exhaust duct placed over the floor
tile. Rack airflow rates were calculated using the heat-transfer equation from
measured total rack load and temperature rise. The rack airflow rates were balanced
to a slightly negative pressure to ensure the rack flow rate closely matched that of
the IT equipment. This ensured little-to-no bypass air would dilute the rack
temperature rise reading. An attempt was made to validate the IT equipment airflow
rates using the hot-wire anemometer across a traverse plane created with a 17 by 7
and 24 in. (43.18 by 17.78 and 60.96 cm) intake duct placed over and sealed around
two server face plates. The velocity measurements were too unstable to measure
accurately and, in the physical space, it was not possible to extend the traverse plane
duct. With the rack exhaust fan airflow rate equivalent to the aggregate server airflow
rates for the rack due to a zero or slightly negative differential pressure in the rear
rack plenum, there was confidence in the single method of data collection.
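For reference, the heat-transfer relation used to back out rack airflow from measured load and air temperature rise can be written, in I-P units, as Q[Btu/h] = 1.08 × cfm × ΔT[°F], so cfm ≈ 3.16 × W/ΔT. The short sketch below applies it to a hypothetical rack; the load and temperature rise shown are examples only, not measured values.

```python
def rack_airflow_cfm(rack_load_kw, delta_t_f):
    """Airflow implied by the sensible heat balance Q[Btu/h] = 1.08 * cfm * dT[F]."""
    q_btuh = rack_load_kw * 3412.0
    return q_btuh / (1.08 * delta_t_f)

# Hypothetical example: a 6.8 kW rack with a 25 F (13.9 C) air temperature rise
cfm = rack_airflow_cfm(6.8, 25.0)
print(f"{cfm:.0f} cfm ({cfm * 0.0283:.1f} m3/min)")   # ~859 cfm (24.3 m3/min)
```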
MEASURED DATA
Figures 2.73–2.82 provide a comparison of power, temperature, and airflow for
the four selected rows within the high density section. For each of the four rows, the
data graphs will first provide a comparison of rack power and rack intake/exhaust
temperatures. The CRAC return temperatures are included in this section for the
purpose of understanding any dilution of the air temperature as it left the equipment
rack. The next comparison features tile airflow rates and rack airflow rates, including
a total aggregate of the tile supply and rack flow. Finally, the CRAC fan speed and
power usage are measured and recorded and compared to CRAC full-on power
consumption (see Table 2.27).
ANALYSIS
Airflow Distribution
Well-distributed airflow through the floor tiles is attributed to CRAC fan control
using VFDs to maintain a consistent underfloor pressure and supply air temperature.
Balancing the volume of air delivered to optimize cooling provisioning was accomplished
by adjusting the quantity of 56% open floor grates. Fourteen grate tiles and six perforated tiles allowed
the CRACs to operate at 51 Hz or 86% fan speed to deliver 63,684 cfm (1803 m3/
min), providing a 6% oversupply. More work is required to balance all rows and
sections within the high density area to achieve near unity balance.

Figure 2.73 Underfloor supply air volume flow. Gray bars: 25% open floor
tiles; black bars: 58% open floor tiles.

Figure 2.74 CRAC supply and return-air temperatures. Gray bars: supply
air temperature; black bars: return air temperature.

The cold row orig-
inally had six perforated tiles and fourteen grate tiles, which provided 44,000 cfm
(1246 m3/min) for the two aisles analyzed. The supply rate was not sufficient for the
IT equipment load, which required 59,877 cfm (1696 m3/min), leaving a shortage of
16,000 cfm (453 m3/min). In a conventional cooling arrangement, this 26% under-
provisioning of cooling would have caused the inlet temperatures to exceed recom-
mended standards; however, the inlet temperatures continued to stay below 77°F
(25°C) due to the heat containment system. Using all 56% grates provided 82,000 cfm
(1096 m3/min) for the two aisles analyzed. The supply rate exceeded the IT equip-
ment load requirement, delivering an oversupply of 12,000 cfm (340 m3/min) or 36%
overprovisioning. CRACs for this arrangement were operating at full speed to main-
tain the setpoint for underfloor pressure. (See Figure 2.83.)
Figure 2.83 Row airflow supply and rack airflow return aggregate.
(22°C), demonstrated this effect with 148 cfm (4.2 m3/min) of 76.8°F (24.9°C) air
flowing from the rear to the front. CFD studies are required to further evaluate the
effects of warm air entrainment at lower rack conditions due to Venturi effects
caused by high-velocity floor grates.
Rack exhaust temperatures were measured and compared to the return air
temperatures in order to validate little-to-no entrainment of cool air into the warm
exhaust air. Rack exhaust and CRAC return temperature comparison is also important
to understand effects of the 6% overprovisioning on reduction in return air temper-
ature. An average of 4°F (2.22°C) reduction of CRAC return temperature versus rack
exhaust temperature indicates near unity efficiency of air distribution. Additional
work is required to balance the remaining sections of the high density area.
The IT equipment in rack positions DN and DO130 faces both the front and rear
of the rack. In this configuration, the equipment on the back side of the rack pulls
the intake air from what would typically be a hot aisle if hot air was not contained.
In the containment approach, intake temperatures are within ASHRAE (2004) class
1 limits and are measured at 76°F (24.4°C) at 4U, 71°F (21.67°C) at 20U, and 74°F
(23.33°C) at 40U rack heights.
Additional energy savings are possible based on higher supply air temperatures.
Higher supply air temperatures allow higher chilled-water temperatures, which
result in greater chiller efficiency and additional free hours of cooling.
SUMMARY
CFD modeling and published studies have shown that twice as much air at
colder-than-required temperatures is being delivered to maintain the upper temper-
ature limit of the ASHRAE (2004) class 1 standard. There is a significant cost asso-
ciated with oversupply, as well as missed opportunity to efficiently operate the
CRACs and chiller plant to further reduce operational costs. Overprovisioning the
cool supply air at temperatures below 68°F (20°C) also prevented ASHRAE class 1
conditions from being maintained, because supply temperatures fell well below the lower
temperature limit of the class 1 range. Predictable temperatures at the intake of the
IT equipment were maintained, even when supplying only 6% more airflow rate at
REFERENCES
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
ASHRAE. 2005. Design Considerations for Datacom Equipment Centers. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
3
Non-Raised-Access Floor
Case Studies
3.1 NON-RAISED-ACCESS FLOOR WITH ROW COOLING
install a raised-access floor. Additionally, as seen in Figure 3.1, the room is compact
and irregularly shaped, further complicating the layout of IT and cooling equipment.
While conventional raised-access floor perimeter CRACs were considered, the
room shape and power density made it impractical to achieve the desired cooling and
redundancy levels using a traditional legacy approach.
The Cedar-Sinai Data Center can be broken down into six rack categories:
server, console, storage, networking, cooling, and power. The total solution is
composed of 26 equivalent rack positions, two of which are wide racks for the PDUs,
each serving zones 1 and 2.
The rack quantity breakdown by type is given in Table 3.1.
The final as-built layout created one area of particular interest. The integrity of
the cold aisle in zone 1, upper-right corner of Figure 3.1, is influenced directly by
warm air exhausted from rack 15 containing two network switches. Racks 16, 17,
and 18 were oriented with the fronts of the racks pulling air from the cold aisle.
However, these racks exhaust air against a wall with a return air path across the top
of the cold aisle of zone 1 before returning to a common hot aisle. Consequently,
zone 1 has been divided into two cooling groups: RC1–3 and RC4–7. The control
architecture is such that the RCs within a given group operate at a synchronized fan
speed while independently maintaining their own supply air temperature. This
allows RC1–3 to deal with local thermal challenges without penalizing the effi-
ciency of the four other RCs in zone 1. The four other RCs (RC4–7) would have little,
if any, beneficial effect on the area of interest even if their fans were operating at the
higher synchronized speeds of RC1–3.
MEASUREMENT TOOLS
Field measurements were taken to evaluate the overall health of the data center
and effectiveness of the implemented cooling solutions. These measurements used
both native capability of the installed equipment along with temporary equipment
for the purpose of measurement and data gathering.
Airflow Measurements
Each rack type for IT equipment was traversed at 30 points. The average veloc-
ity was multiplied by the flow area to establish airflow rates. Racks of similar config-
uration were assumed to have more-or-less equivalent airflow (see Table 3.2).
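A minimal sketch of the traverse reduction described above, averaging the point velocities and multiplying by the free flow area, is shown below. The velocities and area are placeholders for illustration, not measured values from this site.

```python
# 30-point velocity traverse reduced to a volumetric flow rate.
# Velocities (fpm) and flow area (ft2) are placeholders only.
velocities_fpm = [310, 295, 305, 288, 300] * 6      # 30 hypothetical readings
flow_area_ft2 = 3.5                                  # assumed intake free area

avg_velocity = sum(velocities_fpm) / len(velocities_fpm)
airflow_cfm = avg_velocity * flow_area_ft2
print(f"average velocity {avg_velocity:.0f} fpm -> {airflow_cfm:.0f} cfm")
```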
Temperature Measurements
Along the line of vertical symmetry, temperature measurements were taken of
each rack at three vertical heights on both inlet and outlet. Measurement elevations
are shown in Figure 3.2. Temperatures were logged for a period of five minutes and
the average value was computed for each sensor position. Additionally, temperature
measurements were taken of each RC at the same elevation as the racks on both the
inlet and outlet. Given the limited number of temperature sensors and data acquisi-
tion channels, it was not possible to take all of the readings concurrently. However,
the RC temperatures for a given zone were taken concurrently with adjacent racks
for both hot aisle and cold aisle.
Figures 3.3–3.4 represent the data captured during this period for the cold aisle
side of the racks. Each rack position and RC is depicted with upper, middle, lower,
and average temperature values. None of the observed temperatures at rack inlets
exceed the ASHRAE (2004) recommended limit of 77°F (25°C). The average rack
inlet air temperature for all measurements of zone 1 was 66.6°F (19.2°C), with a
standard deviation between sensors of 3.5°F (1.9°C). The average rack inlet air
temperature for all measurements of zone 2 was 65.2°F (18.4°C), with a standard
deviation between sensors of 3.6°F (2.0°C).
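The zone statistics quoted here, an average and a standard deviation across the sensor positions, reduce to a straightforward calculation. The sketch below shows the form of that reduction on hypothetical readings rather than the logged data.

```python
import statistics

# Hypothetical five-minute-average rack-inlet readings for several sensor positions (F)
inlet_temps_f = [66.2, 64.8, 69.1, 63.9, 67.0, 65.5]

mean_f = statistics.mean(inlet_temps_f)
stdev_f = statistics.stdev(inlet_temps_f)   # sample standard deviation
print(f"zone average {mean_f:.1f} F ({(mean_f - 32) / 1.8:.1f} C), "
      f"std dev {stdev_f:.1f} F ({stdev_f / 1.8:.1f} C)")
```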
CFD Analysis
A CFD study of the data center was used to further analyze the area of interest
where hot air from racks 16, 17, and 18 circulates above the cold aisle of zone 1 before
entering into the common hot aisle. Figure 3.5 clearly depicts a finger-like shape of
warm air extending out and over the cold aisle for zone 1. The volumes indicated by
the areas shaded in orange reflect areas with temperatures equal to or exceeding 75°F
(23.9°C). The CFD model was based on measured airflow and power levels recorded during
the site survey, along with the as-built geometry of the space.
The CFD results clearly show the temperatures at the rack inlet faces to be below the 75°F
(23.9°C) threshold.
Computed average rack inlet temperatures compare well with the average
values measured for each rack based on the three temperature measurements taken
at the specified heights (see Tables 3.3–3.4). The combined average error between
the CFD and average of all actual inlet temperatures measured is only 0.4°F
(0.22°C). While the scope of the case study is not to validate CFD as a tool, the
degree of correlation speaks well for the overall accuracy of data recorded.
Airflows for the three groupings of RCs are shown in Table 3.5. These values
reflect real-time measurements taken concurrently with temperature and load data.
[Excerpt from Tables 3.3–3.4: CFD-predicted versus measured (actual) average rack inlet temperatures, °F (°C), and their differences by rack position for zones 1 and 2; see Tables 3.3 and 3.4 for the full comparison.]
The data show the majority of RCs operating at only 44% to 46% full airflow while
supporting an average rack inlet temperature of only 65°F (18.3°C). There is signif-
icant reserve in operating fan speed, chilled-water flow through the coil, and temper-
ature to allow for increased loading and/or redundancy considerations.
Energy Balance
The instrumentation and equipment installed in the room allowed for three sepa-
rate assessments of load: electrical, water-side cooling, and air-side cooling (see
Table 3.6). Given the nature of measurements and equipment used for monitoring,
some degree of error was expected among the three sources. However, it was surpris-
ing that the air-to-electrical comparison reconciled more favorably than the electrical-to-water comparison.
The water-side number may be biased slightly higher due to modest latent loads in
the conditioned space.
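A rough sketch of the three-way energy balance described here, electrical input compared against water-side and air-side cooling loads, is shown below. The flow rates, temperature differences, and electrical load are assumed values chosen only to illustrate the reconciliation; they are not the Table 3.6 measurements.

```python
def waterside_kw(gpm, delta_t_f):
    """Chilled-water load: Q[Btu/h] ~= 500 * gpm * dT[F] for water near 44-56 F."""
    return 500.0 * gpm * delta_t_f / 3412.0

def airside_kw(cfm, delta_t_f):
    """Sensible air-side load: Q[Btu/h] ~= 1.08 * cfm * dT[F]."""
    return 1.08 * cfm * delta_t_f / 3412.0

electrical_kw = 66.0                               # assumed IT + lighting load
water_kw = waterside_kw(gpm=55, delta_t_f=8.2)     # assumed water-side readings
air_kw = airside_kw(cfm=21000, delta_t_f=10.0)     # assumed air-side readings

for label, value in [("water-side", water_kw), ("air-side", air_kw)]:
    print(f"{label}: {value:.1f} kW ({value / electrical_kw - 1:+.1%} vs electrical)")
```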
It should be noted that the thermal exchange between the conditioned room
and adjacent spaces is unknown. Observations made during the data gathering
suggest that there should be a net energy gain through the walls, as surrounding areas
were generally observed to be warmer than the conditioned space.
SUMMARY
This data center adheres well to ASHRAE thermal performance guidelines
(ASHRAE 2004). Additionally, the total fan power of the cooling system is only
3 kW compared to the net sensible load of 66 kW. Were this room built using a
conventional perimeter CRAC with similar full capacity and redundancy, it would
have required an estimated fan power of 18 kW versus 3 kW (assuming fixed
airflow, online N+1 redundancy, 0.3 in. (75 Pa) floor pressurization, and 72°F
[22.2°C] return air).
The choice of 65°F (18.3°C) for both the supply air temperature and rack inlet
temperature settings is less than optimal. Additional energy gains could easily be
realized by increasing the supply air setting to 68°F (20°C) and rack inlet temper-
ature to 72°F (22.2°C). This would allow a lower global air ratio (total cooling
airflow/total IT equipment airflow), resulting in an additional fan power reduction.
Rack inlet temperatures of 65°F (18.3°C) are not necessary to ensure reliable oper-
ation of the IT equipment.
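The global air ratio referred to above is simply total cooling airflow divided by total IT equipment airflow; combined with the fan affinity relation, it indicates roughly how much fan power trimming the oversupply could save. The sketch below illustrates the arithmetic with assumed airflows, not the values measured at this site.

```python
def global_air_ratio(cooling_cfm, it_cfm):
    """Total cooling airflow divided by total IT equipment airflow."""
    return cooling_cfm / it_cfm

# Assumed airflows for illustration only
it_cfm = 9000.0
cooling_cfm_now = 11700.0
cooling_cfm_target = it_cfm * 1.1        # target global air ratio of 1.1

ratio_now = global_air_ratio(cooling_cfm_now, it_cfm)
fan_power_fraction = (cooling_cfm_target / cooling_cfm_now) ** 3   # affinity law
print(f"air ratio {ratio_now:.2f} -> 1.10, "
      f"fan power roughly {(1 - fan_power_fraction):.0%} lower")
```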
The particular row-level cooling products used allow group coordination of up
to 12 individual units into one synchronized group. This particular case only
required a total of 11 InRow coolers but, theoretically, they all could have been
combined into one large group. Avoiding a single large group is preferable when the
layout does not allow each unit within the group to contribute substantially to the
cooling of all racks targeted by the group. This particular deployment subdivided
zone 1 into two specific groups along with a single group for zone 2. It is clear from
the layout in Figure 3.1 that thermal requirements in zone 1 for RCs 1–3 could not
have been substantially supported by the other four RCs in this zone. The decision
to subdivide this zone into two groups prevents the operation of RCs 4–7 at fan
speeds above the needs of their targeted racks.
Positioning the row coolers at the end of rows is ideal and prevents undesirable
recirculation of the CRAC air. This placement also ensures a higher capture index
for the CRACs and, thereby, a lower global air ratio (VanGilder 2007).
The cooling equipment matches well to the theoretical IT load with N+1 redun-
dancy. This favorable gross capacity match is further aided by the dynamic fan
response provided by the particular cooling product chosen. This allows for a very
close match between cooling airflow and IT airflow, greatly reducing electrical
demand created by cooling equipment fans.
REFERENCES
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
VanGilder, J.W. 2007. Capture index: An airflow-based rack cooling performance
metric. ASHRAE Transactions 113(1):126–36.
1. Based on rack configuration, high density of computers, and the absence of large
mainframe servers common in older data centers.
Based on a qualitative observation of the data center occupancy, the computer load
density at full occupancy is extrapolated. In addition to the typical W/ft2 metric, the
density is also calculated based on the number of racks and the rack footprint.
Additional information was collected so that the efficiencies of the cooling
equipment could be calculated. These efficiencies are compared to the design effi-
ciencies. Opportunities for energy-efficiency improvements are described, which
are based on observation of the mechanical system design and measured perfor-
mance. General design guidance is presented for consideration in future construc-
tion. Data center specific recommendations are made for the as-built systems.
Site Overview
Facility 6 is located in Silicon Valley, California. Two data centers were moni-
tored for energy consumption at Facility 6. The data centers are in separate office
buildings and constitute a relatively small percentage of the total building area (less
than 10%). The data centers are 2400 ft2 (223 m2) and 2500 ft2 (232 m2), respectively.
Since the data centers represent a small percentage of the overall building area,
whole-building power consumption is not relevant to determining the data centers’
power consumption and was not monitored. Both data centers house servers and stor-
age drives and operate 24 hours a day. Data center 6.1 serves corporate needs, while
data center 6.2 is mainly used for research and development of new engineering prod-
ucts. Occasionally, during normal business hours, a small number of employees may
be in the data centers working with the computers (see Figure 3.6).
The facility utilizes a balanced power 225 kVA UPS to provide a constant supply
of power to the data center at constant delivery voltage (480/277 V). The UPS converts
and stores alternating current as direct current in multiple battery packs. To power the
IT equipment, power is converted back to alternating current. In the event of a power
loss, a 400 kW diesel generator provides power for approximately ten hours.
Spot power measurements were taken at the UPS, at both the input and output,
to determine computer plug loads and losses at the UPS system (see Table 3.8).
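UPS losses follow directly from the paired input/output spot measurements: the loss is the difference and the efficiency is the ratio. The sketch below shows the arithmetic with hypothetical readings rather than the Table 3.8 values.

```python
# Hypothetical UPS spot measurements (kW)
ups_input_kw = 172.0
ups_output_kw = 156.0   # computer plug load delivered to the IT equipment

loss_kw = ups_input_kw - ups_output_kw
efficiency = ups_output_kw / ups_input_kw
print(f"UPS loss {loss_kw:.1f} kW, efficiency {efficiency:.1%}")
```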
Cooling System
The data center is cooled separately from the remainder of the building by a
chilled-water system. The system consists of two Trane air-cooled chillers: a 40 ton
scroll chiller and a 100 ton rotary chiller. The nominal efficiencies of the chillers are
1.1 and 1.3 kW/ton, respectively.1 The 100 ton chiller is served by the emergency
distribution panel (EDP) and is the primary chiller, though the 40 ton chiller is often
run in unison to ensure a sufficient supply of chilled water. The chilled-water pumps
are 1.5 hp (hydraulic horsepower; brake horsepower unlisted) variable-speed pumps
with control based on a differential pressure setpoint. A controlled bypass ensures
minimum flow through the chillers. The chilled-water system branches off into two
feeds, one which is dedicated to the data center, and the other which feeds the
computer labs.
1. Converted from the energy efficiency ratio (EER) listed on the equipment schedules. The
schedule for the 100 ton chiller was incomplete and, therefore, its EER was assumed to be
the same as the identical model chillers that are installed for data center 6.2. The nominal
loads are based on entering evaporator water temperature of 56°F, leaving evaporator
water temperature of 44°F, entering condenser air temperature of 95°F, and flow rates of
80 gpm and 200 gpm.
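The conversion mentioned in this footnote, from the EER listed on the equipment schedule to kW/ton, follows from the fixed relationship of 12,000 Btu/h per ton: kW/ton = 12/EER. A minimal check of that arithmetic is sketched below; the EER values shown are simply those implied by the 1.1 and 1.3 kW/ton figures quoted above, not values read from the schedules.

```python
def kw_per_ton(eer):
    """kW/ton = (12,000 Btu/h per ton) / (EER in Btu/h per W) / 1000 = 12 / EER."""
    return 12.0 / eer

for eer in (10.9, 9.2):   # example EERs implied by ~1.1 and ~1.3 kW/ton
    print(f"EER {eer} -> {kw_per_ton(eer):.2f} kW/ton")
```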
1. These were measured using an Elite power measuring instrument, an ultrasonic flow meter
for pipe flow, and thermistors inserted in the Pete’s plugs at the inlet and outlet of the
chilled-water line.
2. The numbering refers to the numbering physically on the units. (CRU #1, CRU #2, CRU
#3). This does not correspond with the numbering on the equipment schedule based on the
anticipated motor kW.
Lighting
Lighting in the data center consists of T-8 tubular fluorescent lamps. All lights
were on when taking power measurements.
1. These measurements were taken by measuring pressure drop across the circuit setter on the
chilled water line and by measuring temperatures at Pete’s Plugs on the supply and return
lines.
2. These measurements were made at the main branch that feeds only these units. Chilled-
water temperatures were performed by inserting thermistor probes between insulation and
the pipe surface. Flow measurements were made using an ultrasonic flowmeter.
3. Airflow was taken by multiplying the average velocity across the return grille with the
grille area, where the velocity was taken with a Shortridge velocity grid.
[Table excerpt: chiller pump power, proportioned to the data center load; spot measurement taken 8/21/02, 1.99 kW.]
1. Individual chiller kW proportioned based on the data center cooling load versus total chiller load. This value
will vary when the chiller load changes, even if the data center load stays constant, as the efficiency of the chiller
is not constant.
[Table excerpt: lighting, 1.16 kW, 1% of total.]
electrically active floor area, the resulting computer load density in W/ft2 is consis-
tent with what facility engineers use, though this is different from the “footprint”
energy density that manufacturers use. We have also calculated the W/ft2 based on
the rack area alone. In addition to the previous metrics, the noncomputer energy
densities are calculated, based on the data center area. Using the data center occu-
pancy,1 the computer load density at 100% occupancy is projected (see Table 3.11).
The computer load density based on the data center area (gross area) is 65 W/
ft2 (699.6 W/m2). At full occupancy, the computer load density is projected to be 81
W/ft2 (871.9 W/m2). The computer load density based on rack area is presently 246
W/ft2 (2647.9 W/m2) and is projected to be 307 W/ft2 (3305 W/m2) at full occu-
pancy. The average computer load based on the number of racks is currently 1.3 kW/
rack and is projected to be 1.6 kW/rack at full capacity. The noncomputer energy
density, which includes HVAC, lighting, and UPS losses, is measured at 23 W/ft2
(247.6 W/m2).
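These density metrics reduce to simple ratios over the various reference areas and rack counts. The sketch below computes them from placeholder inputs chosen only to roughly reproduce the data center 6.1 figures reported above; the areas, load, and occupancy are illustrative assumptions, not the measured values.

```python
FT2_PER_M2 = 10.764

def densities(computer_kw, gross_ft2, rack_ft2, n_racks, occupancy):
    """Load densities over gross floor area, rack footprint area, and rack count."""
    gross_w_ft2 = computer_kw * 1000 / gross_ft2
    return {
        "gross W/ft2": gross_w_ft2,
        "gross W/m2": gross_w_ft2 * FT2_PER_M2,
        "rack-area W/ft2": computer_kw * 1000 / rack_ft2,
        "kW per rack": computer_kw / n_racks,
        "projected W/ft2 at full occupancy": gross_w_ft2 / occupancy,
    }

# Placeholder inputs for illustration
for name, value in densities(156.0, 2400.0, 630.0, 120, 0.80).items():
    print(f"{name}: {value:.1f}")
```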
Since the rack density within data centers and computer types are site specific,
a more useful metric for evaluating how efficiently the data center is cooled can be
represented as a ratio of cooling power to computer power. The theoretical cooling
1. A qualitative assessment of how physically full the data center is. In this facility, occupancy
was determined by a visual inspection of how full the racks in place were.
load is the same as the sum of the computer loads and lighting loads (together, the
plug loads). (There is a small amount of human activity; however, the energy load
is insignificant compared to the computer loads.) This is a good cross check of
measurements and may also be an indication of the level of cooling that is provided
by non-data-center-dedicated cooling equipment (i.e., general office building or
“house” air to achieve minimum ventilation). The more traditional metrics of energy
per ton of cooling (kW/ton) are calculated for total HVAC efficiency (chillers,
pumps, and air handlers) and for the chillers. The air-handler efficiency is based on
how much air is actually being moved for the measured power consumption.
Table 3.12 shows that the cooling efficiency is 0.3 kW/kW. This, however, is based
on a cooling load that is below the theoretical cooling load by 30%, which suggests that
significant cooling is achieved by the whole-building cooling system (package units).
The efficiency and operation of this system was not evaluated. However, the whole-
building system has the ability to provide cooling by supplying outdoor air when the
weather is favorable (i.e., economizing), a very efficient way to provide cooling.
The average chiller efficiencies are slightly better than the design efficiencies,
which are at ARI conditions. This is expected since the ARI conditions assume 95°F
(35°C) air temperature entering the condenser, which is higher than the average
temperatures experienced during the monitored period. When outdoor air tempera-
tures are below this temperature, the chiller can reject heat more easily and, there-
fore has lower power consumption. Based on the outdoor air conditions in this area,
better efficiencies are expected. For every 1°F (0.6°C) drop in condenser tempera-
ture (outdoor air temperature), the chiller should experience an approximate 2.5%
increase in efficiency. In addition, their performance is poor compared to the perfor-
mance of typical water-cooled chillers. This area is certainly an area of opportunity
for energy savings in future construction.
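The rule of thumb quoted above, roughly 2.5% better chiller efficiency per 1°F (0.6°C) drop in condenser (outdoor air) temperature, can be used to sanity-check measured performance against the ARI rating point. The sketch below applies it with a compounded adjustment; the rated efficiency and temperatures shown are assumptions for illustration, not site data.

```python
def expected_kw_per_ton(rated_kw_per_ton, rated_cond_f=95.0,
                        actual_cond_f=75.0, pct_per_f=0.025):
    """Scale the ARI-rated kW/ton down ~2.5% per F of condenser temperature relief."""
    relief_f = rated_cond_f - actual_cond_f
    return rated_kw_per_ton * (1.0 - pct_per_f) ** relief_f

# Assumed 1.3 kW/ton ARI rating and a 75 F average outdoor air temperature
print(f"expected ~{expected_kw_per_ton(1.3):.2f} kW/ton")   # ~0.78 kW/ton
```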
The air-handler airflow delivery efficiencies were measured at 1367, 1375, and
1387 cfm/kW, which are below the design efficiencies by 40%–60%. This is likely
caused by increased pressure drop in the existing ductwork, which results in a decrease
in airflow, compared to the standard conditions under which fans are tested. Low pres-
sure drop duct design is important for achieving high air movement efficiencies.
event of a power loss, a 750 kW diesel generator provides power for approximately
ten hours.
Spot power measurements were taken at the UPS at both the input and output
in order to determine computer plug loads as well as losses at the UPS system.
Note that the UPS efficiencies at data center 6.2 are slightly higher than the effi-
ciency measured for the UPS serving data center 6.1.
Cooling System
The data center is cooled by a chilled-water system that consists of two 220
ton Trane rotary air-cooled chillers. The nominal efficiency of each chiller is 1.3
kW/ton.1 The chillers are piped in parallel, and both typically operate at all times.
The EDP serves one of the chillers. The chilled-water pumps are 8.5 hp (hydraulic
horsepower) constant-speed pumps. One main pipe feeds the cooling loads on each
floor; however, the data center is the last load fed by the main pipe.
As with data center 6.1, power consumption, flow, and chilled-water tempera-
tures2 were measured at each chiller over a period of several days to determine the
chiller efficiency over a period of varying temperatures.
Unlike the other data center, the chilled water feeds FCUs in the ceiling
plenum, which supplies the overhead duct system. The FCUs are constant speed
and have three-way valves. The system consists of a total of seven FCUs with cool-
ing capacities ranging from 104,000–190,000 Btu/h (30.5–55.7 kW) and design
airflows ranging from 5300–9600 cfm (150–271.8 m3/min). Air is returned through
grills in the ceiling. Minimum outdoor air is brought in through the house air-
conditioning system. As with data center 6.1, there is no humidity control in data
center 6.2.
The total chilled-water load to all the FCUs was monitored using the technique
of measuring flow rate and pipe surface temperatures.3 As with the previous data
center, it was necessary to identify the load solely to the data center in order to segre-
gate the chiller power consumption due to cooling of the data center only. The
number and arrangement of the FCUs did not allow for measurement of individual
fan-coil cooling load or air-supply flow rate.
The spot measurements and average of trended measurements are listed in Table
3.14 below. The chiller pump and chiller power are proportioned to the data center
cooling load in order to properly determine the electrical end use in the data center.
1. Based on 420 gpm, entering and leaving chilled-water temperatures of 56°F and 44°F,
respectively, and an entering condenser-water temperature of 95°F.
2. These were measured using an Elite power measuring instrument, an ultrasonic flowmeter
for pipe flow, and thermistors inserted in the Pete’s plugs at the inlet and outlet of the
chilled-water line.
3. These measurements were made at the main branch that feeds only these units. Chilled-
water temperatures were performed by inserting thermistor probes between insulation and
the pipe surface. Flow measurements were made using an ultrasonic flowmeter.
Lighting
Lighting in the data center consists of T-8 tubular fluorescent lamps, and all
lights were on when taking power measurements.
(1022.6 W/m2). This requires approximately 40 more tons of cooling, which, based on
the average measured chiller load, cannot be met by the chillers. The computer load
density, based on rack area, is presently 276 W/ft2 (2970.8 W/m2) and is projected to be
551 W/ft2 (5930.9 W/m2) at full occupancy. The average computer load, based on the
number of racks, is currently 1.4 kW/rack and is projected to be 2.9 kW/rack at full
capacity. The non-computer energy density, which includes HVAC, lighting, and UPS
losses, is measured at 33 W/ft2.
As was done for data center 6.1, the energy efficiency metrics for data
center 6.2 are shown in Table 3.17.
Table 3.17 shows that the cooling efficiency of approximately 0.6 kW/kW is
significantly less efficient than the cooling efficiency for data center 6.1. This is
explained by the differences in equipment, but the comparison is not entirely valid, since
data center 6.1's metrics suggest that significant cooling is provided by the whole-
building air-conditioning system. This does not appear to be the case with data center
6.2, where the measured cooling load is more than 10 tons larger than the theoretical
cooling load.1
The performance of the chillers is similar to what was observed in data center
6.1 (i.e., the performance was slightly better than the ARI-rated performance, which
is expected for the operating conditions). However, the performance of typical water-cooled
chillers far exceeds the performance of these units, which represents an opportunity for energy
savings in future construction.
The design efficiencies of the FCUs are comparable to the design efficiencies
of the AHUs used in data center 6.1, although the actual efficiencies were not
measured.
1. This is attributed to measurement error of the cooling load and the fact that computer loads
were assumed to be constant, while they actually may vary a small percent over time. This
assumes no other FCUs on the first floor serve non-data center rooms, which would
explain the small difference.
pump power is reduced by the cube of the reduction in pump speed, which is
directly proportional to the amount of fluid pumped.
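The cube relationship noted here is the pump affinity law: flow scales linearly with speed, head with the square of speed, and power with the cube. A short numeric sketch follows; the baseline power and flow fractions are assumed values for illustration.

```python
def pump_power_kw(baseline_kw, flow_fraction):
    """Affinity law: power scales with the cube of the flow (speed) fraction."""
    return baseline_kw * flow_fraction ** 3

baseline_kw = 6.3          # assumed full-speed pump power
for fraction in (1.0, 0.8, 0.6, 0.5):
    print(f"{fraction:.0%} flow -> {pump_power_kw(baseline_kw, fraction):.2f} kW")
```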
A primary-only variable pumping strategy must include a bypass valve that
ensures minimum flow to the chiller, and the use of two-way valves at the AHUs in
order to achieve lower pumping speeds. The control speed of the bypass valve should
also meet the chiller manufacturer’s recommendations of allowable turndown, such
that optimum chiller efficiency is achieved.1 Figure 3.11 describes the primary-only
variable-speed pumping strategy.
Air Management. The standard practice of cooling data centers employs an
underfloor system fed by CRACs. One common challenge with underfloor supply is
that the underfloor becomes congested with cabling, which increases the resistance
to airflow. This results in an increase in fan energy use. A generous underfloor depth
is essential for effective air distribution (we have seen 3 ft [0.9 m] in one facility).
An alternative to an underfloor air distribution system is a high-velocity over-
head supply combined with ceiling height returns. Such systems can be designed
efficiently if care is taken to keep air pressure drops to a minimum. In most cases,
1. This means that the flow through the chiller should be varied slowly enough that the
chiller is able to reach a quasi-steady-state condition and perform at its maximum
efficiency.
duct design to accommodate air-side economizers is also simpler with central air-
handling systems.
Another common problem identified with CRACs is that they often fight each
other in order to maintain a constant humidity setpoint. Humidity control systems
should be designed to prevent such fighting; this is relatively simple with large air
handlers serving a single space but can also be accomplished by controlling all
CRACs in unison.
Air Management—Rack Configuration. Another factor that influences cool-
ing in data centers is the server rack configuration. It is more logical for the aisles
to be arranged so that servers’ backs are facing each other and servers’ fronts are
facing each other. This way, cool air is drawn in through the front, and hot air blown
out the back. The Uptime Institute has published documents describing this method
for air management.1 Our observations of both data centers showed an inconsistent
rack configuration.
1. http://www.upsite.com/TUIpages/whitepapers/tuiaisles.html.
commissioning actually begins at the design stage, where the design strategy is crit-
ically reviewed. Either the design engineer serves as the commissioning agent or a
third-party commissioning agent is hired. Commissioning is different from standard
start-up testing in that it ensures systems function well relative to each other. In other
words, it employs a systems approach.
Many of the problems identified in building systems are often associated with
controls. A good controls scheme begins at the design level. In our experience, an
effective controls design includes (1) a detailed points list with accuracy levels and
sensor types and (2) a detailed sequence of operations. Both of these components are
essential to successfully implement the recommended high-efficiency chilled-water
system described above.
Though commissioning is relatively new to the industry, various organizations
have developed standards and guidelines. Such guidelines are available through orga-
nizations such as Portland Energy Conservation, Inc. (www.peci.org) or the Amer-
ican Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (ASHRAE
Guideline 1-1996 and ASHRAE Guideline 0-2005, The Commissioning Process).
Lighting Controls
The lighting power and lighting power densities for data center 6.2 were more
than twice those of data center 6.1. This is likely a result of occupants/engineers
entering the data center and turning the lights on. Lighting controls, such as occu-
pancy sensors, may be appropriate for these types of areas that are infrequently or
irregularly occupied. If 24-hour lighting is desired for security reasons, reduced light-
ing can be provided at all hours, with additional lighting for occupied periods.
Chiller Staging. Currently, both chillers are running most of the time, regard-
less of the load. It would be more efficient to stage the chillers so that the smaller
chiller comes on when the larger chiller is unable to satisfy the cooling requirements.
This staging could be based on the primary chiller being unable to meet its chilled-
water setpoint. The measured data showed that the load did not exceed 90 tons and,
therefore, the large chiller should be capable of meeting the load most of the time.
Attention should be paid to how quickly flow is diverted from the primary chiller so
that it does not inadvertently shut off on low load.
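A staging sequence of the kind recommended here can be expressed as a simple rule: run the large (primary) chiller alone and enable the small chiller only after the primary has failed to hold its chilled-water supply setpoint for some dwell time. The sketch below is illustrative pseudologic, with assumed setpoint, deadband, and delay values rather than the facility's control settings.

```python
SETPOINT_F = 44.0        # assumed chilled-water supply setpoint
DEADBAND_F = 2.0         # allowable excursion before staging up
STAGE_DELAY_STEPS = 3    # consecutive readings above band before staging up

def stage_small_chiller(supply_temps_f, small_on=False, count=0):
    """Return the small-chiller state after a sequence of supply temperature readings."""
    for t in supply_temps_f:
        if t > SETPOINT_F + DEADBAND_F:
            count += 1
            if count >= STAGE_DELAY_STEPS:
                small_on = True
        else:
            count = 0
            small_on = False      # stage down once the primary holds setpoint
    return small_on

print(stage_small_chiller([44.5, 46.8, 47.1, 47.3]))   # True: primary cannot hold setpoint
```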
Triple-Duty Valves. Triple-duty valves have been installed on the discharge of
each of the chilled-water pumps. It was recommended that the triple-duty valves be
opened completely.
Best Practices
The following information collects the best ideas in data center cooling litera-
ture over the last five years (Schmidt and Iyengar 2007), including best practices
from some of the case studies presented in this book. A review of literature prior to
2002 can be found in Schmidt and Shaukatullah (2003), and a discussion of this topic
was presented by Schmidt et al. (2005d). Belady and Beaty (2005) provide a broad
high-level framework for data center cooling roadmaps. Beaty et al. (2005a, 2005b)
present a two-part study that covers important aspects of data center cooling design,
such as load calculation, space planning, cooling-system choices, and modifying
legacy low heat flux facilities to allow for high density computer equipment clusters.
Belady and Malone (2006) report projected heat flux and rack power information for
the future to complement the work of ASHRAE (2005).
The different topics covered herein include data center new-building design,
accommodating future growth, raised-access and non-raised-access floor designs,
localized rack cooling, and energy management and efficiency.
When building a new data center, basic cooling concepts need to be evaluated
before one selects the one that best fits the needs of the IT customer and accommo-
dates future growth. With this in mind, basic questions need to be answered before
one proceeds to outline the structural requirements of a new data center.
1. Of the numerous potential ventilation schemes, which is best suited for air cool-
ing datacom equipment?
2. If a raised-access floor is planned, what underfloor plenum height should one
choose to allow proper distribution of airflow while being mindful of construc-
tion costs?
3. What ceiling height is optimum for both raised-access (underfloor supply) and
non-raised-access floor (overhead supply) data centers?
4. Where should one place cabling trays and piping under a raised-access floor to
minimize airflow distribution problems?
5. How should one design for future growth and an increase in datacom equipment
power, which increases cooling needs?
6. Where should CRACs be placed for most efficient use and best cooling of IT
equipment?
These are basic questions that need to be answered before the planning process
continues. This section attempts to outline the best thinking on these issues in the
industry.
Nakao et al. (1991) numerically modeled representative geometries for four data
center ventilation schemes: underfloor supply (raised-access floor) with ceiling
exhaust, overhead supply with underfloor exhaust, underfloor supply with horizontal
exhaust, and overhead supply with horizontal exhaust. The heat flux modeled was for
61.3 W/ft2 (660 W/m2) with 80%–220% chilled-air supply fractions of total rack
flow rate. Noh et al. (1998) used CFD modeling to compare three different designs
for the data center—underfloor supply (raised-access floor) with ceiling exhaust,
overhead supply with underfloor exhaust, and overhead supply with horizontal (wall)
exhaust—using 5–6 kW racks that provided heat fluxes of 37.1 W/ft2 (400 W/m2)
for telecommunications applications. Shrivastava et al. (2005a) used numerical CFD
modeling to characterize and contrast the thermal performance of seven distinct
ventilation schemes for data center air cooling, as illustrated in Figure 4.1. In a
continuation of this work, Shrivastava et al. (2005b) used statistical significance
levels to quantify the effect of three variables—ceiling height, chilled-air supply
percentage, and return vent location—on the facility thermal performance for the
seven designs shown in Figure 4.1. Sorell et al. (2005) also used CFD to compare the
non-raised-access floor (overhead supply) design with the raised-access floor (under-
floor) designs for air delivery. Herrlin and Belady (2006) and Schmidt and Iyengar (2007) have also used CFD methods to compare underfloor and overhead air supply designs.
Furihata et al. (2003, 2004a, 2004b) and Hayama et al. (2003, 2004) developed
an air-conditioning flow method that reduces the volume of supplied air while main-
taining proper cooling for the computer equipment, and they established an airflow
adjustment mechanism design methodology for proper distribution of supplied air
for air conditioning. They monitored the temperature of the exit air from racks and
controlled the flow into the rack to maintain similar outlet temperatures for all the
racks. This method required a flow-control mechanism be added to the bottom of the
racks. Spinazzola (2003) discussed a specialized cooling configuration whereby air
is ducted in and out of the rack via the intake and exhaust plenums, and the server
equipment is designed for a higher air temperature rise through the server, resulting in energy savings with respect to CRAC capacity.
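The following sketch is an illustration of the idea behind this approach, not the published algorithm: adjust each rack's supply airflow so that all rack outlet temperatures converge toward a common target, rather than oversupplying every rack. The target temperature, gain, and flow limits are assumed values.

def adjust_rack_flows(outlet_temps_c, flows_cfm, target_outlet_c=35.0,
                      gain_cfm_per_c=25.0, min_cfm=100.0, max_cfm=2000.0):
    """Return new per-rack supply flows after one proportional adjustment step.

    Racks running hotter than the target receive more air and racks running
    cooler receive less, keeping total supply close to what the load needs.
    """
    new_flows = []
    for t_out, flow in zip(outlet_temps_c, flows_cfm):
        error = t_out - target_outlet_c       # positive means the rack is too hot
        flow = flow + gain_cfm_per_c * error  # simple proportional correction
        new_flows.append(min(max_cfm, max(min_cfm, flow)))
    return new_flows

# Example: three racks at equal flow, with the middle rack running hot
print(adjust_rack_flows([33.0, 38.0, 35.0], [600.0, 600.0, 600.0]))
# -> [550.0, 675.0, 600.0]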
vents at the floor or at the bottom part of the walls (see Figure 4.1c)
(Shrivastava et al. 2005a; Nakao et al. 1991).
• The typical underfloor supply design (see Figure 4.1a) can result in hot
spots at the very top part of the rack inlet due to hot-air recirculation
patterns (Schmidt 2001; Sorell et al. 2005; Herrlin and Belady 2006;
Furihata et al. 2003; Karlsson and Moshfegh 2003). This does not occur
in overhead supply designs (see Figures 4.1b–4.1c), where chilled air is supplied from the top and is well mixed by the time it reaches the top of the rack (Sorell et al. 2005; Herrlin and Belady 2006; Furihata et al. 2003).
• Of three variables, the chilled airflow supply percentage, the ceiling height,
and the return hot-air vent location, the chilled airflow supply fraction had the
biggest influence on rack inlet temperatures for a variety of ventilation
schemes (Shrivastava et al. 2005b).
• Steep temperature gradients at the front of the rack can occur with a high server density layout and underfloor chilled-air supply (Sorell et al. 2005; Schmidt and Iyengar 2007). This phenomenon is sometimes less pronounced for the same rack layout if an overhead design is employed (Sorell et al. 2005).
• Directing hot exhaust air from equipment and cabinets upward into a ceiling
return plenum may be superior to simply having a high ceiling (Beaty and
Davidson 2005).
• If flexibility exists in the orientation of rows of equipment, a layout that
allows hot air unobstructed access to the return of CRACs (or other cooling
system returns) should be superior to a layout with rows perpendicular to the
CRACs. Enhancing the natural flow of the exhaust air from point A to point B
should reduce the recycle potential for the warmest air (Beaty and Davidson
2005).
• A cold-aisle/hot-aisle arrangement should be followed in laying out racks within a data center; the fronts of the racks, which draw in chilled air supplied either overhead or from the raised-access floor, should face the cold aisle into which that chilled air is discharged (Beaty and Davidson 2005; Beaty and Schmidt 2004; Beaty and Davidson 2003; ASHRAE 2003).
access floors to provide an answer. The key question is what height provides the great-
est flexibility for making adjustments to the airflow as datacom equipment is moved
around the data center and newer datacom racks are brought in and others, possibly
lower-powered racks, are moved out. The two parameters of the raised-access floor that
affect the flow distribution and the ability to adjust the flow throughout the floor are the
raised-access floor height and the percentage opening of the tiles on the raised-access
floor. The plenum height has significant influence on the horizontal velocity and pres-
sure distribution in the plenum. As the plenum height increases, the velocities decrease
and the pressure variations diminish, leading to a more uniform airflow distribution.
This can be shown analytically with the Bernoulli equation. To illustrate the effect of
plenum height, Karki et al. (2003) performed an analysis on a base configuration
shown in Figure 4.1a. When the raised-access floor height is not very high (6–12 in.
[0.15–0.3 m]), the flow from the perforated tiles nearest the CRAC is very low, and in
some cases reverse flow occurs. Thus, datacom equipment cannot be placed in this area
and be adequately provided with chilled air. However, as the height of the raised-access
floor increases (up to 30 in. [0.76 m]), the distribution of flow across the tiles becomes
much more uniform and the reverse flow near the CRAC is eliminated. The simulations
by Karki et al. (2003) were for tiles that were 25% open.
So what happens when the perforated tiles are more open, and will this allow
the raised-access floor height to decrease? The variation in tile flow rates becomes
much larger as one increases the perforated openings. If one desires a uniform distribution of air from the perforated tiles because all of the datacom equipment residing adjacent to these tiles is similar, then one can vary the perforated tile openings.
Common perforated tiles typically have open free area ratios of 6%, 11%, 25%, 40%,
and 60%. To make the flow distribution uniform, it is necessary to encourage the
flow near the CRAC and discourage it at positions away from the CRAC.
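The following short calculation, a sketch built on the Bernoulli relation noted above with assumed CRAC airflow and plenum width, illustrates why a deeper plenum reduces the underfloor velocity and, with it, the static-pressure variation that drives nonuniform tile flow.

RHO_AIR = 1.2          # kg/m^3, air density
CRAC_FLOW_M3S = 5.0    # assumed airflow delivered by one CRAC into the plenum
PLENUM_WIDTH_M = 3.0   # assumed width of the underfloor flow path

for depth_m in (0.15, 0.30, 0.45, 0.76):                   # roughly 6 in. to 30 in.
    velocity = CRAC_FLOW_M3S / (PLENUM_WIDTH_M * depth_m)  # mean horizontal velocity
    dynamic_pressure = 0.5 * RHO_AIR * velocity ** 2       # scale of static-pressure variation
    print(f"depth {depth_m:.2f} m: V = {velocity:5.2f} m/s, q = {dynamic_pressure:5.1f} Pa")

# Doubling the plenum depth halves the velocity and cuts the pressure variation
# by roughly a factor of four, which is why deeper plenums give more uniform
# tile flow and avoid reverse flow near the CRAC.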
Recommendations/Guidelines
• For low raised-access floors (6–12 in. [0.15–0.3 m]), do not place datacom
equipment close to CRACs since low airflow or reverse flow can occur from
the perforated tiles.
• Airflow from a large number of perforated tiles can be made uniform if perforated tiles of varying percent openings are distributed so that some areas have higher percent openings to encourage more flow, while other areas have lower percent openings to discourage flow.
• Partitions can be placed underneath the raised-access floor to direct air into
the desired areas.
• Modeling suggests that raised-access floors should be designed to allow a free
flow height of at least 24 in. (0.61 m); if piping and cabling take up 6 in., then
the raised-access floor height should be 30 in. (0.76 m) (Patankar and Karki
2004; Beaty and Davidson 2005). A large underfloor depth of 24 in. (0.61 m)
was also recommended by VanGilder and Schmidt (2005).
Recommendations/Guidelines
• When the chilled air supplied from the perforated tiles exceeds the rack flow rates (e.g., 110% of rack flow), increasing the ceiling height reduces the datacom equipment intake temperatures for three cases: (1) underfloor chilled-air supply and room CRAC hot-air return with a hood at the top of the CRAC, (2) underfloor air supply and ceiling hot-air return that vents to the CRAC, and (3) overhead air supply and room CRAC hot-air return at the bottom of the CRAC (Sorell et al. 2006).
• For flows from the perforated tiles equal to or less than the datacom equip-
ment flow rates, increasing the ceiling height can result in increased inlet tem-
peratures for underfloor air distribution. A hot recirculation cell intensifies
over the datacom equipment with increased height (Schmidt 2001).
• Increasing the ceiling height from 9 ft (2.74 m) to 12 ft (3.66 m) reduces the rack inlet temperature by as much as 11°F–22°F (6°C–12°C) in hot spot regions and has a small, inconsistent impact in lower heat flux regions for 6 ft (1.8 m) tall racks arranged in a hot-aisle/cold-aisle fashion on a raised-access floor (Shrivastava et al. 2005).
ANSI/TIA-942 Recommendations/Guidelines
• Perforated access floor tiles should be located in the cold aisles rather than in
the hot aisles to improve the functioning of hot and cold aisles.
• No cable trays or other obstruction should be placed in the cold aisles below
the perforated tiles.
• Telecommunications cabling under the access floor shall be in ventilated
cable trays that do not block airflow. Additional cable tray design consider-
ations are provided in ANSI/TIA-569-B (TIA 2003).
• Underfloor cable tray routing should be coordinated with other underfloor
systems during the planning stages of the building. Readers are referred to
NEMA VE 2-2001 (NEMA 2001) for recommendations regarding installation
of cable trays.
Recommendations/Guidelines
• If possible, chilled-water pipes and cabling should be kept away from the
exhaust of the CRACs (Schmidt et al. 2004).
• Underfloor blockages have the biggest influence on flow rate uniformity
through the perforated tiles (VanGilder and Schmidt 2005).
• Blockages that are parallel to the hot and cold aisles have much lower impact
than those that run perpendicular to the aisle lengths in cases where CRACs
are located parallel to the computer rack equipment aisles (Bhopte et al.
2006).
• Blockages occurring under the cold aisle have the effect of reducing perfo-
rated tile flow rates (Bhopte et al. 2006).
Recommendations/Guidelines
• If flexibility exists in the placement of the CRACs, place them facing the
hot aisle rather than cold aisles, as the underfloor velocity pressure should
be minimized in cold aisles (Beaty and Davidson 2005; Schmidt and Iyen-
gar 2005).
• If CRACs are aligned in parallel rows on a raised-access floor, then each row
of CRACs should exhaust air in a direction that increases the static pressure
across the floor rather than in a way in which their plumes collide, which causes decreased static pressure in those regions and an overall loss of chilled air delivered through the raised-access floor (Koplin 2003).
• Racks that have a clear path of hot air back to the intakes of the CRACs gen-
erally show low rack air temperatures (Beaty and Davidson 2005; Schmidt
and Iyengar 2005).
• Separating the cold supply air from the hot exhaust air is key to improved energy efficiency of the ventilation system.
• For better control of the air temperature at the IT equipment inlets, the CRACs should be controlled on the outlet air leaving the CRACs and not on the inlet air returning from the racks (see the sketch after this list).
• Airflow rate distribution in the perforated tiles is more uniform when all the
CRACs discharge in the same direction, and distribution is poor (nonuniform)
when the CRACs discharge air such that the air streams collide with each
other (Schmidt et al. 2004).
• Turning vanes and baffles appeared to reduce the CRAC airflow rate by about
15%. It is thus preferable that turning vanes (scoops) not be used in CRACs
(Schmidt et al. 2004). However, when turning vanes are used on CRACs fac-
ing each other, their orientation should be such that the airflow from the
CRACs is in the same direction (Schmidt et al. 2004).
• Integrating sophisticated thermal instrumentation and control of the data center environment with the operating parameters of the CRACs (e.g., volumetric airflow rate or chilled-air setpoint temperature) can result in significant energy savings of around 50% (Boucher et al. 2004; Bash et al. 2006). VFDs can be used to change fan speeds and, thus, CRAC airflow rates, and the chilled-air setpoint temperatures can be changed by controlling the condenser conditions of the CRAC (Boucher et al. 2004).
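The sketch below combines the two control ideas above: hold the CRAC outlet (supply) air at its setpoint rather than controlling on return air, and trim fan speed with a VFD based on the warmest rack inlet temperature. The gains, limits, and setpoints are illustrative assumptions, not the cited authors' implementations.

def crac_control_step(supply_temp_c, rack_inlet_temps_c,
                      supply_setpoint_c=16.0, inlet_limit_c=25.0,
                      valve_pos=0.5, fan_speed=0.8):
    """Return updated (valve_pos, fan_speed), each on a 0..1 scale."""
    def clamp(x, lo=0.2, hi=1.0):
        return min(hi, max(lo, x))
    # Control cooling on the air leaving the CRAC, not on the return air.
    valve_pos += 0.05 * (supply_temp_c - supply_setpoint_c)
    # Speed the fan up only when the warmest rack inlet approaches its limit;
    # otherwise slow it down to save fan energy.
    worst_inlet = max(rack_inlet_temps_c)
    fan_speed += 0.02 * (worst_inlet - inlet_limit_c)
    return clamp(valve_pos), clamp(fan_speed)

# Example: supply air 1 C warm, rack inlets comfortably below their limit
print(crac_control_step(17.0, [22.0, 23.5, 21.0]))   # approximately (0.55, 0.77)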
Recommendations/Guidelines
• Develop phased plans for the installation of mechanical and electrical equipment, matching cooling and electrical infrastructure to IT requirements and thereby incurring infrastructure cost only when required (Kurkjian and Glass 2004; Beaty and Schmidt 2004).
• The maximum capability of the infrastructure that will accommodate the last
phase installation of datacom equipment will determine the corresponding
sizes of the mechanical and electrical rooms (Kurkjian and Glass 2004).
• Place capped valves in those locations where CRACs can be installed to sup-
port future IT equipment (Kurkjian and Glass 2004).
• Capped valves can also accommodate future computing equipment technolo-
gies that require direct connection of chilled water through a heat exchanger
to a secondary loop that cools the electronics. The heat transfer media for cooling the electronics could be water, refrigerants, fluorinerts, or other media not yet determined (Kurkjian and Glass 2004; Beaty and Schmidt 2004).
• There are three advantages to placing the CRACs and piping in corridors
around the perimeter of the IT equipment room (Kurkjian and Glass 2004):
(1) maintenance of the CRACs is removed from the data center, (2) piping is
installed in the corridor and not on the main floor so that any valves that need
to be exercised are not inside the data center, and (3) future CRAC installa-
tions are limited to the corridor and do not intrude on the data center floor.
• Eliminate reheat coils and humidification from the local CRACs and incorpo-
rate those in the central AHUs (Kurkjian and Glass 2004).
• New data centers constructed completely inside the building (no common
exterior wall) with vapor barriers provided on the data center walls minimize
the effect of the outdoor environment (Kurkjian and Glass 2004).
• To minimize future construction work within the data center, all piping systems are typically installed up front to meet the future design loads (Kurkjian and Glass 2004; Beaty and Schmidt 2004). Two approaches provide fault-tolerant piping systems: (1) dual chilled-water supply and return piping to all air conditioners, which makes completely redundant supply and return piping systems available to all equipment in the event of a failure, and (2) chilled-water supply and return piping loops around all of the CRACs, with isolation valves that permit all equipment to be fed from either direction of the loop.
Recommendations/Guidelines
• Perforated tiles (and, thus, racks) should not be located very close to the CRACs. Tiles very close to the CRACs can draw air down from the room because of the localized high underfloor velocities and, therefore, low static pressures caused by the Bernoulli effect (Schmidt et al. 2004).
• Airflow rate uniformity through the perforated tiles can be achieved by using
restrictive tiles (e.g., 25% open), minimizing blockages, increasing plenum
depth, reducing leakage flow, and using a hot-aisle/cold-aisle configuration
(VanGilder and Schmidt 2005).
• While restrictive tiles can improve tile flow uniformity, this is not recom-
mended in hot spot areas where it is beneficial to supply as much chilled air as
possible to the cold aisle.
• The hot-aisle/cold-aisle arrangement is a good design, but rows of more than ten tiles should be avoided when there is a CRAC on one end and a wall on the other (VanGilder and Schmidt 2005).
• If the seams in the floor tiles are sealed, then distributed air leakage can be
reduced by 5%–15% in a data center (Radmehr et al. 2005).
• Dampers should not be used in perforated tiles and, if present, they should be
removed. The damper element can move over time, and setting the position of
the damper can be problematic. It is much better to have perforated tiles with
different percentages of open area to allow optimizing the ventilation of a
raised-access floor.
• Unused cable openings should be closed since they allow supply air to go
where it is not needed. If these openings are large or frequent enough, they
also allow the static pressure to bleed from the raised-access floor plenum
(Beaty and Davidson 2005).
• The more maldistributed the flow exiting the perforated tiles along a cold aisle, the lower the average rack inlet temperatures bordering the aisle. For higher tile flows, the maldistribution did not have as large an effect at the highest-powered rack locations (Schmidt and Cruz 2003a).
• Where high-powered racks draw from multiple perforated tiles, inlet air tem-
peratures can be maintained within temperature specifications (Schmidt and
Iyengar 2005).
• If the hot aisle is too hot for servicing, then a limited number of perforated
tiles can be placed in the hot aisle to encourage thermal dilution (Beaty and
Davidson 2005; Schmidt and Iyengar 2005).
• Inlet air temperatures increased as more chilled air shifted to the hot aisle.
The most efficient use of the chilled air is to exhaust it in the cold aisle such
that it washes the fronts of the racks (Schmidt and Cruz 2002a).
Recommendations/Guidelines
• In order to maintain a given inlet temperature, the chilled airflow rate exhaust-
ing from the perforated tiles should increase with increased rack flow rate
(Schmidt and Cruz 2003b).
• Decreasing rack flow rates increased rack air inlet temperatures. Higher rack flow tends to increase the mixing in the data center, thereby lowering the temperature of the air that is drawn into the racks (Schmidt and Cruz 2003b).
• Necessary and sufficient conditions to meet rack inlet temperatures for high-powered racks (>10 kW) with a moderate (18°F–36°F [10°C–20°C]) rack air temperature rise are as follows (a worked example is sketched after this list): (1) the perforated tiles immediately in front of a rack must supply one-quarter to one-half of the rack flow rate, with another ~25% supplied through the cable cutout openings (Schmidt 2004; Schmidt et al. 2005a, 2006), and (2) the CRAC capacity in the region of the racks must be equal to or greater than the localized hot spot rack heat load (Schmidt 2004; Schmidt et al. 2005a, 2006).
• For data centers with fully populated, low-powered racks (1–3 kW/rack), a chilled-air supply of 50% of the rack flow rate is sufficient to meet inlet air temperatures (Furihata et al. 2004a).
• When the rack air temperature rise is greater than 36°F (20°C) for high-powered racks (>10 kW), it is anticipated that closer to a 100% chilled-air supply fraction will be needed.
• To eliminate the recirculation of hot air from the rear of a rack over the top of the
rack and into the front of the rack near the top, the front cover can be designed to
restrict air draw into the rack to only the bottom portions of the rack (Wang 2004).
• The IT industry provides guidelines for airflow within a rack: front to back, front
to top, and front to back and top (Beaty and Davidson 2003; ASHRAE 2004).
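The following worked example applies the airflow rule of thumb above to an assumed 20 kW rack running a 15°C air temperature rise; all inputs are illustrative.

RHO = 1.2      # kg/m^3, air density
CP = 1005.0    # J/(kg*K), specific heat of air

rack_power_w = 20000.0    # assumed high-powered rack (>10 kW)
rack_delta_t_c = 15.0     # moderate rise, within the 10 C to 20 C band above

rack_flow_m3s = rack_power_w / (RHO * CP * rack_delta_t_c)   # from Q = m_dot * cp * dT
rack_flow_cfm = rack_flow_m3s * 2118.9                       # convert m^3/s to CFM

tile_low_cfm = 0.25 * rack_flow_cfm     # one-quarter of rack flow from the tile
tile_high_cfm = 0.50 * rack_flow_cfm    # up to one-half of rack flow from the tile
cutout_cfm = 0.25 * rack_flow_cfm       # roughly 25% more via the cable cutouts

print(f"rack airflow      ~{rack_flow_cfm:.0f} CFM")                        # ~2340 CFM
print(f"tile supply range ~{tile_low_cfm:.0f} to {tile_high_cfm:.0f} CFM")  # ~590 to 1170 CFM
print(f"cable cutouts     ~{cutout_cfm:.0f} CFM")                           # ~590 CFM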
Recommendations/Guidelines
• Wider cold aisles deliver more chilled air to the servers and lower the velocity exiting the tiles, thereby eliminating potential blow-by of high-velocity chilled air (Beaty and Davidson 2005).
Recommendations/Guidelines
• Placing the cooling near the source of heat shortens the distance the air must
be moved. This increases the capacity, flexibility, efficiency, and scalability of
the cooling systems (Baer 2004; Schmidt et al. 2005b; Heydari and Sabounchi
2004).
• Placing liquid-cooling units at the end of rows can eliminate or reduce the wraparound effect of hot air from the rear of racks.
• Localized rack cooling can more closely match the rack power load and
thereby greatly improve the overall data center efficiency.
Recommendations/Guidelines
• Use ~30° diffusers if you want to reduce the temperatures at the top of the
racks, and do not use diffusers (blowing air straight down) if the temperatures
at the bottom of the racks need to be cool (Iyengar et al. 2005).
• Using the air supply diffuser close to the top of the racks helps the bottom of
the racks, and a larger clearance helps the tops of the racks (Iyengar et al.
2005).
• High rack heat loads can mean high rack flow rates and better mixing; this can
sometimes reduce the inlet air temperature to the rack (Iyengar et al. 2005).
• The air at the top of the rack is usually hotter than at the bottom, although this effect is not as pronounced as in underfloor air supply designs (Sorell et al. 2005; Iyengar et al. 2005).
with server workload allocation to allow the data center to function at its most energy-efficient operating point. Patel et al. (2002) studied the impact of nonuniformity in rack heat load distribution on the energy efficiency of the air-cooling units, with an emphasis on CRAC load balancing, rack and CRAC layout optimization, and CRAC sizing. White and Abels (2004) discuss energy management via software-based algorithms in a dynamic virtual data center. Herold and Rademacher (2002) described a natural-gas-turbine-powered data center that incorporates waste heat recovery using absorption chillers (ASHRAE 2006). The CRAC thermal parameter control proposed by Boucher et al. (2004) and Bash et al. (2006) is discussed in the preceding CRAC configuration section.
More recently, a comprehensive design guide published by the Pacific Gas and Electric Company (PG&E 2006) and developed by Rumsey Engineers and researchers at LBNL in California resulted in a set of guidelines for energy-efficient data center design. Some of these best practices are listed below:
• Centralized air handlers allow larger, more efficient fans to be used.
• Optimize the refrigeration plant using higher chilled-water temperature setpoints, variable-flow evaporators and staging controls, lower condenser-water temperature setpoints, high-efficiency VFDs on chiller pumps, and thermal storage units to handle peak loads.
• Water-cooled chillers can offer significant energy savings over air-cooled
chillers, particularly in dry climates. Among the options of water-cooled chill-
ers, variable-speed centrifugal are the most energy efficient.
• Variable-speed fans on cooling towers allow for optimized tower control.
• Premium efficiency motors and high-efficiency pumps are recommended.
• Localize liquid cooling of racks to augment air-handling capabilities of exist-
ing cooling infrastructure.
• Use free cooling via a water-side economizer to cool the building chilled water in mild outdoor conditions and bypass the refrigeration system (see the sketch after this list).
• Humidity control is very energy intensive. It is also difficult to sustain due to
susceptibility to sensor drift. Waste heat in the return airstream can be used
for adiabatic humidification. A common control signal can be used to ensure
all CRACs are set to the same humidity setpoint.
• Use high-reliability generation units as the primary power source with the grid as backup; also use waste-heat recovery systems, such as adsorption chillers. This allows the elimination of backup power sources.
• Use high-efficiency UPS systems. For battery-based power backup, load the UPS at as high a load factor as possible, at least 40% of rated capacity. This may require the use of smaller battery-based UPS systems in parallel.
• For line-interactive systems, use power conditioning to operate the system in its most efficient line mode.
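The sketch below illustrates a simple water-side economizer (free-cooling) enable decision: bypass the chillers whenever the cooling towers, through a plate heat exchanger, can produce water cold enough to meet the chilled-water setpoint. The approach temperatures and setpoints are illustrative assumptions.

TOWER_APPROACH_C = 4.0   # assumed cooling-tower approach to outdoor wet bulb
HX_APPROACH_C = 1.5      # assumed plate heat exchanger approach

def economizer_available(outdoor_wetbulb_c, chws_setpoint_c):
    """True when tower water via the heat exchanger can meet the CHW setpoint."""
    achievable_chws = outdoor_wetbulb_c + TOWER_APPROACH_C + HX_APPROACH_C
    return achievable_chws <= chws_setpoint_c

# With an elevated 15 C (59 F) chilled-water setpoint, free cooling is available
# whenever the outdoor wet bulb is at or below about 9.5 C (49 F).
print(economizer_available(8.0, 15.0))    # True
print(economizer_available(12.0, 15.0))   # False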
ASHRAE recently published a book as part of the Datacom Series titled Best
Practices for Datacom Facility Energy Efficiency (ASHRAE 2008). This book
contains a number of additional recommendations related to energy efficiency, and
the reader is encouraged to consult it for further details.
Beaty, D., N. Chauhan, and D. Dyer. 2005a. High density cooling of data centers
and telecom facilities—Part 1. ASHRAE Transactions 111(1):921–31.
Beaty, D., N. Chauhan, and D. Dyer. 2005b. High density cooling of data centers and telecom facilities—Part 2. ASHRAE Transactions 111(1):932–44.
Bedekar, V., S. Karajgikar, D. Agonafer, M. Iyengar, and R. Schmidt. 2006. Effect
of CRAC location on fixed rack layout. Proceedings of the Intersociety Con-
ference on Thermal Phenomena (ITherm), San Diego, CA.
Belady, C., and C. Malone. 2006. Data center power projections to 2014. Proceed-
ings of the Intersociety Conference on Thermal Phenomena (ITherm), San
Diego, CA.
Belady, C., and D. Beaty. 2005. Data centers—Roadmap for datacom cooling.
ASHRAE Journal 47(12):52–5.
Bhopte, S., R. Schmidt, D. Agonafer, and B. Sammakia. 2005. Optimization of
data center room layout to minimize rack inlet air temperature. Proceedings of
Interpack, San Francisco, CA.
Bhopte, S., B. Sammakia, R. Schmidt, M. Iyengar, and D. Agonafer. 2006. Effect
of under floor blockages on data center performance. Proceedings of the
Intersociety Conference on Thermal Phenomena (ITherm), San Diego, CA.
Boucher, T., D. Auslander, C. Bash, C. Federspiel, and C. Patel. 2004. Viability of
dynamic cooling control in a data center environment. Proceedings of the
Intersociety Conference on Thermal Phenomena (ITherm), Las Vegas, NV, pp.
593–600.
ERA. 2003. European workplace noise directive. Directive 2003/10/EC, European
Rotogravure Association, Munich, Germany.
Flomerics. 1999. Flovent version 2.1. Flomerics Ltd., Surrey, England.
Furihata, Y., H. Hayama, M. Enai, and T. Mori. 2003. Efficient cooling system for
IT equipment in a data center. Proceedings of the International Telecommuni-
cations Energy Conference (INTELEC), Yokohama, Japan, pp. 152–59.
Furihata, Y., H. Hayama, M. Enai, T. Mori, and M. Kishita. 2004a. Improving
the efficiency of cooling systems in data centers considering equipment char-
acteristics. Proceedings of the International Telecommunications Energy
Conference (INTELEC), Chicago, IL, pp. 32–37.
Furihata, Y., H. Hayama, M. Enai, and T. Mori. 2004b. The effect of air intake for-
mat of equipment gives to air conditioning systems in a data center. IEICE
Transactions on Communications 87(12):3568–75.
Guggari, S., D. Agonafer, C. Belady, and L. Stahl. 2003. A hybrid methodology
for the optimization of data center room layout. Proceedings of the Pacific
Rim/ASME International Electronics Packaging Technical Conference and
Exhibition (InterPack), Maui, Hawaii.
Hamann, H., J. Lacey, M. O’Boyle, R. Schmidt, and M. Iyengar. 2005. Rapid 3-
dimensional thermal characterization of large scale computing facilities.
Leonard, P. 2005. Thermal bus opportunity—A quantum leap in data center cool-
ing potential. ASHRAE Transactions 111(2):732–45.
Meuer, H. 2008. Top 500 supercomputer sites. http://www.top500.org/.
Nakao, M., H. Hayama, and M. Nishioka. 1991. Which cooling air supply system
is better for a high heat density room: Underfloor or overhead. Proceedings of
the International Telecommunications Energy Conference (INTELEC), Kyoto,
Japan, pp. 393–400.
NEMA. 2001. Recommended practice for installing metal cable tray systems. Report
VE 2-2001, National Electrical Manufacturers Associations, Rosslyn, VA.
Noh, H., K. Song, and S.K. Chun. 1998. The cooling characteristic on the air sup-
ply and return flow system in the telecommunication cabinet room. Proceed-
ings of the International Telecommunications Energy Conference (INTELEC),
San Francisco, CA, pp. 777–84.
Norota, M., H. Hayama, M. Enai, and M. Kishita. 2003. Research on efficiency of
air conditioning system for data center. Proceedings of International Telecom-
munications Energy Conference (INTELEC), Yokohama, Japan, pp. 147–51.
PG&E. 2006. High performance data centers—A design guidelines sourcebook.
Report developed by Rumsey Engineers and Lawrence Berkeley National
Laboratory for the Pacific Gas and Electric Company, San Francisco, CA.
Patel, C., C. Bash, and C. Belady. 2001. Computational fluid dynamics modeling
of high compute density data centers to assure system inlet air specifications.
Proceedings of InterPACK 2001 Conference, Kauai, Hawaii.
Patel, C., R. Sharma, C. Bash, and A. Beitelmal. 2002. Thermal considerations in
cooling large scale high compute density data centers. Proceedings of the
Intersociety Conference on Thermal Phenomena (ITherm), San Diego, CA.
Patel, C., C. Bash, R. Sharma, M. Beitelmal, and R. Friedrich. 2003. Smart cool-
ing of data centers. Proceedings of Interpack, Maui, Hawaii.
Patankar, S.V., and K.C. Karki. 2004. Distribution of cooling airflow in a raised-floor data center. ASHRAE Transactions, pp. 629–34.
Patterson, M., R. Steinbrecher, and S. Montgomery. 2005. Data centers: Comparing data center and computer thermal design. ASHRAE Journal, pp. 38–42.
Radmehr, A., R. Schmidt, K. Karki, and S. Patankar. 2005. Distributed leakage
flow in raised floor data centers. Proceedings of Interpack, San Francisco,
CA.
Rambo, J., and Y. Joshi. 2003a. Multi-scale modeling of high power density data
centers. Proceedings of Interpack, Maui, Hawaii.
Rambo, J., and Y. Joshi. 2003b. Physical models in data center air flow simula-
tions. Proceedings of the ASME International Mechanical Engineering Expo-
sition and Congress (IMECE), Washington, DC.
Rambo, J., and Y. Joshi. 2005. Reduced order modeling of steady turbulent flows
using the POD. Proceedings of the ASME Summer Heat Transfer Conference,
San Francisco, CA.
Index
A
AHU 66, 69, 71
airflow measurements 12, 28, 45, 62, 74, 78, 81, 116, 130
air-to-liquid heat exchangers 3, 4
air-to-water heat exchanger 88
ARI 145, 150
ASHRAE 19, 22, 35, 39, 47–48, 55, 62, 65, 69, 73–74, 81, 85, 93, 122, 124–126, 130, 136, 154, 157, 160, 168–169, 172–173

B
baffles 21, 163
balometer 40
Bernoulli effect 167
blockages 8–9, 23, 40, 45, 48, 63–64, 115
BMS 76, 116

C
cable cut-outs 10, 12, 18–19, 21, 24, 34–35, 38, 40, 60, 62
capped valves 166
CDU 98, 99
CFD 18, 23, 25, 40, 45, 54, 60, 95, 97, 102, 105, 123, 125, 131, 133–134, 158, 163, 166, 171
chiller 45, 125–126, 139–143, 145, 147–149, 151–152, 154–155, 169, 172
CRAC 1, 3–5, 7–11, 18–21, 23–25, 28, 34–36, 38, 40, 42–44, 46–47, 54, 56, 62–64, 69, 72, 75, 80, 89, 94, 114, 116–118, 122, 124–125, 128, 135–137, 140, 152–153, 158–169, 171–172
CRAH 75, 78, 80–81, 84–85, 89–90, 93

D
dampers 8, 46, 66, 71, 167
diesel generator 139, 147

E
EDP 139, 147
EMCS 154

F
FCU 137, 147–148, 150, 154, 155

H
humidification 28, 62, 166, 172

L
lighting 5, 10, 21, 28, 38, 43, 62, 76, 127, 130, 135, 141, 143–144, 148–149, 154

P
partitions 23, 25, 56, 161, 163–164, 168, 170
PDU 5, 7–8, 10–12, 21, 76–77, 80–81, 84, 128, 135
perforated tiles 1, 3, 5, 12, 18–19, 21, 34, 36, 38, 56, 62, 64, 66, 69, 71–72, 80, 85, 86, 117–118, 161–166, 168–170

R
raised-access floor 1–6, 8–9, 12, 18–19, 23, 34–35, 40, 42, 45–46, 56, 60, 62, 66, 69, 71, 74, 80, 85, 89, 94, 97, 99

T
TCO 171
temperature measurements 18, 35, 40, 47, 54, 62, 74, 81, 90–91, 116, 130, 133

U
UPS 76, 128, 135, 137, 139, 141, 143–144, 146, 149, 172

V
velometer 10, 18, 24, 34, 60
VFD 116–117, 122, 126, 155, 165, 172
VSD 154