
Energy and Digitalization

M.Sc. Sustainable Systems Engineering – Control and Integration of Grids


Dr. Stephan Schnez
Agenda

Target of this week‘s lecture:


• Where and why is the energy dissipated? Can we reduce the energy dissipation?
• Taking data centers as an example: How can the efficiency be increased by better design, intelligent operation and/or use of waste heat?

Tuesday, January 12th: Fundamentals
• Motivation for a lecture on “Energy & Digitalization”
• Moore’s law and Dennard scaling
• Information and physics: Landauer’s principle
• Limits to the energy efficiency of CPUs
• Summary

Wednesday, January 13th: Data centers
• Types of data centers
• Electrical layout
• Power usage effectiveness
• Cooling of the data center
• Reuse of waste heat
• Demand response with data centers
• Summary

Energy and Digitalization – Data Centers
M.Sc. Sustainable Systems Engineering – Control and Integration of Grids
Dr. Stephan Schnez
Types of Data Centers
Wikipedia:
“A data center … is a building, dedicated space within a building, or a group of buildings used to house computer systems and associated components,
such as telecommunications and storage systems. … Since IT operations are crucial for business continuity, it generally includes redundant or backup components
and infrastructure for power supply, data communication connections, environmental controls (e.g. air conditioning, fire suppression) and various security devices. A
large data center is an industrial-scale operation using as much electricity as a small town.”

Onsite/enterprise data center
• Housed within a company site/campus
• Changed or expanded as the company needs it; easy accessibility for general maintenance or troubleshooting
• Increased performance because the equipment is stored at the company location – data protection/security
• But: can be expensive and require a lot of resources. An onsite data center needs a reliable power supply and cooling system, a large network, a security system and more, all of which can be hard to support in-house.
• Typically from 10 cabinets/server racks upwards and as large as 40 MW+

Co-location data center
• One data center owner selling space, power and cooling to multiple enterprise and hyperscale customers in a specific location
• Offers interconnection to Software as a Service (SaaS) such as Salesforce, or Platform as a Service (PaaS) like Azure – an enabler for enterprises to scale and grow their business with minimum complexity at a low cost
• Enterprises rent e.g. 1 to 100 cabinets; a co-location data center can house hundreds of individual customers

Edge data center
• Placed near the areas they serve and managed remotely
• One of many in a complex network including a central enterprise data center
• Houses mission-critical data, applications, and services for edge-based processing and storage, providing e.g. low latency and increased capacity

Hyperscale data center
• Can house thousands or millions of servers, occupies at least 1000 m², and can draw 100 – 1000 MW
• Owned and operated by the company it supports, e.g. AWS, Microsoft, Google, Apple or Tencent, Alibaba, Huawei
• Necessary for cloud applications, big data storage and high-performance computing (HPC)
• Noticeable difference from enterprise to hyperscale is the high fiber count for fast connections utilized across the network

Data Center Tier Classification System
Uptime Institute

https://uptimeinstitute.com/tiers

Scheme of a Power Distribution System of a Data Center
Tier I (left) and Tier III (right)

E. Oró et al., „Energy efficiency and renewable energy integration in data centres. Strategies and modelling review “, link

Power Requirements and Power Usage Effectiveness of a Data Center

L. Brochard et al., „Energy-Efficient Computing and Data Centers“, link; numbers based on ASHRAE 2015–2020 server power and rack heat load trends

• Modern server racks consume up to ~40 kW, single servers up to ~10 kW.
• Big data centers with >100‘000 servers consume more than 100 MW, even exceeding 1 GW of electrical power for the IT equipment.
• Most electricity is dissipated as heat and must be removed by cooling. The cooling equipment (fans, chillers etc.) is the most important additional electrical consumer after the IT equipment itself.
• Additional power requirements due to lighting, backup power supply, UPS etc.
• Power usage effectiveness (PUE): most common ratio to describe the efficiency of a data center

Power Usage Effectiveness

Power usage effectiveness PUE: ratio of the total amount of energy used by a data center to the energy delivered to computing equipment

𝑃𝑈𝐸 = total facility energy / IT equipment energy = (IT equipment energy + non-IT equipment energy) / IT equipment energy

(Note: Sometimes power instead of energy is used. In most cases, only an average PUE over a relevant time scale is useful.)

PUE rating
1.0 ideal, but hypothetical
<1.2 very efficient
1.2 – 1.5 efficient
1.5 – 2.0 average
2.0 – 2.5 inefficient
>2.5 very inefficient
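
To make the definition above concrete, here is a minimal sketch in Python (my own, not from the lecture) that computes the PUE and maps it to the rating bands of the table:

```python
def pue(it_energy_mwh: float, non_it_energy_mwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT equipment energy."""
    return (it_energy_mwh + non_it_energy_mwh) / it_energy_mwh

def rating(pue_value: float) -> str:
    """Map a PUE value to the qualitative rating bands of the table above."""
    if pue_value < 1.2:
        return "very efficient"
    if pue_value < 1.5:
        return "efficient"
    if pue_value < 2.0:
        return "average"
    if pue_value < 2.5:
        return "inefficient"
    return "very inefficient"

# Example: 525,600 MWh/a of IT energy plus 315,360 MWh/a of overhead -> PUE = 1.6 ("average")
print(pue(525_600, 315_360), rating(pue(525_600, 315_360)))
```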

Power Usage Effectiveness

Figures: global average PUE over time and average PUE for all Google data centers.
A. Lawrence (Uptime Institute), „Data center PUEs flat since 2013“, link; Google data centers, „Efficiency“, link

Current world record (to my knowledge): the Cloud&Heat data center in Frankfurt with a PUE of 1.014 (link)

Power Usage Effectiveness
Effect on Electricity Costs

                                        PUE 1.6        PUE 1.1
Power of IT equipment at full load      100 MW         100 MW
Average load                            60%            60%
Cost of electricity                     180 €/MWh      180 €/MWh
Annual electricity consumption          ~840‘000       ~580‘000   MWh/a
Annual cost of electricity              ~150           ~100       M€/a

• Yearly savings of ~50 M€ in this particular example with the lower PUE
  – Co-location data centers with higher PUE pass these costs on to their customers, who are willing to pay, and use proven off-the-shelf components
  – Hyperscalers cannot do that → strongly incentivized to minimize their OPEX
• Note: Electricity costs would probably be lower for such a large customer.
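
A short sketch (my own) that reproduces the consumption and cost figures of this example from the stated inputs:

```python
HOURS_PER_YEAR = 8760

def annual_consumption_and_cost(pue: float, it_power_mw: float = 100.0,
                                avg_load: float = 0.6,
                                price_eur_per_mwh: float = 180.0) -> tuple[float, float]:
    """Return (annual facility energy in MWh/a, annual electricity cost in M EUR/a)."""
    it_energy_mwh = it_power_mw * avg_load * HOURS_PER_YEAR  # IT equipment energy
    total_energy_mwh = it_energy_mwh * pue                   # total facility energy
    return total_energy_mwh, total_energy_mwh * price_eur_per_mwh / 1e6

for pue in (1.6, 1.1):
    energy, cost = annual_consumption_and_cost(pue)
    print(f"PUE {pue}: ~{energy:,.0f} MWh/a, ~{cost:.0f} M EUR/a")
# PUE 1.6: ~840,960 MWh/a, ~151 M EUR/a
# PUE 1.1: ~578,160 MWh/a, ~104 M EUR/a  -> difference of roughly 50 M EUR/a
```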

Cooling the Data Center
Computer Room Air Handler – CRAH

• The cooling equipment is the biggest energy consumer of a data center after the IT equipment itself.
• Conventional air cooling is the most common cooling method.
• Computer-room air handlers (CRAHs) at the periphery of the machine room (server racks, storage racks etc.), based on fans, cooling coils, and a water-chiller system
• Refrigeration system usually outside the machine room, including chillers, cooling towers, economizers for free cooling etc.
• Perforated raised-floor tiles through which the cold air enters the machine room
• The setup shown here, with cold and hot air mixing, will have a very poor efficiency with a PUE >> 1, typically around 2.

L. Brochard et al., „Energy-Efficient Computing and Data Centers“, link;

Cooling the Data Center
Cold and Hot Aisle Containment
• Aisle containment is required to prevent the mixing of hot and cold air and to increase the cooling efficiency significantly
  ➔ PUE ~ 1.5 – 1.6
• Both hot-aisle containment (as shown to the left) and cold-aisle containment have advantages, with hot-aisle containment typically being preferable:
  – The server room itself is kept cool (typical design temperature is 24°C), so that working hours of personnel in that area are not restricted due to high temperatures
  – Higher outlet temperatures are possible, so that more economizer hours/free cooling are possible
    ➔ PUE ~ 1.1 or even lower with mainly free cooling
  – Up to 15% reduction of annualized PUE for hot-aisle vs. cold-aisle containment
• Temperature and operation classification of data centers (Mission Critical Facilities, Data Centers, Technology Spaces and Electronic Equipment, ASHRAE Technical Committee 9.9):
  – Typical inlet temperatures around 27°C, keeping the processor at around 80 – 90°C (typically max. 85°C)
  – Can go up to 40°C or even 45°C (potentially in conflict with OSHA)

Cooling the Data Center
More Efficient Cooling Methods

• Air cooling becomes more inefficient for higher IT power densities and eventually unfeasible (figure: ~40 kW rack vs. ~67 kW rack)
• This limit lies somewhere between 30 and 50 kW per server rack.
• Liquid cooling with (typically) water:
  – Direct-water cooling (DWC) of active server components, allowing for water inlet temperatures of up to 50°C
    ➔ PUE ~ 1.1
  – Indirect water cooling with a rear-door heat exchanger (RDHX) and cooling of the servers with air
    ➔ PUE ~ 1.3
• Active research:
  – Microchannel cooling in the chip die
  – Two-phase cooling
  – Transforming heat with adsorption chillers

ASHRAE Technical Committee 9.9, „Water-Cooled Servers – Common Designs, Components, and Processes“, link
Figure: Lenovo ThinkSystem SD650 with direct-water cooling (inlet water temperature of up to 50°C), link

Recap: What is the target?
Reduced total energy consumption and increased efficiency

Total energy = IT energy + non−IT energy = IT energy × 𝑃𝑈𝐸

Increased efficiency by design:
• New computing architecture/paradigm, e.g. in-memory computing, quantum computing
• More efficient algorithms

Increased efficiency by e.g. better cooling:
• Hot and cold aisles
• Liquid cooling on rack/server level
• Etc.

Further energy reductions:


• Consider the data center as an element in an energy grid and embedded in an environment, not as an independent entity
➔ This closes the loop with the main part of this lecture: Control and Integration of Grids

Two examples in the following


• Sector coupling: Use waste heat for a district heating network (DHN)
• Optimized operation with load scheduling

Reuse of Waste Heat

Own consumption:
• Space and floor heating
• Domestic hot water
• Melting snow
• Producing cooling energy through absorption/adsorption refrigeration

External processes:
• Drying biomass
• Preheating water in power plants
• District heating
• Electricity production (organic Rankine cycle, thermoelectric generator)
• Water desalination

Two most important issues in waste-heat utilization:


1. Heat demand
Inefficient long-distance transport of heat (in contrast to electricity) → local heat demand required
2. Heat quality
Low-temperature heat with low exergy content: e.g. waste heat at 35°C (from an air-cooled data center) has an exergy content of 8%, compared to 15% for waste heat at 60°C (from a water-cooled data center), both relative to a 10°C ambient temperature (see the cross-check below).
➔ Strong impact on profitability of using waste heat
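
These percentages are consistent with the Carnot factor of heat at temperature T relative to the ambient temperature T₀; as a cross-check (temperatures in kelvin, 10°C ambient = 283 K):

\[
\frac{\text{exergy}}{\text{heat}} = 1 - \frac{T_0}{T}, \qquad
1 - \frac{283\,\mathrm{K}}{308\,\mathrm{K}} \approx 0.08, \qquad
1 - \frac{283\,\mathrm{K}}{333\,\mathrm{K}} \approx 0.15
\]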

Reuse of Waste Heat for District Heating
Sector Coupling: Coupling of the Electricity and Heating Sector via a Data Center

Scheme: Data center → low-grade waste heat (air-cooled: T ~ 25–35°C; water-cooled: T ~ 40–60°C) → Heat pump (not required for a low-T DHN) → high-grade heat (T > 75°C) → District heating network
Electricity is supplied both to the data center and to the heat pump.

Notes:
• For low-temperature heating networks („anergy grids“), a central heat pump for raising the temperature of the waste heat is not required.
• Cooling of the data center may also be achieved if the district heating network provides a cooling line as well.

Reuse of Waste Heat for District Heating

PUE in a first approximation when waste heat is reused with a heat pump:

𝑃𝑈𝐸 = Total energy / IT energy ≈ (IT load + cooling load) / IT load ≈ (IT load + IT load/𝐶𝑂𝑃) / IT load = 1 + 1/𝐶𝑂𝑃

Typical COP = 2 – 7 for heat pumps in such an application: E.g. COP = 4 ➔ PUE > 1.25
Better metric: Energy reuse effectiveness

𝐸𝑅𝐸 = (Total energy − reused energy) / IT energy ➔ 0 ≤ 𝐸𝑅𝐸 ≤ 𝑃𝑈𝐸
IT energy

Even a perfect PUE = 1 does not capture whether the waste heat is simply dissipated into the environment or reused.
• ERE is an alternative metric designed for exactly that purpose (a numerical sketch follows below).
• ERE = 0: all the dissipated energy is reused ➔ even for a bad PUE, an ERE = 0 is possible
• ERE = PUE: no energy is reused
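
A minimal numerical sketch (my own) tying the two formulas together; the 80% reused fraction is an assumed illustrative value, not a number from the lecture:

```python
def pue_with_heat_pump(cop: float) -> float:
    """First-approximation PUE when the cooling load is provided by a heat pump: 1 + 1/COP."""
    return 1.0 + 1.0 / cop

def ere(pue: float, it_energy_mwh: float, reused_energy_mwh: float) -> float:
    """Energy reuse effectiveness: (total energy - reused energy) / IT energy."""
    total_energy_mwh = pue * it_energy_mwh
    return (total_energy_mwh - reused_energy_mwh) / it_energy_mwh

it_energy = 1000.0                        # MWh over some reference period (arbitrary)
pue = pue_with_heat_pump(cop=4)           # -> 1.25, as in the example above
reused = 0.8 * it_energy                  # assume 80% of the dissipated heat is sold to a DHN
print(pue, ere(pue, it_energy, reused))   # 1.25 0.45
```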

Reuse of Waste Heat for District Heating

• Conventional district heating networks (DHN) operate at >75°C.
  ➔ Waste heat from the data center must be raised to a higher temperature, typically with a heat pump
• The return line of the DHN can be used as a cold source for cooling the data center.
• Water-cooled data centers are better suited for waste-heat recovery:
  – More efficient heat transfer from a liquid than from a gas
  – COP of the heat pump is higher because of the higher cooling-water temperature:
    • Air-cooled with 35°C outlet temperature: 𝐶𝑂𝑃 = (273 K + 75 K) / (75 K − 35 K) = 348/40 ≈ 9
    • Water-cooled with 60°C outlet temperature: 𝐶𝑂𝑃 = (273 K + 75 K) / (75 K − 60 K) = 348/15 ≈ 23
• In practice, typical COPs are around 2 – 7 (see the sketch below).

Figure: configuration of a waste-heat recovery system for a remote air-cooled data center which utilizes waste heat in DH. From: M. Wahlroos et al., „Future views on waste heat utilization – Case of data centers in Northern Europe“, link
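
The two COP values above are ideal (Carnot) limits; a short sketch (my own) reproducing them:

```python
def carnot_cop_heat_pump(t_sink_c: float, t_source_c: float) -> float:
    """Ideal (Carnot) heat-pump COP for lifting heat from t_source_c to t_sink_c (deg C)."""
    return (t_sink_c + 273.0) / (t_sink_c - t_source_c)

# DHN supply temperature of 75 deg C, as assumed on the slide
print(round(carnot_cop_heat_pump(75.0, 35.0), 1))  # 8.7  -> "~9"  (air-cooled data center)
print(round(carnot_cop_heat_pump(75.0, 60.0), 1))  # 23.2 -> "~23" (water-cooled data center)
# Real heat pumps reach only a fraction of the Carnot limit, hence the practical COPs of 2 - 7.
```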

Reuse of Waste Heat for District Heating

Energetic benefits are obvious, but the business model is not straightforward:

• For data-center operators:
  – Increased CAPEX (for the heat pump) and OPEX (electricity consumption of the heat pump → higher PUE)
  – But a new revenue stream: selling heat to the DHN operator
• Data-center operators and DHN operators have different objectives (differing expectations of financial outcomes, reliability aspects etc.)

➔ Pilot cases demonstrate profitability
➔ No standard/established business models

Figure: configuration of a waste-heat recovery system for a remote air-cooled data center which utilizes waste heat in DH. From: M. Wahlroos et al., „Future views on waste heat utilization – Case of data centers in Northern Europe“, link

Reuse of Waste Heat for District Heating
Anergy Network Friesenberg of the Family Cooperative Zurich, Switzerland

By Micha L. Rieser, Attribution, https://commons.wikimedia.org/w/index.php?curid=35596768

Reuse of Waste Heat for District Heating
Anergy Network Friesenberg of the Family Cooperative Zurich, Switzerland

Anergy grids are more commonly called „cold/low-temperature district heating“ or „5th generation district heating and cooling“.

Anergy network Friesenberg, Zurich

• Bidirectional anergy network with two main lines of ~3 kilometers:
  – Warm line: 8 – 28°C
  – Cold line: 4 – 24°C
• Direct integration of the waste heat of two data centers (Credit Suisse and Swisscom) with ~4.5 MW at ~24°C
• Cooling of the data centers with ~16°C water from the cold line and from the depleted underground seasonal heat storage during the warm season
• Central energy plants with a total heat capacity of ~10 MW (heat pumps)
• Target: carbon-emission reductions of >90% for the heating demand of the cooperative

F. Ruesch et al., “Potential and limitations of using low-temperature district heating and cooling networks for direct cooling of buildings”, link

Reuse of Waste Heat for District Heating
Anergy Network Friesenberg of the Family Cooperative Zurich, Switzerland

Benefits of anergy grids


• Can be operated with only heat pumps and renewable electricity
• Lower temperature (in contrast to conventional DHN): Reduced heat
losses, insulation and space requirements as well as costs
• Local heat generation at the customer’s site at the required
temperature with local heat pumps
• Use of waste heat of data centers in warm line and cooling of data
centers via cold line
• No heat pumps required at the data centers for injection of waste
heat into anergy grid → direct injection of low-T waste heat

Benefits of anergy grids in comparison to conventional DHN are difficult to quantify:

• Only limited experience, modelling/simulations not straightforward
• Case-by-case analysis
• Maybe an essential building block for the energy transition, in particular in colder countries

Figure labels: anergy grid, underground seasonal heat storage, central energy plants with heat pumps.
Anergy network Friesenberg of the Family Cooperative Zurich, Switzerland: “Using waste heat from data centers for heating an existing neighborhood through an anergy network and heat pumps”, link. More information in German: link

Flexibility through Demand Response in Data Centers

Demand-side management (DSM) and demand-side response (DSR)/demand response (DR):


• Different definitions in the literature; sometimes no distinction is made between the two
• Alternative definitions/explanations, e.g.:
– (English) Wikipedia: „Energy demand management, also known as demand-side
management (DSM) or demand-side response (DSR), is the modification of consumer
demand for energy through various methods such as financial incentives and behavioral
change through education. “
– Demand response (DR) is associated with power management (e.g. reducing peak loads)
and demand-side management (DSM) is associated with energy management (e.g. energy
reduction to achieve carbon-emission savings).
– In Germany, DSM is sometimes used in the context of the provision of operating reserve (for
frequency control), and DR as a voluntary optimization of the electricity system based on
price signals (e.g. via dynamic pricing).
• Whatever the precise terminology is – in an energy system with a high share of fluctuating
generators (wind and solar power), the demand side (i.e. residential and/or industrial
consumers) has to be flexible to adapt to the ever-changing supply of renewable electricity.
• “Flexibility in the energy system” is one of Prof. Weidlich’s main research areas.
Figure: some possibilities of DSM/DR

Flexibility through Demand Response in Data Centers
Flexibility options in data centers

• Workload
– Consolidation: run as many virtual machines (workloads) as possible on a subset of the total servers of a data center so that idle resources can be switched off (a minimal sketch follows after this list)
– Shifting: rescheduling/postponing of workloads for reducing utilization of ICT
resources
– Migration: geographically moving a workload from one data center to another
– Dynamic voltage and frequency scaling (DVFS): automatic underclocking of CPU;
particularly useful if workload is not CPU-bound
• Cooling system
– Increase temperature for limited times to reduce cooling power consumption
– Virtual energy storage: increase/decrease operating temperature in times of
limited/excess electricity supply (e.g. by renewable generation)
• Uninterruptible power supply (UPS)
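
A minimal consolidation sketch (my own; the utilization numbers are hypothetical): pack workload utilizations onto as few servers as possible with a greedy first-fit-decreasing heuristic, so that idle machines can be switched off:

```python
def consolidate(loads: list[float], server_capacity: float = 1.0) -> list[list[float]]:
    """Greedy first-fit-decreasing packing of workload utilizations onto servers."""
    servers: list[list[float]] = []
    for load in sorted(loads, reverse=True):
        for assigned in servers:
            if sum(assigned) + load <= server_capacity:  # fits on an already-open server
                assigned.append(load)
                break
        else:
            servers.append([load])                       # open a new server only when necessary
    return servers

# Eight workloads that currently idle on eight separate servers fit onto three:
print(len(consolidate([0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1, 0.1])))  # 3
```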

The fully automated IT infrastructure of data centers enables demand response inside data centers without any human intervention.
➔ Strongly desired requirement for any (automated) energy management system

Figure: potential for demand response in the EU scenario of 90 TWh data center energy use in 2030 (23 GW peak load, 10 GW average load). C. Koronen et al., „Data centres in future European energy systems—energy efficiency, integration and policy“, link

Flexibility through Demand Response in Data Centers
Example of Google
Google‘s claims and targets
• Carbon neutral since 2007
• Since 2017, energy usage matched with 100 percent renewable energy purchases
• Next step: 24x7 carbon-free energy everywhere at Google’s data centers
➔ Integrate data centers with solar and wind power

Google’s carbon-intelligent computing platform for its hyperscale data centers:


• Goal: shift the timing of many compute tasks to when low-carbon power sources are available
• No additional computer hardware and no impact on performance of Google services, but only
shifting of non-urgent compute tasks
• Two types of forecasts for every day:
– Google’s partner Tomorrow (www.tmrow.com): average hourly carbon intensity of local
electrical grid
– Google internal forecast: hourly power resources needed by the data center for the same
period
– Align and optimize compute tasks with times of low-carbon electricity supply.
• First results demonstrate the possibility of carbon-aware load shifting, but no quantitative data given
• Next steps: Move compute tasks geographically as well
  ➔ Shift load in both time and location to maximize the reduction in grid-level CO2 emissions

A. Radovanovic (Google), „Our data centers now work harder when the sun shines and wind blows“, link
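
A toy sketch (my own; this is not Google's actual platform) of the basic idea: schedule deferrable work into the hours with the lowest forecast carbon intensity:

```python
def schedule_deferrable_hours(carbon_forecast: list[float], n_hours: int) -> list[int]:
    """Return the n hours with the lowest forecast carbon intensity (gCO2/kWh) for deferrable work."""
    by_intensity = sorted(range(len(carbon_forecast)), key=lambda h: carbon_forecast[h])
    return sorted(by_intensity[:n_hours])

# Hypothetical 24-hour carbon-intensity forecast (values invented for illustration):
forecast = [420, 410, 400, 390, 380, 300, 220, 150, 120, 110, 100, 105,
            110, 130, 180, 250, 320, 380, 420, 450, 460, 455, 440, 430]
print(schedule_deferrable_hours(forecast, 6))  # [8, 9, 10, 11, 12, 13] -> the sunniest midday hours
```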

Flexibility through Demand Response in Data Centers
Some comments

Since operating costs of data centers are mostly electricity costs, operators are strongly incentivized to
• Use lowest-cost electricity → more and more renewables
• Increase efficiency (e.g. of cooling) → moving to cold climates, e.g. Scandinavia, to increase the hours with free cooling
• Explore DR methods
• However, this strongly depends on the business model of the data center operator:
– Hyperscalers (Google, Amazon, Microsoft…) are trailblazers because they operate the data centers for their purpose.
– Co-location data center operators must be more conservative (no liquid cooling or DR methods initially).

The possibilities of DR with data centers are quite obvious; however, implementations are rare:
• Academic studies with proof-of-concepts demonstrators on server level
• Some pioneering hyperscalers (e.g. Google) have implemented first pilots only very recently (Google‘s blog entry dates back to April 22, 2020)
• The potential of DR with data centers has not yet been quantified in demonstrations, or the results have not been published
• Business model unclear for co-location data center operators

Outlook
Future net-zero data center
• Integration with renewable electricity generation
• H2 electrolyzer, storage, fuel cell and backup power
• Adsorption chillers for reuse of waste heat
• Energy management system which enables DR of data centers
• Integration into (low-temperature) district heating networks

Legacy data center


• Diesel gen sets for back-up power
• Complete dissipation of waste heat

L. Brochard et al., „Energy-Efficient Computing and Data Centers“, link;

Summary

• Data centers constitute a (rather new) electricity-intensive industry with potentially sharply rising electricity consumption.
• Moore’s law and Dennard scaling led to tremendous increases in energy efficiency.
• (High) electricity consumption is mainly and fundamentally due to the physics of the semiconductor building blocks (e.g. transistors) and to the
computing architecture (i.e. von-Neumann architecture) as well as algorithmic requirements.
• Almost all energy in a data center is dissipated as low-grade heat
• Efficient cooling is, thus, key: air cooling with containment aisles, free cooling, and liquid cooling (in particular for hyperscalers)
• The next step is to embed the data center into the wider energy system:
– Sector coupling by reusing waste heat
– Integration of renewable generation
– Demand response
– Hyperscalers are trailblazers in these fields because their OPEX is mainly determined by electricity costs and they own and operate the data
centers (in contrast to co-location data centers)
– All these are rather new developments (<10 years) and many questions are still open:
• Business models
• Quantification of benefits of DR/integration into DHN etc.
• More generally: will data centers/ICT industry pose an underestimated risk to a successful energy transition?

Some open questions

• What is the data center energy/electricity consumption globally/in Europe/in Germany? How will this develop?
– Strongly diverging predictions
– Bottom-up model for prediction is required, not just extrapolation of historic data
– Benchmarking with existing data center operators
• Quantification of demand-response potential of a data center and development of business models
• Development of EMS/DR software for data centers
• Business-model analysis for reuse of waste heat of data centers: low- vs. high-T district heating networks
• Sizing of H2 electrolyzers/fuel cells and (on-site) renewable generation for reliable data-center operation

Thank you for your attention!

Questions?

Proposals?

Feedback?

→ Feel free to contact me: schnez@posteo.de


Backup
Summary of Cold-Aisle Containment (CACS) vs. Hot-Aisle Containment (HACS)

An economizer is a part of the outdoor system of an HVAC system for commercial buildings, most often mounted on the roof. The economizer evaluates outside air temperature and even humidity levels. When the exterior air conditions are appropriate, it uses the outside air to cool the building. HVAC economizers use logic controllers and sensors to get an accurate read on outside air quality. As the economizer detects the right level of outside air to bring in, it utilizes internal dampers to control the amount of air that gets pulled in, recirculated and exhausted from the building.

J. Niemann et al., „ Hot-Aisle vs. Cold-Aisle Containment for Data Centers, White Paper 135“, link

Backup
Levelized Cost of Waste Heat from a Data Center for a DHN

Assumptions:
• IT equipment power: 3.5 MW
• COP of heat pump: 4
• Electrical power of heat pump: 0.9625 MW_el
• Cost of heat pump: 500,000 EUR/MW_el
• Electricity price: 180 EUR/MWh
• Average load of data center: 0.5
• Discount rate: 0.03
• Ratio of maintenance costs: 0.1

year  discount factor  cost [EUR]  discounted cost [EUR]  energy produced [MWh]  discounted energy [MWh]
0     1.00             1,240,085   1,240,085              16,863                 16,863
1     1.03             806,960     783,456                16,863                 16,372
2     1.06             806,960     760,637                16,863                 15,895
3     1.09             806,960     738,483                16,863                 15,432
4     1.13             806,960     716,974                16,863                 14,983
5     1.16             806,960     696,091                16,863                 14,546
6     1.19             806,960     675,816                16,863                 14,122
7     1.23             806,960     656,132                16,863                 13,711
8     1.27             806,960     637,022                16,863                 13,312
9     1.30             806,960     618,468                16,863                 12,924
10    1.34             806,960     600,454                16,863                 12,548
11    1.38             806,960     582,965                16,863                 12,182
12    1.43             806,960     565,986                16,863                 11,827
13    1.47             806,960     549,500                16,863                 11,483
14    1.51             806,960     533,496                16,863                 11,148
15    1.56             806,960     517,957                16,863                 10,824
16    1.60             806,960     502,871                16,863                 10,508
17    1.65             806,960     488,224                16,863                 10,202
18    1.70             806,960     474,004                16,863                 9,905
19    1.75             806,960     460,198                16,863                 9,617
20    1.81             806,960     446,794                16,863                 9,337
SUM                                13,245,612                                    267,742

LCOH = 13,245,612 EUR / 267,742 MWh ≈ 49.47 EUR/MWh ≈ 0.05 EUR/kWh

• Typical cost of heat from a DHN in Germany: 6 – 10 ct/kWh
• LCOH here: 5 ct/kWh
➔ Assumptions very rough, but business case not straightforward
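
A compact sketch (my own) that reproduces the LCOH of the spreadsheet above from the stated assumptions:

```python
def lcoh_waste_heat(cop: float = 4.0, hp_power_mw_el: float = 0.9625,
                    hp_capex_eur_per_mw_el: float = 500_000.0, el_price_eur_mwh: float = 180.0,
                    avg_load: float = 0.5, discount_rate: float = 0.03,
                    maintenance_ratio: float = 0.1, lifetime_years: int = 20) -> float:
    """Levelized cost of heat (EUR/MWh) delivered by a heat pump upgrading data-center waste heat."""
    capex = hp_capex_eur_per_mw_el * hp_power_mw_el                # 481,250 EUR (year 0)
    el_cost = hp_power_mw_el * avg_load * 8760 * el_price_eur_mwh  # ~758,835 EUR/a
    annual_cost = el_cost + maintenance_ratio * capex              # ~806,960 EUR/a (years 1-20)
    heat_mwh = cop * hp_power_mw_el * avg_load * 8760              # ~16,863 MWh/a of heat delivered
    costs, energy = capex + el_cost, heat_mwh                      # year 0, undiscounted
    for year in range(1, lifetime_years + 1):
        d = (1 + discount_rate) ** year
        costs += annual_cost / d
        energy += heat_mwh / d
    return costs / energy

print(lcoh_waste_heat())  # ~49.5 EUR/MWh, i.e. ~5 ct/kWh, as in the table above
```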

