
UNIT-2

Data Centre Requirements


• Data Centre Prerequisites
• Required Physical Area for Equipment and Unoccupied Space
• Required Power to Run All the Devices
• Required Cooling and HVAC
• Required Weight
• Required Network Bandwidth
• Budget Constraints
• Selecting a Geographic Location
• Safe from Natural Hazards
• Safe from Man-Made Disasters
• Availability of Local Technical Talent
• Abundant and Inexpensive Utilities Such as Power and Water
• Selecting an Existing Building (Retrofitting)
• Tier standard
Introduction

Due to advances in technology, data centres have improved greatly in terms of their usage and functionality. A few decades ago, a handful of servers would be crammed into a room to provide IT services. Today, data centres are advanced facilities with rooms full of servers that run round the clock.

An enterprise's data centre has to meet the needs of dynamic business demands. Further, a data centre works with different generations of products. If you look at the variety of technologies used across the software, network, storage and server silos, you will understand the level of complexity and difficulty in managing such a setup. A poor data centre design would demand a costly upgrade. Therefore, a data centre infrastructure should be designed to adapt to changes in technology.

A data centre design requires thorough planning related to the location, hardware and
building infrastructure. Data centre facilities demand precise industrial design and engineering
requirements so as to meet the needs for fire-protection, power provisioning, stand-by power,
cooling, physical security and layout.

Due to the nature of functions that a data centre provides, certain considerations must be
factored in while designing and expanding a data centre. They are:

• Requirement for physical area and unoccupied space

• Power requirements

• Cooling requirements and Heating, Ventilation, and Air Conditioning (HVAC)

• Physical load bearing capacity of the floor

• Network bandwidth requirements

This unit discusses the key design considerations that need to be factored in while building a data centre, such as the physical space requirement, the power and cooling requirements, and the determination of load bearing capacity and network bandwidth requirements.

Requirement for Physical Area and Unoccupied Space

The following two examples help in understanding the physical capacity of a data centre:

• Available space for storage devices, network devices, server machines, power panels, breakers and HVAC

• Available floor that can support the equipment weight


Fig. 2.1 Data Centre Servers

The number and types of equipment (servers, storage and network devices) placed in the data centre have the most impact on the size of the data centre required. The equipment can be placed on racks or directly on the floor, based on its size.

Small-sized equipment such as small servers and storage devices is kept within racks. As these devices are stacked vertically within the racks, a lot of space can be saved.

If the equipment is large, it can be placed directly on the floor. The EMC Symmetrix storage array (dimensions: 75 × 9 × 36 inches) or the IBM Enterprise Storage Server (dimensions: 75 × 55 × 36 inches) are good examples of this.

Racks need to be selected based on the number of devices to be placed within them. Racks are available in varied sizes. Typically, the dimensions of a full-height rack are 84 × 22 × 30 inches (height × width × depth), with the internal dimensions being 78 × 19 × 28 inches.
Figure 2.2 shows the data centre racks.
Fig. 2.2 Data Centre Racks

To measure the height of equipment placed in racks, a unit called U (rack unit) is used. 1U is equivalent to 1.75 inches. Accordingly, if the internal height of a rack is 78 inches, then the maximum equipment height it can accommodate is 44U (78 / 1.75 ≈ 44.6, rounded down to whole units). You should therefore know the height, in terms of U, of each device that will be placed in the racks.
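
To illustrate, here is a minimal Python sketch of this rack-unit arithmetic; the function name is illustrative, not taken from any standard or library:

    RACK_UNIT_INCHES = 1.75  # 1U = 1.75 inches

    def max_rack_units(internal_height_inches):
        # Whole rack units that fit within the rack's internal height
        return int(internal_height_inches // RACK_UNIT_INCHES)

    print(max_rack_units(78))  # -> 44, so a 78-inch internal height holds up to 44U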

In a data centre, racks and large servers occupy the most space, sometimes between 50% and 60% of the total. The remaining 40%-50% of the space is used for:

• Aisles and ramps

• Space between rows of racks and next to walls

• Perforated tiles (required so that the racks get the cold air from the sub-floor plenum)

• Open space to exhaust air from the racks to the HVAC plenum

While deciding the amount of space required to set up a data centre, one should also plan additional space for future expansion. If future expansion is not considered, it becomes difficult to accommodate more equipment in a live data centre: expanding a live data centre requires a considerable amount of renovation to the wiring, HVAC and electricity points.

While calculating the overall area, consider how many additional servers may occupy the existing space in the coming years. Though one cannot predict the exact increase in the amount of equipment, it is important to make prudent decisions now rather than pay for expensive remodelling later.

Requirement for Power to Run all the Devices

Typically, the devices in a data centre use rack-mounted or internal Alternating Current (AC) and Direct Current (DC) power supplies. The data centre receives power from the supply grid in the form of AC power. The same AC power is then distributed through the electrical devices that are part of the data centre infrastructure. However, most hardware devices and backup devices require DC power.

The key devices which are used in electrical power distribution within a data centre
include:

• Switchboard: Directs electricity from multiple power supply sources to the devices that need it.

• Switchgear: Consists of circuits and fuses used for protecting and isolating the
electrical equipment.

• Backup power sources: Such as generators that switch on automatically in case of a power outage.

• Power Distribution Unit (PDU): Such as rack-mounted power strips that distribute electricity to the devices in a data centre. The power strips mounted on a rack are shown in Figure 2.3.

• Auxiliary conditioning equipment: Such as line filters and capacitor bank to filter out
the undesirable frequencies.
Fig. 2.3 PDU for a Rack

A data centre should always have power running without interruption. Therefore, it is important to install Uninterruptible Power Supplies (UPSs) so that the devices are protected against power failures. The UPS turns on as soon as the electricity fails, switching the current load to a set of batteries.

UPS can be of two types:

• Online UPS: Power is filtered through the batteries at all times.

• Switchable UPS: Power from batteries is used only when power fails.
Fig. 2.4 UPS

The UPS models can also vary depending on their load carrying capacity and duration of
power supply they can provide. Some UPSs can provide power up to an hour, some even longer.
So, using UPSs is a good solution for power outages that are of short duration. However, if the
data centre location experiences frequent long-duration power outages, then generators would
need to be installed.

Before you install the power supply devices, you should know how much power is
required for the devices of the data centre. This information is important to decide the power
supply needs such as:

• Number of breakers

• Outlet types

• Single-phase or three-phase connections

• Data centre wiring layout

• Watts required per rack


As retrofitting power conduits and wiring is difficult, it is important to keep future expansion needs in mind while deciding the required power capacity. Watts are used as the unit of power measurement.

Use the following formula to calculate the power (watts) required for each equipment:

Power (watts) = volts x amperes

Where,

Ampere = unit of electric current

Volt = measure of electric potential between two points

So, when 200 volts is applied to a circuit that draws five amperes of electric current, it dissipates 1,000 watts of power. Voltage is a very important measure when considering power needs. Think of voltage as akin to water pressure: if the water pressure within a pipe is too high, the pipe may burst. In the same way, if the supplied power does not match what the equipment is rated for, the equipment will be damaged. Hence, the power each piece of equipment requires must be considered while planning the electrical work.
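
As a minimal sketch, the formula can be applied per device and summed per rack. The device figures below are made-up examples, not values from the text:

    def device_watts(volts, amperes):
        # Power (watts) = volts x amperes
        return volts * amperes

    rack_devices = [
        ("server-1", 200, 5.0),   # 1000 W
        ("server-2", 120, 4.0),   # 480 W
        ("switch",   120, 1.5),   # 180 W
    ]

    total = sum(device_watts(v, a) for _, v, a in rack_devices)
    print(f"Watts required for this rack: {total}")  # -> 1660.0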

Required Cooling and HVAC

In data centres, the Heating, Ventilation, and Air Conditioning (HVAC) system controls
the ambient environment (temperature, humidity, air flow, and air filtering). Therefore, HVAC
must be planned for and operated along with other data centre components. The selection of an
HVAC contractor is important while designing a data centre.

To keep the devices cool and maintain low humidity within the data centre, HVAC is
required. Like power, the HVAC system is very difficult to retrofit. Therefore, the HVAC
system must have enough cooling capacity to meet present and forecasted future needs.

Cooling requirements are measured in British Thermal Units (BTUs) per hour. The
capacity of the cooling equipment such as air conditioners is provided by the HVAC
manufacturer in terms of BTUs. This measurement is important, as you need to calculate the
consolidated BTUs per hour required for running all the equipment within the data centre.

e.g.: Consider that the sum of BTUs for all equipment is 500,000. In this case, if the HVAC delivers the cold air at 80% efficiency, then it should have a rating of 500,000 / 0.80 = 625,000 BTUs per hour.
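
A one-line Python sketch of this sizing rule (the function name is illustrative):

    def required_hvac_btu_per_hour(total_equipment_btu, delivery_efficiency):
        # The required HVAC rating grows as delivery efficiency drops
        return total_equipment_btu / delivery_efficiency

    print(required_hvac_btu_per_hour(500_000, 0.80))  # -> 625000.0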
The air-flow pattern within a data centre required for optimum cooling of the equipment
is maintained by the under-floor pressure. This pressure depends on the HVAC unit and the
pattern of how the solid and perforated tiles are placed on the raised floor of the data centre.

Most data centres use raised flooring or a drop-down ceiling. The space below the flooring, or between the drop-down ceiling and the structural ceiling, is called a plenum. This space allows smooth air circulation around the communication cabling and the HVAC equipment. Fig. 2.5 shows how the plenum allows cool air to flow towards the equipment through the perforated flooring and how the hot air escapes through the ceiling.

Fig. 2.5 Plenum for Air Flow Circulation

Required Weight

Data centres pack ever more equipment into the space they have. Thus, the total weight of the racks, cabinets and devices can have a real impact on the floor structure, as it can be heavier than what most floors support. Even though equipment is getting sleeker, that only leads to more equipment being packed into the racks. Further, enterprise-level servers are getting heavier, which in turn stresses the flooring. Therefore, it is important to consider the floor loading capacity while designing a data centre.

To calculate the total weight of the equipment on the floor, you should calculate the
following:

• Weight of each empty rack and cabinet without any device in it. To get the total
weight of the racks and cabinets, add the weight of all the empty racks and cabinets
present.
• Approximate weight of each device that would go inside the rack. To get the total
weight of all the devices, add the weight of all the devices present.

• Approximate weight of the large servers that will be placed directly on the floor.

The total of the above listed components will give you a fair idea of the weight that the floor will have to support. Once you know the approximate weight, check whether the existing floor can sustain it. This is important to determine whether any component of the flooring, such as the tile quality or support grid, would need to be changed. There are three load impact points. They are:

• Maximum weight that the data centre floor supports

• Maximum weight that a single tile supports

• Maximum point load that a tile can support

Fig. 2.6 shows the three load impact points that need to be considered while determining if the
floor will be able to support the weight.

Fig. 2.6 Floor Load Support Considerations

• Maximum weight that the data centre floor supports: These details are required to help you decide whether the raised floor will be able to support the present and future loads.
• Maximum weight that a single tile supports: The type of tile and its base material will
determine the maximum load it can support. Tiles can be made of materials such as concrete
and aluminum. The following two types of tiles are generally used for flooring:

• Solid tiles

• Perforated or grated tiles

Fig. 2.7 Solid and Perforated Floor Tiles

The perforated tiles provide a great deal of flexibility in controlling air-flow patterns between the plenum and the equipment. The solid tiles redirect air flow and help preserve pressure in the plenum or sub-floor. Tiles made of cast aluminium are not weakened by perforation or grating.

• Maximum point load that a tile can support: Four casters or rollers are used to support the
standalone equipment or racks in a data centre.

With the advancement in technology, devices have become smaller and sleeker without compromising on performance. Enterprise servers and high-capacity storage subsystems therefore pack more devices within them, putting more weight on a smaller footprint. Such equipment can strain the floor of the data centre.
e.g.: The weight of a fully configured IBM p630 server is 112 pounds. If you pack five such servers into a rack that itself weighs 100 pounds, then the point load on a tile (what each caster supports) would be:

Total weight = weight of servers + weight of empty rack (+ weight of any large floor-standing server, if present)

= (5 x 112) pounds + 100 pounds

= 660 pounds

Point load on a tile = total weight / 4 casters

= 660 / 4

= 165 pounds

The point load per caster would be 165 pounds. Now, if a tile is supporting two such casters, then its point load strength should be 330 pounds or higher.
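
The same arithmetic as a short Python sketch; the weights are the example's figures and the constant names are illustrative:

    SERVER_WEIGHT_LB = 112   # fully configured IBM p630
    EMPTY_RACK_LB = 100      # example figure for the empty rack
    SERVERS_PER_RACK = 5
    CASTERS = 4              # one caster per rack corner

    total_weight = SERVERS_PER_RACK * SERVER_WEIGHT_LB + EMPTY_RACK_LB  # 660 lb
    point_load = total_weight / CASTERS                                 # 165 lb per caster
    print(point_load, 2 * point_load)  # 165.0, and 330.0 if a tile bears two casters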

Generally, a data centre would have the same strength of tiles throughout. However, there may be cases where budget constraints require some of the flooring to be different. For example, a part of the data centre may hold the lighter racks of low-end servers; that part can have lower-rated flooring.

An important design consideration with low-rated flooring is that it should never be placed near the entrance or a ramp. If such flooring lies between the entrance and the high-rated floor, it will be damaged when heavy equipment is rolled over it. Since it is expensive to replace floor tiles in a live data centre, it is very important to consider the load capacity of the floor tiles before moving equipment over them.

Required Network Bandwidth

The network bandwidth that the Internet Service Provider (ISP) offers should be at
least equal to the data centre’s inbound and outbound bandwidth specifications. Also, robust
ISP links are mandatory for business-critical servers that need to be connected to the internet.
Since the servers need to be up 24x7, there should be a redundant internet connection with two or
more feeds coming from different ISPs.

Typically, the maximum bandwidth requirements of a data centre can be met by using network cables. The network cables can be single-mode or multi-mode fibre cables. A common type of cable used for networking is Category 5 (CAT5) copper cable. A CAT5 cable is a multi-pair, high-performance cable made up of twisted-pair conductors. These cables are used for Ethernet networks with a data rate of 10 or 100 Mbps. Figure 2.8 shows CAT5 cables.
Fig. 2.8 CAT5 Cables

Single-mode fibre links would be needed in the following cases:

• WAN connections

• Network attached storage (NAS) heads

• Network I/O-bound servers

The single-mode fibres can give a higher bandwidth as compared to the multi-mode
fibres. Single-mode fibre cables can provide a data rate of up to 10 Gbps. However, they are
more expensive than multi-mode fibres.

Calculate the total number of copper and fibre connections that would be needed for all
the equipment to fulfil the bandwidth requirements.
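
A tiny Python sketch of such a tally; the device classes and port counts below are made-up placeholders, not figures from the text:

    inventory = {
        # device class: (count, copper ports each, fibre ports each)
        "rack server": (40, 2, 0),
        "NAS head":    (4, 1, 2),
        "WAN router":  (2, 0, 2),
    }

    copper = sum(n * cu for n, cu, fi in inventory.values())
    fibre = sum(n * fi for n, cu, fi in inventory.values())
    print(f"copper: {copper}, fibre: {fibre}")  # copper: 84, fibre: 12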

Budget Constraints
Building a data centre is a cost-intensive initiative. More often than not, there are budget constraints to building a data centre. The design and capability of a data centre depend on the budget allocated to set up the facility.

There are two types of budgets that an organisation allots for designing and maintaining a
data centre:

• Build budget: The budget allotted for designing and constructing a data centre and taking it live.

• Run budget: The budget allotted for running data centre operations and maintenance.

While deciding the build budget, the following questions need to be answered:
• What is the initial amount that the organisation has decided to use for the
construction of a data centre?

• What is the cost of the equipment and infrastructure?

• Will the allotted amount be sufficient to design a data centre with the expected
infrastructure requirements? (capacity, uptime requirements, building codes)

• Will the amount be enough to purchase and install mandatory equipment such as
internet connectivity, UPS, HVAC and generators?

• In what stages or periods would the funds be released to pay for all the expenses incurred in setting up the data centre?

In an ideal situation, you would receive all the funds required to build the initial setup. In reality, however, you may face budget constraints that force you to make compromises. In such a case, you may have to decide to forgo certain features, such as redundant HVAC units or generators.

Another way to meet the given budget is to decide which areas can be added or upgraded later. Though it is difficult to upgrade a live data centre, a budget constraint may necessitate taking some risks.

Along with determining the build budget, it is important to decide on the run budget as well. The run budget will be used for the ongoing operations and upkeep of the data centre.

Some examples of the expenses that make use of the run budget include:

• Recurring expenses (hardware and software supplies)

• Cleaning expenses

• Utility costs such as phone and electricity

• Network connectivity charges

Selecting a Geographic Location


Selecting a suitable geographic location for the data centre requires analysing several factors and weighing the risks involved. Another important decision is whether to build the data centre within an existing building or to purchase land and construct a new building. Often, an existing structure is used to house the data centre.

Irrespective of the decision taken, whether constructing on land or using an existing building, some factors need to be considered for location selection:

• The safety of facility from natural hazards

• The safety of facility from man-made disasters


• Availability and cost of utilities

• Availability of resources

Safe from Natural Hazards


Some natural hazards such as tornadoes and earthquakes can badly damage buildings. Ideally, you should choose a location with a very low likelihood of natural hazards such as floods, fire, tornadoes or earthquakes. If it is difficult to identify such an area, then identify the natural hazards most likely to occur within the data centre's lifetime.

Generally, a data centre structure would stand for 20 or 30 years without undergoing
major renovation. Study the environment of the desired location to understand if there would be
any major natural disasters during the lifetime of the data centre structure. Such a study will
enable you to make design changes to mitigate the impact of the disaster.

e.g.: If the location is prone to earthquakes, then the building could be constructed to be
earthquake-resistant. In the event of an earthquake, such buildings would gently rock, but will
not warp or crack.

You should also consider the geography of the desired location. Areas that are close to a
river, in a valley, or at the bottom of a hill are more likely to be flooded as compared to areas on
a mountain or plateau.

Safe from Man-made Disasters


Nature is not the only source of damage to a data centre. There are many ‘invisible’ man-
made elements that can cause damage to the data centre structure and its equipment.

A data centre should not be located in close proximity to:

• Airports (as well as their flight paths)

• Electrical railways

• Telecommunications signal centres

These kinds of structures emit high levels of radio frequency interference (RFI) and electromagnetic interference (EMI). Such emissions can hamper computer network and hardware operations.

Also, avoid sites that are close to a quarry, mine or a heavy industrial plant. These
activities cause vibrations that may disrupt racks and servers within the data centre or damage
the utilities outside the data centre.

Industrial pollution is another factor that can damage the equipment in a data centre. Examples of polluting facilities include sewage treatment plants and chemical factories. Chemical waste and other toxic elements can get inside the data centre area, damaging the equipment and affecting the health of employees. If the data centre has to be built near such a hazard, then a filtration system should be installed to remove the contaminants.

Availability of Local Technical Talent


The desired location should have local talent available to meet the job demands of a data centre. While evaluating a site, consider the skill level and technical knowledge of the local human resources. Ideally, the location you select should attract both the local population and outstation candidates who would readily relocate.

There are two major types of resources employed in a data centre:

• Administrative resources: Such as housekeeping and security.

• Technical resources: Such as electrical engineers, mechanical engineers and software administrators.

Further, if an organisation decides to build a data centre in a foreign location, it may find it difficult and expensive to hire and retain technically talented people. This is especially true for organisations planning a global network of data centres. Organisations can overcome these difficulties by locating their data centres where the cost of property is low and the workforce is less expensive.

Abundant and Inexpensive Utilities


The data centre location should have a continuous supply of utilities such as power and water. A data centre consumes a lot of power, primarily to keep the machines and HVAC equipment running round the clock. Due to this high consumption, it is important to consider the recurring cost of power over the long term.

Globally, energy costs can vary widely across locations. Therefore, the following needs to
be checked:

• Local power utility rates

• Utility and energy incentives

Some countries offer tax and financial incentives to build data centres. Also consider the economic conditions: huge market demand for constrained power resources could mean a limited supply of power to the data centre.

If the data centre is placed in a remote rural location, the availability of power and water utilities will be heavily affected. The cost of high-voltage utility services and redundant electrical feeds would be quite high.

The energy efficiency of the data centre is directly affected by the site selected. Cooling takes up a large amount of energy and is directly related to the location's ambient temperature and humidity. A location with a moderate climate can mean significant savings on energy cost.

Retrofitting
Once the location for the data centre has been selected, the next phase is its construction. However, constructing a new building is not the only option. A data centre can be built within an existing structure, which is called retrofitting. There are some infrastructure decisions that you should take while planning to construct a data centre in an existing structure:

• The first two floors of the building may be used for the data centre.

• There should be adequate parking space for delivery trucks.

• Easy access for emergency vehicles (e.g.: ambulances and fire trucks) is also an important requirement for developing a data centre in an existing building.

Now let us look at the factors that help evaluate a building for a data centre.

• There should be ample space for constructing a raised floor.

• The height between the floor and ceiling must be adequate. It should be able to
vertically accommodate:

• Sub-floor plenum

• Ceiling plenum

• Approximately 8 feet height is required for racks

• Space above equipment for hot air to circulate

• The sub-floor plenum should be large enough for the electrical cables and should not obstruct the air flow.

• The building should have sufficient power and redundant electrical grids so that power
is available 24x7.

• The room or floor should be large enough to accommodate any future expansion of equipment. It is better if the facility allows walls to be removed and new ones constructed without any damage to the structural integrity of the building.

• The floor should be strong enough to bear the weight of the equipment.

• The exhaust from generators should be far from the air intake source.

• The water and gas pipes should be in good condition and free from leaks.
• The pipes should not run above the equipment as any leak could damage the
equipment.

Tier Standard
With organisations and service providers building their own data centres, it becomes important to measure the uptime that a data centre provides. Compromising on certain design factors while constructing a data centre can lead to equipment or infrastructure failure, resulting in downtime. Downtime can prove very costly for a business. To measure a data centre's effectiveness in terms of data availability, the Uptime Institute has created a data centre tier classification system. The kind of facility to be constructed is determined by the tier on which the organisation wants the data centre to be.

The Uptime Institute is an organisation focused on improving the efficiency, availability and performance of business-critical infrastructure, such as data centres, through collaboration and certifications. The institute is globally recognised for the tier standards and certifications it has developed for data centre design and operations. As per this tier system, a data centre can be classified into four tiers or categories:

• Tier I data centre

• Tier II data centre

• Tier III data centre

• Tier IV data centre

Data centres are classified in terms of:

• Potential site infrastructure performance, or uptime

• Power outage protection

• Redundancy provided
Fig. 2.9 Data Centre Tiers

Also, the cost and operational complexity of managing the data centre increase with each level. So, it is important to understand the features of each tier and determine which tier would be suitable for the data centre.

Tier I data centre:

This refers to a data centre that provides basic capacity. The data centre has a single path for distributing power and cooling and no redundant components. Since such data centres do not provide redundancy, they do not need to adhere to strict uptime requirements. A Tier I data centre should provide 99.671% uptime, with not more than 28.8 hours of downtime per year. Tier I data centres are typically used by small businesses.

The Tier I data centre infrastructure includes:

• A dedicated space for IT systems

• UPS for short duration power outages

• Dedicated cooling equipment

• Engine generator for long duration power outages

Tier II data centre:

A Tier II data centre provides all the facilities of a Tier I data centre. Additionally, it provides partial redundancy through redundant power and cooling components. The redundant components provide extra protection against IT process disruptions due to infrastructure equipment failure. A Tier II data centre should provide 99.749% uptime, with not more than 22 hours of downtime per year. Tier II data centres are typically used by mid-sized businesses.
The redundant capacity components include:

• Power equipment such as UPS and generators

• Cooling equipment such as chillers or pumps

Tier III data centre:

A Tier III data centre provides all the facilities that Tier I and Tier II data centres provide. Additionally, a Tier III data centre provides multiple distribution paths for power and cooling. This kind of redundancy enables the data centre to keep functioning during replacement or maintenance of equipment.

All IT equipment in the data centre is dual-powered and must be protected against 72 hours of power outage. The uptime requirement for such data centres is quite high, at 99.982% with not more than 1.6 hours of downtime per year. Tier III data centres are typically used by global organisations.

Tier IV data centre:

A Tier IV data centre should provide all the facilities that Tier I, Tier II and Tier III data centres provide. Additionally, fault tolerance is added to the site infrastructure topology. With fault tolerance, IT operations continue to function in the event of an equipment failure or a distribution path interruption. In these data centres, all IT equipment must be protected against 96 hours of power outage.

The uptime requirement for such data centres is the highest, at 99.995% with not more than 26.3 minutes of downtime per year. Tier IV data centres are typically used by large global organisations with highly sensitive and mission-critical data.
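
The downtime figures quoted for each tier follow directly from the uptime percentages. A minimal Python sketch of the conversion, using 8,760 hours per year:

    HOURS_PER_YEAR = 8760

    def annual_downtime_hours(availability_pct):
        # Allowed downtime is the unavailable fraction of a year
        return (1 - availability_pct / 100) * HOURS_PER_YEAR

    for tier, pct in [("I", 99.671), ("II", 99.749), ("III", 99.982), ("IV", 99.995)]:
        print(f"Tier {tier}: {annual_downtime_hours(pct):.1f} hours")
    # Tier I: 28.8, Tier II: 22.0, Tier III: 1.6, Tier IV: 0.4 (about 26 minutes)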
