M 2 - Data Centre Requirements
Due to advances in technology, data centres have improved in both usage and functionality. A few decades ago, a handful of servers crammed into a room was enough to provide IT services. Today, data centres are advanced facilities with rooms full of servers that run round the clock.
An enterprise data centre has to meet the needs of dynamic business demands. Further, a data centre works with different generations of products. If you look at the variety of technologies used across the software, network, storage and server silos, you will understand how complex and difficult such a set-up is to manage. A poor data centre design would demand a costly upgrade. Therefore, a data centre infrastructure should be designed to adapt to changes in technology.
A data centre design requires thorough planning related to the location, hardware and
building infrastructure. Data centre facilities demand precise industrial design and engineering so as to meet the needs for fire protection, power provisioning, stand-by power, cooling, physical security and layout.
Due to the nature of functions that a data centre provides, certain considerations must be factored in while designing and expanding a data centre. They are:
• Physical space requirements
• Power requirements
• Cooling (HVAC) requirements
• Load-bearing capacity of the floor
• Network bandwidth requirements
This module discusses each of these key design considerations in turn.
The physical capacity of a data centre is determined primarily by:
• The available space for storage devices, server machines, network devices, power panels, breakers and HVAC equipment
The number and types of equipment (servers, storage and network devices) placed in the data centre have the most impact on the size of the data centre required. The equipment can be placed in racks or directly on the floor, based on its size.
Small-sized equipment, such as small servers and storage devices, is kept within racks. As these devices are stacked vertically within the racks, a lot of floor space can be saved.
If the equipment is large, it can be placed directly on the floor. The EMC Symmetrix storage array (dimensions: 75 × 9 × 36 inches) and the IBM Enterprise Storage Server (dimensions: 75 × 55 × 36 inches) are good examples of this.
Racks need to be selected based on the number of devices that will be placed within them. Racks are available in varied sizes. Typically, the external dimensions of a full-height rack would be 84 × 22 × 30 inches (length × breadth × height), with the internal dimensions being 78 × 19 × 28 inches. Figure 2.2 shows typical data centre racks.
Fig. 2.2 Data Centre Racks
In a data centre, racks and large servers occupy the most space, sometimes between 50% and 60% of the total area. The remaining 40% to 50% of the space is used for:
• Perforated tiles (required so that the racks get the cold air from the sub-floor plenum)
• Open space to exhaust air from the racks to the HVAC plenum
While deciding the amount of space required to set up a data centre, one should also plan additional space for future expansion. If future expansion is not considered, it becomes difficult to accommodate more equipment in a live data centre. Expanding a live data centre requires a considerable amount of renovation to the wiring, HVAC and electricity points.
While calculating the overall area, consider how many additional servers may occupy the
existing space in the coming years. Though one cannot predict the exact increase in the number
of equipment in the future, it is important to make prudent decisions rather than go for expensive
remodelling later on.
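As a rough illustration of this kind of sizing exercise, the following sketch estimates the floor area needed for a planned number of racks plus headroom for future growth. The footprint, circulation and growth figures are illustrative assumptions, not fixed standards.

```python
# Rough floor-space estimate for a planned data centre.
# All footprints and ratios below are illustrative assumptions, not fixed standards.

RACK_FOOTPRINT_SQFT = (22 * 30) / 144   # 22 in x 30 in rack footprint, in square feet
CIRCULATION_FACTOR = 2.5                # assumed multiplier for aisles, perforated tiles,
                                        # HVAC clearances and exhaust paths
GROWTH_FACTOR = 1.5                     # assumed 50% headroom for future expansion

def estimate_floor_area(num_racks: int, floor_standing_sqft: float = 0.0) -> float:
    """Return an approximate floor area (sq ft) for racks plus floor-standing equipment."""
    equipment_area = num_racks * RACK_FOOTPRINT_SQFT + floor_standing_sqft
    return equipment_area * CIRCULATION_FACTOR * GROWTH_FACTOR

if __name__ == "__main__":
    # e.g. 40 racks plus 100 sq ft of large floor-standing storage arrays
    print(f"Estimated area: {estimate_floor_area(40, 100.0):.0f} sq ft")
```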
Typically, the devices in a data centre use rack-mounted or internal Alternating Current (AC) and Direct Current (DC) power supplies. The data centre receives power from the power grid in the form of AC power. This AC power is then distributed through the electrical devices that are part of the data centre infrastructure. However, most hardware devices and backup devices require DC power.
The key devices which are used in electrical power distribution within a data centre
include:
• Switchboard: To direct the electricity from multiple power supply sources to the devices that need the power.
• Switchgear: Consists of circuits and fuses used for protecting and isolating the
electrical equipment.
• Power Distribution Unit (PDU): Such as rack mounted power strips to distribute
electricity to the devices in a data centre. The power strips mounted on the rack are as
shown in the figure 2.3.
• Auxiliary conditioning equipment: Such as line filters and capacitor bank to filter out
the undesirable frequencies.
Fig. 2.3 PDU for a Rack
A data centre should always have power running without any interruption. Therefore, it is important to install Uninterruptible Power Supplies (UPSs) so that the devices are protected against power failures. The UPS takes over as soon as the mains electricity fails, by switching the load to a set of batteries.
• Switchable UPS: Power from batteries is used only when power fails.
Fig. 2.4 UPS
UPS models can also vary depending on their load-carrying capacity and the duration of power supply they can provide. Some UPSs can provide power for up to an hour, some even longer. So, UPSs are a good solution for power outages of short duration. However, if the data centre location experiences frequent long-duration power outages, then generators would need to be installed.
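As a hedged illustration of how a UPS rating might be checked against the connected load, the sketch below converts the total load in watts into volt-amperes using an assumed power factor and adds a simple safety margin; all figures are hypothetical examples.

```python
# Simple UPS sizing check (illustrative assumptions throughout).

ASSUMED_POWER_FACTOR = 0.8   # assumed factor for converting watts to volt-amperes
SAFETY_MARGIN = 1.25         # assumed 25% headroom so the UPS is not run at full load

def required_ups_va(total_load_watts: float) -> float:
    """Approximate the minimum UPS rating (in VA) for a given total load in watts."""
    return total_load_watts / ASSUMED_POWER_FACTOR * SAFETY_MARGIN

if __name__ == "__main__":
    load_watts = 8000.0   # e.g. the sum of all device power draws (hypothetical)
    print(f"Minimum UPS rating: {required_ups_va(load_watts):.0f} VA")
```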
Before you install the power supply devices, you should know how much power is required by the devices in the data centre. This information is needed to decide the power supply requirements, such as:
• Number of breakers
• Outlet types
Use the following formula to calculate the power (in watts) required by each equipment:
Power (P) = Voltage (V) × Current (I)
Where,
P is the power in watts, V is the voltage in volts and I is the current in amperes.
So, when 200 volts is applied to a circuit that draws five amperes of electric current, it
will dissipate 1000 watts of power. Voltage is a very important measure when considering the
power needs. Voltage is akin to water pressure: if the water pressure within a pipe is too high, the pipe may burst. In the same way, if the power supplied does not match what the equipment is rated for, the equipment will be damaged. Hence, the power that each piece of equipment requires must be considered while planning the electrical work.
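To make the arithmetic concrete, here is a minimal sketch that applies P = V × I to a list of devices and totals the result. The device names, voltages and currents are hypothetical examples, not measured values.

```python
# Power requirement per device using P = V x I, plus the total for the room.
# Device voltages and currents below are hypothetical examples.

devices = [
    {"name": "rack server", "volts": 200.0, "amps": 5.0},
    {"name": "storage array", "volts": 220.0, "amps": 8.0},
    {"name": "network switch", "volts": 110.0, "amps": 2.0},
]

total_watts = 0.0
for device in devices:
    watts = device["volts"] * device["amps"]   # P = V x I
    total_watts += watts
    print(f'{device["name"]}: {watts:.0f} W')

print(f"Total power requirement: {total_watts:.0f} W")
```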
In data centres, the Heating, Ventilation, and Air Conditioning (HVAC) system controls
the ambient environment (temperature, humidity, air flow, and air filtering). Therefore, HVAC
must be planned for and operated along with other data centre components. The selection of an
HVAC contractor is important while designing a data centre.
To keep the devices cool and maintain low humidity within the data centre, HVAC is
required. Like power, the HVAC system is very difficult to retrofit. Therefore, the HVAC
system must have enough cooling capacity to meet present and forecasted future needs.
Cooling requirements are measured in British Thermal Units (BTUs) per hour. The
capacity of the cooling equipment such as air conditioners is provided by the HVAC
manufacturer in terms of BTUs. This measurement is important, as you need to calculate the
consolidated BTUs per hour required for running all the equipment within the data centre.
e.g.: Consider that the sum of BTUs for all equipment is 500,000 BTUs. In this case, if
the cold air is delivered by the HVAC at 80% efficiency, then it should have a rating of 625,000
BTUs per hour.
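The following sketch works through the same sizing logic. The 3.412 watt-to-BTU/hr conversion is a standard figure, while the equipment wattage and the 80% efficiency used here are illustrative assumptions.

```python
# HVAC sizing: convert equipment heat load to BTUs per hour and adjust for efficiency.

WATTS_TO_BTU_PER_HOUR = 3.412   # standard conversion: 1 watt of load ~ 3.412 BTU/hr of heat

def required_hvac_btu(total_equipment_watts: float, hvac_efficiency: float) -> float:
    """Return the HVAC rating (BTU/hr) needed to remove the equipment heat load."""
    heat_load_btu = total_equipment_watts * WATTS_TO_BTU_PER_HOUR
    return heat_load_btu / hvac_efficiency

if __name__ == "__main__":
    # If the equipment heat load sums to 500,000 BTU/hr and the HVAC delivers cold air
    # at 80% efficiency, the unit needs a 625,000 BTU/hr rating, as in the example above.
    print(500_000 / 0.80)                      # -> 625000.0
    print(required_hvac_btu(146_500, 0.80))    # assumed ~146.5 kW load gives a similar figure
```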
The air-flow pattern within a data centre required for optimum cooling of the equipment
is maintained by the under-floor pressure. This pressure depends on the HVAC unit and the
pattern of how the solid and perforated tiles are placed on the raised floor of the data centre.
Most data centres use raised flooring or a drop-down ceiling. The space below the raised floor, or between the drop-down ceiling and the structural ceiling, is called a plenum. This space allows for smooth air circulation around the communication cabling and the HVAC equipment. Fig. 2.5
shows how the plenum allows the cool air to flow towards the equipment through the perforated
flooring and how the hot air escapes through the ceiling.
Required Weight
Data centres pack ever more equipment into the space they have. The total weight of the racks, cabinets and devices can therefore have a real impact on the floor structure, as it can exceed what most floors can support. Even though devices are getting sleeker, this only leads to more equipment being packed into the racks. Further, enterprise-level servers are getting heavier, which in turn stresses the flooring. Therefore, it is important to consider the floor loading capacity while designing a data centre.
To calculate the total weight of the equipment on the floor, you should calculate the
following:
• Weight of each empty rack and cabinet without any device in it. To get the total
weight of the racks and cabinets, add the weight of all the empty racks and cabinets
present.
• Approximate weight of each device that would go inside the rack. To get the total
weight of all the devices, add the weight of all the devices present.
• Approximate weight of the large servers that will be placed directly on the floor.
The total of the above listed components will give you a fair idea of the weight that the floor will have to support. Once you know the approximate weight, you need to check whether the existing floor would be able to sustain it. This is important to determine whether any component of the flooring, such as the tiles or the support grid, would need to be changed. There are three load impact points, shown in Fig. 2.6, that need to be considered while determining whether the floor will be able to support the weight. They are:
• Maximum weight that the data centre floor supports: These details help you decide whether the raised floor would be able to support the present and future loads.
• Maximum weight that a single tile supports: The type of tile and its base material determine the maximum load it can support. Tiles can be made of materials such as concrete and aluminum. The following two types of tiles are generally used for flooring:
• Solid tiles
• Perforated tiles
The perforated tiles provide a great deal of flexibility in controlling air-flow patterns
between the plenum and equipment. The solid tiles redirect air flow and help preserve pressure
in the plenum or sub floor. If the tiles are made of cast aluminum, they will not weaken by
perforation or grating.
• Maximum point load that a tile can support: Four casters or rollers are used to support the
standalone equipment or racks in a data centre.
With the advancement in technology, devices have become smaller and sleeker without
compromising on performance. Therefore, enterprise servers and high-capacity storage
subsystems pack more devices within them, leading to more weight on a smaller size and
footprint. Such equipment can strain the floor of the data centre.
e.g.: The weight of a fully configured IBM p630 server is 112 pounds. If you are packing five
such servers on a rack, then the point load on a tile (what each caster would support) would be:
Total weight = weight of the five servers (5 × 112 = 560 pounds) + weight of the rack (and of any large floor-standing equipment, if present)
             = 660 pounds
Point load per caster = 660 / 4 = 165 pounds
The point load on a tile would be 165 pounds. Now, if a tile is supporting two such
casters, then its point load strength should be 330 pounds or higher.
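A small sketch of the same point-load arithmetic is given below. The rack weight used here is simply whatever remains of the 660-pound total quoted in the example; all other figures are taken from the text.

```python
# Point load on a floor tile for a rack standing on four casters.
# Figures follow the IBM p630 example in the text; the rack weight is the remainder
# of the quoted 660-pound total and is an assumption of this sketch.

SERVER_WEIGHT_LB = 112      # fully configured IBM p630 server
SERVERS_PER_RACK = 5
RACK_WEIGHT_LB = 660 - SERVERS_PER_RACK * SERVER_WEIGHT_LB   # 100 lb, inferred
CASTERS_PER_RACK = 4

total_weight = SERVERS_PER_RACK * SERVER_WEIGHT_LB + RACK_WEIGHT_LB
point_load_per_caster = total_weight / CASTERS_PER_RACK
print(f"Point load per caster: {point_load_per_caster:.0f} lb")   # 165 lb

# A tile carrying two such casters must be rated for at least twice that load.
print(f"Minimum tile point-load rating (two casters): {2 * point_load_per_caster:.0f} lb")
```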
Generally, a data centre would have tiles of the same strength throughout. However, there may be cases where the budget is a constraint, requiring part of the flooring to be of a different rating.
For example, a part of the data centre may hold the lighter racks of low-end servers. That part of
the data centre can have low-rated flooring.
A very important design consideration with low-rated flooring is that it should never be placed near the entrance or ramp. If such a floor lies between the entrance and the high-rated floor, it will get damaged when heavy equipment is rolled over it. Since it is expensive to
replace floor tiles in a live data centre, it is very important to consider the load capacity of the
floor tiles before moving the equipment over them.
The network bandwidth that the Internet Service Provider (ISP) offers should be at
least equal to the data centre’s inbound and outbound bandwidth specifications. Also, robust
ISP links are mandatory for business-critical servers that need to be connected to the internet.
Since the servers need to be up 24x7, there should be a redundant internet connection with two or
more feeds coming from different ISPs.
• WAN connections
The single-mode fibres can give a higher bandwidth as compared to the multi-mode
fibres. Single-mode fibre cables can provide a data rate of up to 10 Gbps. However, they are
more expensive than multi-mode fibres.
Calculate the total number of copper and fibre connections that would be needed for all
the equipment to fulfil the bandwidth requirements.
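As a rough illustration of that tally, the sketch below sums copper and fibre ports per device type. The device counts and per-device port figures are hypothetical examples.

```python
# Tally of copper and fibre connections needed across all equipment.
# Device counts and per-device port counts are hypothetical examples.

equipment = [
    # (device type, quantity, copper ports each, fibre ports each)
    ("rack server",    40,  2, 0),
    ("storage array",   4,  0, 8),
    ("network switch",  6, 24, 4),
]

total_copper = sum(qty * copper for _, qty, copper, _ in equipment)
total_fibre = sum(qty * fibre for _, qty, _, fibre in equipment)

print(f"Copper connections required: {total_copper}")
print(f"Fibre connections required: {total_fibre}")
```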
Budget Constraints
Building a data centre is a cost-intensive initiative. More often than not, there are budget
constraints to building a data centre. The design and capability of a data centre depends on the
budget allocated to setup a facility.
There are two types of budgets that an organisation allots for designing and maintaining a
data centre:
• Build budget: It refers to the budget allotted for the design, construction, and taking a
data centre live.
• Run budget: It refers to the budget allotted for running the data centre operations and
its maintenance.
While deciding the build budget, the following questions need to be answered:
• What is the initial amount that the organisation has decided to use for the
construction of a data centre?
• Will the allotted amount be sufficient to design a data centre with the expected
infrastructure requirements? (capacity, uptime requirements, building codes)
• Will the amount be enough to purchase and install mandatory equipment such as
internet connectivity, UPS, HVAC and generators?
• In what stages or periods would the funds be released to pay for all the expenses incurred in setting up the data centre?
In an ideal situation, you would receive all the funds required to build the initial setup. However, in reality, you may face budget constraints forcing you to make some compromises. In such a case, you may have to forgo certain features, such as redundant HVAC units or generators.
Another way to stay within the given budget is to decide which areas can be added or upgraded later. Though it is difficult to upgrade a live data centre, a budget constraint may necessitate taking some risks.
Along with determining the build budget, it is important to decide on the run budget as well. The run budget will be used for the ongoing operations and upkeep of the data centre.
Some examples of expenses covered by the run budget include:
• Cleaning expenses
• Availability of resources
Generally, a data centre structure would stand for 20 or 30 years without undergoing major renovation. Study the environment of the desired location to understand whether any major natural disasters are likely during the lifetime of the data centre structure. Such a study will enable you to make design changes that mitigate the impact of such disasters.
e.g.: If the location is prone to earthquakes, then the building could be constructed to be
earthquake-resistant. In the event of an earthquake, such buildings would gently rock, but will
not warp or crack.
You should also consider the geography of the desired location. Areas that are close to a
river, in a valley, or at the bottom of a hill are more likely to be flooded as compared to areas on
a mountain or plateau.
• Electrical railways
Such structures emit a high level of radio frequency interference (RFI) and electromagnetic interference (EMI). Such emissions can hamper computer network and hardware operations.
Also, avoid sites that are close to a quarry, mine or a heavy industrial plant. These
activities cause vibrations that may disrupt racks and servers within the data centre or damage
the utilities outside the data centre.
Industrial pollution is another factor that can damage the equipment in a data centre.
Some examples of industrial pollution include sewage treatment plants and chemical
factories. The chemical waste and other toxic elements can get inside the data centre area and
damage the equipment as well as affect the health of the employees. If the data centre has to be built near such a hazard, then a filtration system should be installed to remove the contaminants.
Further, if an organisation decides to build a data centre in a foreign location, it may face difficulty in hiring and retaining technically talented people, who are also expensive. This is especially true for organisations that are planning a global network of data centres. Organisations can overcome these difficulties by locating their data centres in places where the cost of property is low and the workforce is less expensive.
Globally, energy costs can vary widely across locations. Therefore, the energy costs at the candidate location need to be checked. Some countries offer tax and financial incentives to build data centres. Also consider the economic conditions: huge market demand for constrained power resources could mean a limited supply of power to the data centre.
If the data centre is placed in a remote rural location, there will be a heavy impact on the availability of power and water utilities. The cost of high-voltage utility services and redundant feeds for electrical supply would be quite high.
The energy efficiency of the data centre is directly affected by the site selected. Cooling takes up a large amount of energy, and cooling needs are directly related to the location’s ambient temperature and humidity. A location with a moderate climate can mean significant savings on energy costs.
Retrofitting
Once the location for the data centre has been selected, the next phase is its construction. However, constructing a new building is not the only option. A data centre can also be built within an existing structure, which is called retrofitting. There are some infrastructure decisions that you should take while planning to construct a data centre in an existing structure:
• The first two floors of the building may be used for the data centre.
• Easy access for emergency vehicles (e.g. ambulances and fire trucks) is also an important requirement for developing a data centre in an existing building.
Now let us look at the factors that help evaluate a building for a data centre.
• The height between the floor and ceiling must be adequate. It should be able to
vertically accommodate:
• Sub-floor plenum
• Ceiling plenum
• The sub-floor plenum should have enough room for the electrical cables. Also, the cabling should not obstruct the air flow.
• The building should have sufficient power and redundant electrical grids so that power
is available 24x7.
• The floor should be strong enough to bear the weight of the equipment.
• The exhaust from generators should be far from the air intake source.
• The water and gas pipes should be in good condition and free from leaks.
• The pipes should not run above the equipment as any leak could damage the
equipment.
Tier Standard
With organisations and service providers building their own data centres, it becomes important to measure the uptime that a data centre provides. Compromising on certain design factors while constructing a data centre can lead to equipment or infrastructure failure, resulting in downtime. Downtime can prove to be very costly for a business. To measure the effectiveness of a data centre in terms of data availability, the Uptime Institute has created a data centre tier classification system. The kind of data centre facility to be constructed is determined by the tier on which the organisation wants the data centre to be.
• Redundancy provided
Fig. 2.9 Data Centre Tiers
Also, the cost and operational complexity of managing the data centre increase with each level. So, it is important to understand the features of each tier and determine which tier would be suitable for the data centre.
Tier I refers to a data centre that provides basic capacity. The data centre has a single path for distributing power and cooling and does not have any redundant components. Since such data centres do not provide redundancy, they do not need to adhere to strict uptime requirements. A Tier I data centre should provide 99.671% uptime, with not more than 28.8 hours of downtime per year. Tier I data centres are typically used by small businesses.
A Tier II data centre provides all the facilities of a Tier I data centre. Additionally, it should provide partial redundancy through redundant power and cooling components. The redundant components provide extra protection against IT process disruptions due to infrastructure equipment failure. A Tier II data centre should provide 99.749% uptime, with not more than 22 hours of downtime per year. Tier II data centres are typically used by mid-sized businesses.
The redundant capacity components include power and cooling equipment such as UPS modules, chillers and engine generators.
A Tier III data centre provides all the facilities that Tier I and Tier II data centres provide. Additionally, a Tier III data centre provides multiple distribution paths for power and cooling. This kind of redundancy enables the data centre to function without downtime during the replacement or maintenance of equipment.
All IT equipment in the data centre is dual-powered and is required to be protected against 72 hours of power outage. The uptime requirement for such data centres is quite high at 99.982%, with not more than 1.6 hours of downtime per year. Tier III data centres are typically used by global organisations.
A Tier IV data centre should provide all the facilities that Tier I, Tier II and Tier III data centres provide. Additionally, fault tolerance is added to the site infrastructure topology. With fault tolerance, IT operations continue to function in the event of an equipment failure or distribution path interruption. In these data centres, all IT equipment is required to be protected against 96 hours of power outage.
The uptime requirement for such data centres is the highest at 99.995%, with not more than about 26 minutes of downtime per year. Tier IV data centres are typically used by large global organisations with highly sensitive and mission-critical data.
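The annual downtime figures quoted for each tier follow directly from the uptime percentages. A minimal sketch of that conversion, assuming an 8,760-hour year, is shown below.

```python
# Convert a tier's uptime percentage into the maximum annual downtime it allows.

HOURS_PER_YEAR = 8760  # 365 days x 24 hours

def max_annual_downtime_hours(uptime_percent: float) -> float:
    """Return the maximum downtime (hours/year) implied by an uptime percentage."""
    return (1 - uptime_percent / 100) * HOURS_PER_YEAR

for tier, uptime in [("Tier I", 99.671), ("Tier II", 99.749),
                     ("Tier III", 99.982), ("Tier IV", 99.995)]:
    hours = max_annual_downtime_hours(uptime)
    print(f"{tier}: {uptime}% uptime -> {hours:.2f} hours (~{hours * 60:.0f} minutes) per year")
```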