EN01 Data Center Network Overview
• DCN: a network that interconnects the computing units within a DC and connects those computing units to the external egresses of the DC.
• SAN: consists of storage arrays and fibre channel (FC) switches and provides block
storage. A SAN that uses the fibre channel protocol (FCP) is called an FC SAN, and a
SAN that uses the IP protocol is called an IP SAN.
• Distributed storage: is different from centralized storage in terms of the deployment
mode. In distributed storage, data is stored on independent servers (storage nodes) in
a distributed manner. Distributed storage can also be deployed in cloud storage mode.
• In this example,
▫ Internet access zone: transmits traffic generated when users access the Internet,
for example, traffic of online banking services.
▫ Office network access zone: transmits traffic generated when office users access a
campus network.
▫ WAN access zone: connects to the WAN built by an enterprise, as well as the
remote DCN and remote campus network.
▪ With STP, Layer 2 traffic is not forwarded along the shortest path, and the root bridge becomes a bandwidth bottleneck, causing a long forwarding delay.
▫ Compared with STP, DCN 2.0 provides a shorter Layer 2 forwarding path and a lower forwarding delay.
▫ Compared with traditional STP, DCN 2.0 can build a larger Layer 2 network.
• Compared with DCN 2.0, DCN 3.0 greatly improves the scalability and flexibility of
service networks. DCN 3.0:
▫ Uses the scalable spine-leaf architecture where more than two spine nodes can
be used.
• Many enterprises plan PoDs (points of delivery) to normalize hardware specifications and facilitate the modular and standardized deployment of IT infrastructure.
• Open vSwitch (OVS) is an open-source project released under the Apache 2.0 license. It is a vSwitch that runs on virtualization platforms (such as KVM and Xen) and is the most mainstream vSwitch in the industry. The OVS is a distributed virtual switch.
• The OVS provides Layer 2 switching for dynamically changing endpoints, enabling access policy control, network isolation, and traffic monitoring on virtual networks.
• Characteristics of the spine-leaf architecture:
▫ Each lower-level node (leaf node) connects to all higher-level nodes (spine
nodes) to form a full-mesh topology.
▫ In the standard spine-leaf architecture, leaf nodes are similar to line processing units (LPUs) of modular switches and are responsible for transmitting external traffic. Spine nodes are similar to switch fabric units (SFUs) on modular switches and are responsible for traffic forwarding between leaf nodes.
▫ The number of spine nodes can be expanded to four or more. The maximum number of spine nodes depends on the number of uplink interfaces on leaf nodes (see the sketch after this list).
▫ A two-level spine-leaf architecture can be extended to a three-level one to
implement high-speed data exchange between more leaf nodes.
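• To make the full-mesh wiring concrete, the following is a minimal Python sketch (the node names, port counts, and the helper name build_spine_leaf are assumptions for illustration) that enumerates the leaf-spine links and shows why the leaf uplink count caps the number of spine nodes.

```python
from itertools import product

def build_spine_leaf(num_spines: int, num_leaves: int, leaf_uplinks: int):
    """Enumerate the links of a two-level spine-leaf fabric.

    Each leaf connects to every spine (full mesh), so a leaf needs one
    uplink interface per spine. The leaf uplink count therefore caps
    the number of spine nodes.
    """
    if num_spines > leaf_uplinks:
        raise ValueError(
            f"{num_spines} spines need {num_spines} uplinks per leaf, "
            f"but each leaf only has {leaf_uplinks}"
        )
    spines = [f"spine{i}" for i in range(1, num_spines + 1)]
    leaves = [f"leaf{i}" for i in range(1, num_leaves + 1)]
    # Full mesh: every leaf has exactly one link to every spine.
    return [(leaf, spine) for leaf, spine in product(leaves, spines)]

if __name__ == "__main__":
    links = build_spine_leaf(num_spines=4, num_leaves=8, leaf_uplinks=6)
    print(f"{len(links)} leaf-spine links")  # 8 leaves x 4 spines = 32 links
    for link in links[:4]:
        print(link)
```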
• Xen: a virtualization technology developed by Ian Pratt of Cambridge University and included in the Linux kernel (VMware and OpenVZ are also based on Linux). In the simplified Xen virtualization mode, device drivers are not required, and the virtual user systems are independent of one another, with some functions implemented by service domains.
• Computing virtualization, storage virtualization, and network virtualization are all
required to implement the complete functions of virtualization.
• vSwitches provide various functions. The OVS, for example, provides Layer 2 switching for dynamically changing endpoints, enabling access policy control, network isolation, and traffic monitoring on virtual networks.
• DVSs (distributed virtual switches) are also used in Huawei FusionCompute VRM and connect to the controller in the network virtualization solution.
• Neutron provides APIs that external software can invoke, and a database is used to store Neutron data.
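• As a rough illustration of external software invoking Neutron APIs, the sketch below uses the openstacksdk Python library to list and create networks; the cloud profile name "mycloud" and all resource names and addresses are assumptions for illustration.

```python
# Minimal sketch: invoking Neutron (OpenStack networking) APIs from
# external software via the openstacksdk library. The cloud profile
# name "mycloud" is an assumption and must exist in clouds.yaml.
import openstack

conn = openstack.connect(cloud="mycloud")

# List Neutron networks visible to the authenticated project.
for network in conn.network.networks():
    print(network.id, network.name)

# Create a tenant network and a subnet on it (values are examples only).
net = conn.network.create_network(name="demo-net")
conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="192.168.10.0/24",
    name="demo-subnet",
)
```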
• In ManageOne, a project is the smallest set of compute, storage, and network resources. For example, if a user (corresponding to a VDC) has multiple project teams or service systems, ManageOne can allocate an independent project to each project team or service system. Each such project corresponds to a tenant on iMaster NCE-Fabric. VPC is a concept shared by ManageOne and iMaster NCE-Fabric, while VDC is a concept specific to ManageOne. Neither of the two concepts applies to OpenStack.
• A region is an OpenStack system that has multiple availability zones (AZs).
• An AZ contains the same type of compute and storage resources and provides high availability. Different types of resources can be allocated to different AZs; for example, common servers and high-performance servers can each be placed in a separate AZ.
• Does an AZ consist of multiple DCs or only part of a DC? This depends on the application scenario of the AZ: an AZ in the public cloud scenario contains multiple DCs, while an AZ in the private cloud scenario (such as the telco cloud or financial cloud scenario) covers only part of a DC. In this course, the cloud-network integration solution mainly applies to the private cloud scenario.
• VPCs provide isolated VMs and network environments to meet the network isolation
requirement of different departments.
• VPCs use resources in VDCs. Each VPC belongs to one VDC, and each VDC can have
multiple VPCs.
• Each VPC can provide independent services, such as virtual firewalls, elastic IP
addresses, security groups, firewalls, and NAT gateways.
• The Huawei cloud-network integration solution uses ManageOne as the cloud
management platform and FusionSphere OpenStack as the cloud platform.
• In Linux, the kernel space and user space are separated. The Linux kernel and device drivers run in kernel space, whereas applications run in user space.
• Container image:
▫ Packages an application and its dependencies (including all files and directories
of the complete OS).
▫ Contains all dependencies required to run the application, so the image can run in an isolated sandbox without any modification or configuration.
• The API server watcher listens for Kubernetes objects, including Pods, Services, and Ingresses, and works with iMaster NCE-Fabric to configure the physical network (a short watch sketch follows the next bullet).
• The API server watcher integrates the IP address management (IPAM) function to
manage the IP resource pool of the container network.
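• As a rough illustration of the watch behavior described above, the following sketch uses the official Kubernetes Python client to stream Pod events; configure_fabric() is a hypothetical placeholder, not an actual iMaster NCE-Fabric interface.

```python
# Minimal sketch of watching Kubernetes Pod events, as an API server
# watcher might do. Uses the official "kubernetes" Python client.
# configure_fabric() is a hypothetical placeholder for the call that
# would push configuration toward iMaster NCE-Fabric.
from kubernetes import client, config, watch

def configure_fabric(event_type: str, pod) -> None:
    # Placeholder: real integration with the SDN controller goes here.
    print(f"{event_type}: {pod.metadata.namespace}/{pod.metadata.name} "
          f"ip={pod.status.pod_ip}")

def main() -> None:
    config.load_kube_config()          # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Stream ADDED/MODIFIED/DELETED events for Pods in all namespaces.
    for event in w.stream(v1.list_pod_for_all_namespaces):
        configure_fabric(event["type"], event["object"])

if __name__ == "__main__":
    main()
```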
• FusionStorage Manager (FSM): a management module of FusionStorage, providing
O&M functions including alarm management, service monitoring, operation logging,
and data configuration. In most cases, the FSM is deployed in active/standby mode.
▫ VXLAN uses a 24-bit segment ID known as the VXLAN network identifier (VNID), which supports up to 16 million (2^24 = 16,777,216) multi-tenant networks (see the packing sketch after these two bullets).
▫ Different terminals (servers/VMs) communicate with each other at Layer 2 across
the IP network.
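• To show where the 24-bit VNI sits in the packet and why it yields about 16 million segments, here is a small Python sketch that packs the 8-byte VXLAN header defined in RFC 7348; the VNI value used is arbitrary.

```python
# Minimal sketch: packing a VXLAN header (RFC 7348) to show the 24-bit
# VNI field. The VNI value 5010 is arbitrary.
import struct

VNI_BITS = 24
MAX_SEGMENTS = 2 ** VNI_BITS          # 16,777,216 (~16 million)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < MAX_SEGMENTS:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                       # "I" flag: VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

print(MAX_SEGMENTS)                    # 16777216
print(vxlan_header(5010).hex())        # 0800000000139200
```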
• The load balancing algorithm determines the server to which an external request is distributed (a short sketch follows this list). Typical load balancing algorithms include:
▫ Round robin: selects the first server in the list for the first request, and traverses
the list one by one in a cyclic way.
▫ Least connections: selects the server with the least number of active connections.
▫ Hash: selects the server to forward packets based on the hash value of source IP
addresses. This mode ensures that requests of a specific user are distributed to
the same server.
▫ Random weight: randomly distributes requests to nodes based on their weights.
For example, if the weight of Node1 is two times that of Node2 and 30 requests
are to be sent to the two nodes, Node1 will receive about 20 requests and Node2
about 10.
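• The following is a minimal Python sketch of the four algorithms above; the server addresses, connection counts, and node weights are made-up examples, and the hash variant shown uses MD5 over the source IP purely for illustration.

```python
# Illustrative implementations of the four algorithms above.
# Server names, weights, and connection counts are made-up examples.
import hashlib
import itertools
import random

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round robin: cycle through the server list one by one.
_rr = itertools.cycle(servers)
def round_robin() -> str:
    return next(_rr)

# Least connections: pick the server with the fewest active connections.
active_connections = {"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}
def least_connections() -> str:
    return min(active_connections, key=active_connections.get)

# Source-IP hash: the same client always lands on the same server.
def ip_hash(client_ip: str) -> str:
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Random weight: Node1 (weight 2) gets ~2x the requests of Node2 (weight 1),
# so 30 requests split into roughly 20 and 10.
weights = {"Node1": 2, "Node2": 1}
def random_weight() -> str:
    return random.choices(list(weights), weights=list(weights.values()))[0]

if __name__ == "__main__":
    print(round_robin(), least_connections(), ip_hash("203.0.113.7"))
    picks = [random_weight() for _ in range(30)]
    print(picks.count("Node1"), picks.count("Node2"))  # roughly 20 / 10
```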
• For enterprises with multiple DCs in different regions, GSLB ensures that users access
the nearest DCs based on their locations. GSLB has multiple solutions and this example
describes the DNS-based GSLB solution commonly used in DCs.
• In the GSLB solution, the domain name service provider points the name server (NS) record to the GSLB device, which provides the intelligent DNS resolution function. In this way, the GSLB device is responsible for resolving the domain name. If GSLB devices are deployed in multiple locations, all of them should be added as NS records to ensure high availability. GSLB devices can perform health checks on backend servers and on the public IP addresses of other DCs. Health check results are synchronized between GSLB devices in different IDCs using proprietary protocols. The GSLB devices then select the optimal addresses based on global load balancing policies and return them to the user in the DNS responses, as sketched below.
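• The following is a minimal sketch of the selection step only; the DC names, public IP addresses, and region-to-DC mapping are assumptions, and real GSLB devices apply far richer policies.

```python
# Minimal sketch of the GSLB selection step: given health-check results
# and a nearest-DC policy, return the public IP to put in the DNS answer.
# The DC names, addresses, and region mapping are made-up examples.
DC_PUBLIC_IP = {"dc-north": "198.51.100.10", "dc-south": "203.0.113.10"}
NEAREST_DC_BY_REGION = {"north": "dc-north", "south": "dc-south"}

def resolve(client_region: str, healthy: dict) -> str:
    preferred = NEAREST_DC_BY_REGION.get(client_region, "dc-north")
    if healthy.get(preferred, False):
        return DC_PUBLIC_IP[preferred]
    # Fall back to any healthy DC if the nearest one fails its health check.
    for dc, ok in healthy.items():
        if ok:
            return DC_PUBLIC_IP[dc]
    raise RuntimeError("no healthy data center available")

print(resolve("south", {"dc-north": True, "dc-south": True}))   # 203.0.113.10
print(resolve("south", {"dc-north": True, "dc-south": False}))  # 198.51.100.10
```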
• Server load balancing (SLB) in a DC intelligently forwards service data requests to anywhere from a few to hundreds or thousands of backend application servers based on the information contained in the requests. With load balancing algorithms, the optimal servers are selected based on predefined policies. This solves the availability and scalability problems of the application to some extent.
1. A DC comprises a set of complex facilities, including the equipment room, the computer system and the devices related to it (for example, communications and storage systems), as well as redundant devices such as data communication devices, environmental control devices, monitoring devices, and various security devices.
2. The main difference between a common DC and a cloud DC is that a cloud DC implements large-scale cloud computing deployment. Cloud DCs are low-carbon and energy-saving, and their compute, storage, and network resources are loosely coupled.
3. Key IT services of DCs include but are not limited to cloud computing, virtualization,
container, HPC, and AI.
4. Load balancing falls into three types:
▫ GSLB: based on the domain name resolution mechanism.
▫ HTTP load balancing: involves HTTP redirection and reverse proxy.
▫ Network layer load balancing: works at the network layer and implements service load balancing by modifying network layer information such as the IP address, MAC address, and Layer 4 port.
5. DCN 1.0: uses the VRRP + STP mechanism and achieves basic reliability. A VRRP group is configured to work in master/backup mode, and STP is configured to eliminate loops by blocking redundant links, which leads to low link utilization.
▫ DCN 2.0: uses stack/M-LAG and supports inter-device link binding for server
access and full load balancing among links. The network has a limited scale,
supporting only two aggregation devices. East-west Layer 3 traffic is transmitted
over a non-optimal path.
▫ DCN 3.0: uses a spine-leaf architecture and VXLAN EVPN. It supports full load
balancing among links. Four or more spine nodes are supported and east-west
Layer 3 traffic is transmitted over an optimal path.