Module 7 - Modern Architectures
For Master of Engineering in Internetworking Program

An Introduction to the Modern Network

Architectures

Ref: SDN and NFV Simplified by Jim Doherty and Various Internet Resources
Contents to be Covered
The following topics will be covered:

• Virtualization: Virtualization, Benefits of VMs, Hypervisors, Types of Hypervisors,
  and Managing Virtual Resources
• Virtualized Data Centers: Networking Gear in Virtualized Data Centers (Clouds),
  VM Connectivity, and Hypervisor Options (VMware vSphere, vMotion, and VXLAN)
• Network Functions Virtualization: Virtualizing a Network, and Core Networking
  Functions to be Virtualized
• SDN: Concept, Architecture, Abstractions, Components, Controllers, and
  Applications (in a separate ppt file ~ already covered)
• IoT Architecture: A comparative look at various industry-driven reference
  architectures
Part 1: Virtualization
• Traditionally, a new server was purchased and deployed for every new application
• Virtualization is based on the sharing of under-utilized resources
  across a range of IT applications
• Not a new concept: mainframes, VLANs, logical partitions of HDDs
• What is new: a VM is an isolated OS plus applications that behaves like a physical computer
Physical and Virtual Machine Architectures

[Figure: Physical vs. virtual machine architecture. In the physical architecture, the
application and operating system run directly on the x64 hardware. In the virtual
architecture, vSphere sits on the x64 hardware and shares the physical resources
among multiple VMs as pools of virtual resources.]
About Virtual Machines: Virtual Hardware
A VM is presented with a standard set of virtual hardware, with limits such as:
• Up to 128 vCPUs
• Up to 4 TB of RAM
• Up to 10 NICs
• Up to 4 SCSI adapters, with 15 devices per adapter
• 2 IDE controllers
• 1 USB controller with up to 20 devices
• 1 floppy controller with up to 2 devices
• Up to 3 parallel ports and up to 32 serial/COM ports
• VMCI and AHCI controllers, plus 3D-capable virtual video hardware
Physical and Virtual Networking

• Virtual Ethernet adapters and virtual switches are the key virtual networking
  components.
[Figure: In the physical architecture, the application and OS use the physical NIC
directly. In the virtual architecture, VMs connect through a virtual switch inside
vSphere, which in turn connects to the physical x64 hardware.]
Physical File Systems and VMFS
• VMware vSphere® VMFS enables a distributed
storage architecture, allowing multiple ESXi hosts to
read or write to the shared storage concurrently.
[Figure: Physical vs. virtual storage. A physical server formats its storage with a
local file system such as NTFS, ext4, or UFS. Multiple vSphere (ESXi) hosts share
storage concurrently through VMFS, NFS, or Virtual SAN.]
Encapsulation
• Virtual machine files are stored in directories on a VMFS or NFS datastore.
[Figure: Several VMs (VM 1, VM 2, VM 3), each encapsulated as a set of files in its
own directory on a shared VMFS or NFS datastore.]


Why Virtualization?
• Data centric world
• Data proliferation: needs backups and double backups
• Increased business application dependence
• Every new application needs its own server
• Server proliferation: waste of resources (power, money, space, and workforce);
  typical server utilization is only 5 to 15%
• Roughly 90% of the resource is wasted
VMs and the Underutilized Server Problem
• Nature of servers and complicated applications (the OS-to-application ratio)
• The OS is coupled to the server because server resources are accessed through
  drivers
• One physical machine can now host a number of different OSs
• The 1:1 OS-to-server ratio is broken
• Applications are decoupled from the server HW
• The OS and applications are still tied together
• Quick and easy to move from one server to another
Benefits of Server Virtualization (1 of 2)
• Reduced Cost: greater utilization of server resources
• Less Space: multiple virtual servers in one Physical Server. Reduction
in power consumption and cabling
• Availability and Accessibility:
  • Traditionally, mission-critical apps required clustered or fault-tolerant HW
    with complex failover protocols
  • If a server hosting many VMs fails, a heartbeat function lets the VMs restart
    on another server with little or no downtime or data loss
• Easier Access for Development: downloading a VM instead of ordering a physical
  server
Benefits of Server Virtualization (2 of 2)
• Quick App Spin-Up:
  • Previously, building, testing, developing, and publishing servers sometimes
    required physical resources to be recycled
  • Formatting a server and reconfiguring it for another OS, development
    environment, and libraries
  • Ghosting an image helped, but swapping test environments is time consuming
• Management: allocate, regulate, and fine-tune network BW, memory, and CPU
  percentage
• A VM’s view of the underlying HW is standardized
• For speedy VM recovery, VMs are copied to a SAN periodically
Role of Hypervisor
• A layer between the OS and the hardware resources
• Manages the virtual connection between the drivers and the server’s
  resources and provides an interface to the OSs on top
• Does so for many VMs/OSs running unrelated applications
• VMs and hypervisors enable server consolidation
• 10x to 15x reduction in physical servers, with increased flexibility
Hypervisors
• The VM is what you interact with up front; the hypervisor is the thing that is
  actually doing the virtualization
• An OS for operating systems
• The Hyp. allows HW devices to share their resources among VMs
• The Hyp. sits directly on top of the HW
  • No server OS is loaded
  • Interacts directly with the guest VMs
• A VM monitor
  • A monitoring function for each VM manages the access requests and
    information flow from VMs to computing resources and vice versa
  • HW access goes through a multiplexing process that is transparent to VMs
  • Multiplexing is an OS feature
Types of Hypervisors
• The Hyp. carries I/O commands and interrupts from the virtualized OS to the HW
• Meters both the usage of resources and network access
• Sets up traps to stop errors in one guest VM from disturbing other VMs, the
  Hyp., and the HW

• Bare-metal Hypervisor: runs directly on server HW without any OS. Offers
  better performance
• Hosted Hypervisor: runs on top of a native OS. Easier to install. Due to the
  middle layer, performance is lower than with a bare-metal Hyp. Greater
  flexibility and an easy transition to virtualization
Options of Hypervisor Vendors
• KVM (Kernel-based Virtual Machine, now owned by Red Hat): a hosted Hyp. that is
  part of the Linux kernel
• Xen (acquired by Citrix): an open-source bare-metal Hyp. Uses para-
  virtualization to let the guest OS know it is not running on dedicated HW
  (eliminating the need to trap privileged instructions).
  Requires modifications to the guest OS
• VMware ESXi (vSphere): VMware Fusion is a hosted Hyp. and vSphere (ESXi) is a
  bare-metal Hyp.
• Microsoft Hyper-V: has many versions. Hosted Hyp.
Managing Virtual Resources
• Common administrative tasks:
• Creating VMs, from scratch or from templates
• Starting, suspending, and migrating VMs
• Using snapshots to back up and restore VMs
• Importing and exporting VMs
• Converting VMs from foreign hypervisors
• Spinning up a virtual server is now a trivial task (see the sketch below). Key
  parameters are the physical server’s name, a VM name, the memory allocated to
  the VM, the number of cores and sockets assigned to the VM, a default display
  type and remote-access protocol, and the type/version of the OS
Workload in Virtualization
• Not only a program or an application
• It is a collection of computing “stuff” required to run an application
• It includes OS burden, compute cycles, memory, storage, and network
connectivity
• May also include network or storage components that are not even
permanently attached to a server
• Virtualization breaks the various resources into pools to create very
  efficient workloads, and leads to cloud networking that stitches these
  workloads together quickly and reliably
Managing Virtual Resources in Hypervisors
Once VMs are up and running, control and management of virtual resources
is done by a hypervisor:
• Controlling the multiplexing of VM requests among VMs
• Control of I/O interrupts
• Information flow between the guest OS and the HW
Need to consider the following facts:
• The correlation between virtual and real resources
• The hypervisor presents resources as if they were unlimited; both virtual and
  real resources should be managed to about 70% to 80% utilization
• Resource management becomes challenging when in-use VMs move
  from one physical host to another
INWK: A Use Case: Unified Cluster of Blade Servers
INWK: A Use Case: A Single Blade Server
INWK: A Use Case: VM Folders
INWK: A Use Case: A Single VM
INWK: A Use Case: Various VMs
INWK: A Use Case: Some Facts
From a unified cluster of 4 blade servers: 482 VMs
Cost of setup ~ $80,000 (license fees ignored)

                           Virtual Setup                 Physical Machines
Price                      $80,000 for approx. 500 VMs   $2,000 x 500 = $1,000,000
Power                      2,000 W                       500 W x 500 = 250,000 W
Air conditioning / space   4 machines                    500 machines
Management                 One person


Part 2: Virtualized Data Centers (a step towards Clouds)

• The previous contents covered the general concept of virtualization
  technology.
• Part 2 puts these concepts into a practical scenario.
• Demonstrates the benefits of virtualization by applying the techniques in a
  modern data center.
• Shows how virtualization can bring in business benefits.
• Advantages of a cost/benefit exercise
Benefits of Virtualizing a Data Center
1. Less Heat Buildup
2. Reduced Hardware Spending
3. Faster Deployment
4. Testing and Development
5. Faster Redeployment
6. Easier Backups
7. Disaster Recovery
8. Server Standardization
9. Separation of Services
10. Easier Migration to Cloud
Is Virtualized Data Center a Cloud Yet?
• Virtualizing a data center does not by itself make it a cloud environment
• Virtualization is only the foundation of cloud computing
• Virtualization is a software-based technology
• Cloud refers to all the services built upon a virtualized infrastructure
• Cloud is the delivery of shared computing resources:
  data, software, or infrastructure as a service
• Virtualized DCs and clouds are tightly integrated (difficult to tell them
  apart)
• The two confuse some people, but they deliver different types
  of services
Five Cloud Attributes by NIST

1. On-demand Self-service
2. Ubiquitous network access
3. Pay per use (metered use)
4. Rapid elasticity
5. Location-independent resource pooling
Types of Clouds: Software as Service (SaaS)
Users access an application remotely via a web browser, installed
software, or a “thin client” (desktop architectures like Citrix or VMware
Virtual Desktop Infrastructure (VDI)). Clients get only limited admin
control.
SaaS Advantages:
1. Connecting to an application from anywhere and from multiple devices
2. Not having to manage stored data, software upgrades, or the OS
3. Reduction in management resources and technical personnel
Downsides of SaaS:
1. Requires Internet connectivity
2. The loss of full control of your data, which can make switching from one
   application provider to another difficult
Types of Clouds: Infrastructure as a Service (IaaS)
• A client rents compute resources to have own software programs
including OS and applications
• A client also controls storage of their data
• Has some control over network functions and security (firewalls)
• Rest of control remains with the provider
• Basically, the client rents a server (or servers) on which it can install
  its own programs
• A common model with enterprise-class clients. Smaller organizations
  can also get enterprise-class computing, which normally requires a very high
  setup cost, at a fraction of that cost
Types of Clouds: Infrastructure as a Service (IaaS)
IaaS Advantages:
• A significant reduction (or complete elimination) of start-up and ongoing IT cost
• Use of multiple OSs without losing the flexibility to switch among them
IaaS Downsides:
• Security concerns when using a multitenant cloud for sensitive or regulated data
• Loss of physical control of data
• Lack of visibility into cloud network traffic
The provider maintains control of the hypervisor and HW;
the client gets control of the application, middleware, and OS.
Types of Clouds: Platform as a Service (PaaS)
• The client is provided a computing platform upon which they can develop
  and run their own applications
• The client lets the provider know what programming tools will be used
• The client maintains control of the computing/development environment
  (application and middleware)
• The provider delivers programming libraries and maintains control of the OS
  and HW
• The underlying infrastructure (servers, storage, networking)
  is managed by the provider
Types of Clouds: Platform as a Service (PaaS)
Advantages of PaaS:
• No IT team for the development environment
• Just developers to do coding
• Cost effective and easy to port programs or systems to new platforms

Downsides of PaaS:
• Physically having your code treasure outside of your four walls
• Security and secrecy
Cloud Deployment Models
• Private Clouds: provide the 5 attributes within an enterprise’s own data center
• Shared Multitenant Clouds: hosted services that cater to select business
  clients
• Public Clouds: enterprise-class computing made available to the masses. Less
  available (99.9%) than a traditional enterprise IT app (99.999%)
• Hybrid Clouds: rapidly changing; a catchall category. Allows seamless access to
  private or public clouds through a single interface
About Private Clouds
Private clouds are pools of resources dedicated to a single enterprise.

Advantages:
• Self-service provisioning
• Elasticity of resources
• Rapid and simplified provisioning
• Secured multitenancy
• Improved use of IT resources
• Better control of IT budgets
[Figure: An enterprise private cloud serving internal divisions (Gizmo Division,
Widget Division, Human Resources, Sales) over the Internet/intranet.]
About Public Clouds
In their infrastructure public cloud service, providers host many types of IT
operations for multiple businesses.

Advantages:
• Customer management of IT
• Rapid and flexible deployments
• Efficient and cost-effective deployments
• Secure IT assets
• Capital expenses converted to operating expenses
[Figure: A cloud service provider hosting IT operations for multiple businesses
(Company A, Company B, Company C).]
Hybrid Cloud
[Figure: A hybrid cloud sits between a private cloud and a public cloud.]
About Hybrid Clouds
IT assets are housed both internally on customer premises and in public
clouds.
[Figure: A hybrid cloud: application loads and management run in both private and
public vSphere clouds, connected by a management bridge.]

Use cases:
• Disaster recovery        • Traffic overflow
• Quick provisioning       • Offsite backup
• Data archiving           • Development / QA / test
Virtual Machine Connectivity in Data Centers
• We no longer have one OS and its applications, along with all their memory and
  storage, residing on a single server
• In virtualized data centers, there are many instances of
  OS/application combinations residing on a single host
• Applications and software may be on different servers at different
  times
• How do we connect all the pieces when they are in a near-constant state
  of flux?
Networking in Traditional Data Centers

• Manages only North-South traffic flows
• Uses the traditional hierarchical three-tiered model consisting of
  core, aggregation, and access levels
• This ensures that data is collected, aggregated, and transported to the
  destination at high speed
• Almost no server-to-server traffic
Networking in Traditional Data Centers
Client initiates an application call through a thin client or a browser
This user call points to the address of the server where the application is
Handling an incoming packet:
1- The message arrives via a core router and is fast routed through the
core
2- Goes through high-port density access switches via aggregation layer
3- The packet is examined for the destination address at the access layer
4- Fast switched across the wire to the required server where the app is
hosted (based on server’s unique address)
Networking in Traditional Data Centers
Role of addressing:
1- Layer-3 address is used to get the packet across the wide-area network to
right data center location.
2- Once there, the layer 2 address tells the switches in the datacenter to which
server the traffic should go.
Observations:
1- An app is associated with a single server
2- The server’s Layer 3 address is tied to a subnet and is therefore location
   dependent, whereas the user’s location is unconstrained
3- Based on the MAC address, an access switch will fire the packet down the
   correct VLAN to the destination
4- After the server’s NIC, the TCP port determines which application should
   receive the packet
A Comparison of three tier and Spine-Leaf Architectures Part 1
Ref:
https://lenovopress.com/lp0573.pdf
https://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white-paper-c11-737022.pdf
https://blog.westmonroepartners.com/a-beginners-guide-to-understanding-the-leaf-spine-network-topology/
https://blog.mellanox.com/2018/04/why-leaf-spine-networks-taking-off/
https://conferences.heanet.ie/2015/files/181/Sean%20Flack%20-%20Arista%20-%20L3%20leaf%20spine%20networks%20and%20VXLAN.pdf
https://www.cs.unc.edu/xcms/wpfiles/50th-symp/Moorthy.pdf
https://tools.ietf.org/html/rfc2992
https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2016/pdf/BRKDCT-2378.pdf
Three Tiered Architecture
1. Core – Layer 3 (L3) routers providing separation of the pods
2. Aggregation – Layer 2/3 (L2/3) switches which serve as boundaries between the pods.
3. Access – Layer 2 (L2) switches providing loop free pod designs utilizing either spanning
tree protocol or virtual link aggregation (VLAG)
This matches traditional server functions, which require east-west traffic
within a pod and only limited North-South traffic across pods through the core
network.

Difficulty arises from the increased latency for pod-to-pod (east-west)
traffic.
Advantages of Three Tiered Architecture
This architecture has distinct benefits including:

• Availability – if a pod is down due to equipment or some other failure, the
  fault is easily isolated to that branch (pod) without affecting other pods
• Security – processes and data can be isolated in pods, limiting exposure risks
• Performance – traffic is kept within the pod, so oversubscription is minimized
• Scalability – if a pod becomes oversubscribed, it is a simple task to add another
  pod and load-balance traffic across them, improving application performance
• Simplicity – network issues caused by leaf devices are easier to diagnose because
  the number of devices in each branch is limited
Disadvantages of the three-tier architecture
Software-defined infrastructures are driving changes in the traditional network
architecture by demanding expanded east-west traffic flows.

The major software-defined applications driving this are virtualization and
convergence.
Virtualization requires moving workloads across multiple devices which share
common backend information.
Convergence requires storage traffic between devices on the same network
segment.

These applications also drive increased bandwidth utilization, which is difficult
to expand across the multiple layers of network devices in the three-tier
architecture.

This forces the core network devices to carry very expensive high-speed links.
Networking in Virtual Data Centers

• An application (and its OS) is not associated with a physical server
• Multiple app/OS combinations run on a single physical server
• Servers and VMs are all interchangeable
• Different parts of an application (compute power, memory, storage)
  may all be in different places
• Consequently, network addressing is a huge concern with
  virtualization
Networking in Virtual Data Centers (1 of 4)
Virtual Data Center Design Addressing Challenges:
• How do we establish and maintain addressing when the VMs share
  common physical devices and yet are prone to moving from session to session?
• How should the physical layout and performance characteristics of the
  data center networking devices change to better accommodate the new
  requirements?
Networking in Virtual Data Centers (2 of 4)
Addressing with Virtual Machines:
• Assumptions: Each VM runs one app on one OS and one server may
have multiple VMs
• A VM may need to communicate with:
1- Another VM on the same server
2- Another VM on a different server. May be any virtual resource (such
as storage) within the same data center
3- Another host outside of the data center. May be client or another
data center
Networking in Virtual Data Centers (3 of 4)
• Each physical host needs a unique manufacturer-assigned MAC and a Layer 3 address
• VMs do not roll off a factory line; they are created out of thin air
• The VM software (vSphere or Citrix) provides a unique MAC for each VM
• These VM managers also provide one or more vNICs per VM
• The VM MAC is inherent to the VM and is independent of the physical server’s
  NIC (see the sketch below)
• Free migration over to another server without any restriction
• The OS is assigned an IP address, and the VM is identified by its MAC
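As a rough illustration (not the actual vSphere or Citrix algorithm), a VM manager
could mint per-VM MAC addresses under its own OUI prefix. The 00:50:56 prefix is
commonly associated with VMware-assigned MACs, but the allocation scheme and helper
below are hypothetical:

```python
# Minimal sketch: generate a VM MAC address under a vendor OUI prefix.
# The OUI and the allocation scheme here are illustrative, not VMware's method.
import random

OUI = (0x00, 0x50, 0x56)  # prefix commonly associated with VMware-assigned MACs

def new_vm_mac() -> str:
    """Return a MAC with the vendor OUI and a random 24-bit host part."""
    host = [random.randint(0x00, 0xFF) for _ in range(3)]
    return ":".join(f"{b:02x}" for b in (*OUI, *host))

if __name__ == "__main__":
    # Each VM gets its own MAC, independent of the physical server's NIC.
    print([new_vm_mac() for _ in range(3)])
```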
Networking in Virtual Data Centers (4 of 4)
• Virtualization extends the network as a Layer 2 overlay network
• IP addresses are no longer a concern, because there is only one broadcast (BC)
  domain
• VMs can use ARP to obtain the MAC addresses of other
  hosts across the shared BC domain, just as on a real LAN
• Virtualization builds a virtual LAN as an overlay on top of the physical
  Layer 2/3 network
• The virtual LAN overlay enables VMs to migrate across the data center as
  if they were on the same LAN segment
• They can do this even though they might eventually be in different
  subnets
Network Gear in Virtualized Data Centers
The hypervisor plays the role of a virtual switch in the hypervisor software
layer, and it interacts with the network controller to assign and manage Layer 2
switching identifiers.
The data center access switches were oblivious to these new networking
elements, and they continued to operate as before, using ARP to update their
switching tables.
The Evolution of Data Center Switching
The new generation of switches for:
• Increased volume of intra-data center (cloud) traffic
• Changes in the nature of the traffic
Two changes in traffic due to virtualization:
• The traffic density has increased due to several VMs running on a single
server
• The hypervisor is itself a kind of virtual switch
  1. The hypervisor works as the access layer of switches in the N-tiered model
  2. But hypervisors are largely absent from many monitoring and management
     platforms
  3. The first physical-layer switch sees much higher traffic and port density
Data Center Switching: Traffic Patterns
• Traffic patterns have shifted away from predominantly client (outside) to server
  (inside) and vice versa (North-South)
• Traffic has moved toward the server-to-server (East-West) direction
• Workflows from VM to VM have increased
• The prevalence of independent resource pools has increased
• A single server is NOT doing it all now
• An application draws resources from clusters of specialized resources
  within the cloud
• With fixed servers, server-to-server traffic follows a hairpinning pattern
• Changes in the IT workforce line-up
Cloud and Data Center Layout and Architecture

• The N-tiered model is still used in virtualized data centers and clouds, but
  due to traffic changes, this model is put together differently:
  • The first access switch is now the hypervisor (a virtual switch)
  • The hypervisor supports virtual LANs
  • The first physical switch is now a TOR or EOR switch that aggregates up to
    core switches
• This spine-and-leaf architecture scales well horizontally (East-West)
  without giving up vertical (North-South) performance
A Comparison of three tier and Spine-Leaf Architectures Part 2
Ref: see the references listed under Part 1 above.
Spine-Leaf Architecture
Larger east-west traffic drives the need for a network architecture with an
expanded, flat east-west domain like spine-leaf.

Solutions such as VMware NSX, OpenStack, and others that distribute workloads to
virtual machines running on many overlay networks on top of a traditional
underlay (physical) network require mobility across the flatter east-west domain.
In spine-leaf, every leaf switch is connected to every spine switch in a full-mesh
topology.
The spine-leaf mesh can be implemented using either Layer 2 or Layer 3 technologies,
depending on the capabilities available in the networking switches.
A Layer 3 spine-leaf requires that each link be routed; it is normally implemented
using Open Shortest Path First (OSPF) or Border Gateway Protocol (BGP) dynamic
routing with equal-cost multi-path routing (ECMP).
Layer 2 utilizes a loop-free Ethernet fabric technology such as Transparent
Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB).
Equal-cost multi-path (ECMP)
The ECMP feature allows OSPF to add routes with multiple next-hop addresses
and with equal costs to a given destination in the forwarding information base (FIB)
on the routing switch.
For example, show ip route shows multiple next-hop routers listed for the
same destination network (21.0.9.0/24).
Multiple ECMP next-hop routes cannot be a mixture of intra-area, inter-area, and
external routes. In this example, the multiple next-hop routes to network
21.0.9.0/24 are all intra-area.
Equal-cost multi-path (ECMP)
The distributed algorithm used to select ECMP next-hop routes works as follows:
Intra-area routes are preferred over inter-area routes.
Inter-area routes are preferred over external routes through a neighboring AS.
In addition, ECMP ensures that all traffic forwarded to a given host address follows the same path, which is selected
from the possible next-hop routes.
For example, in the OSPF ECMP multiple next-hop routing (inter-area) example, the ECMP inter-area routes to
destination network 10.10.10.0/24 consist of the following next-hop gateway addresses: 12.0.9.2, 13.0.9.3, and
14.0.9.4.

The forwarding software distributes traffic across the three possible next-hop
routes in such a way that all traffic for a specific host is sent to the same
next-hop route (see the sketch below).

[Figure: Example of OSPF ECMP multiple next-hop routing (inter-area).]
TRILL
Transparent Interconnection of Lots of Links (TRILL) applies network-layer routing
protocols to the link layer and -- with knowledge of the entire network -- uses that
information to support Layer 2 multipathing.
This enables multi-hop Fibre Channel over Ethernet (FCoE), reduces latency, and
improves overall network bandwidth utilization.

TRILL is meant to replace the Spanning Tree Protocol (STP).

STP, which was created to prevent bridge loops, allows only one path between network
switches or ports.
When a network segment goes down, an alternate path is chosen, and this process can
cause unacceptable delays in a data center network.

TRILL is designed to address this problem by applying the Intermediate System-to-
Intermediate System (IS-IS) Layer 3 routing protocol to Layer 2 devices. This
essentially allows Layer 2 devices to route Ethernet frames.
SPB
IEEE 802.1aq is the IEEE-sanctioned link-state Ethernet control plane for all VLANs
covered in IEEE 802.1Q.

Shortest Path Bridging VLAN ID (SPBV) provides a capability that is backwards
compatible with spanning tree technologies.

Shortest Path Bridging MAC (SPBM), previously known as Provider Backbone Bridge
(PBB) bridging, provides additional value by capitalizing on PBB capabilities. SPB
(the generic term for both) combines an Ethernet data path (either IEEE 802.1Q in the
case of SPBV, or Provider Backbone Bridges (IEEE 802.1ah) in the case of SPBM)
with an IS-IS link-state control protocol running between Shortest Path Bridges
(over network-to-network interface (NNI) links). The link-state protocol is used to
discover and advertise the network topology and to compute shortest path trees (SPTs)
from all bridges in the SPT region.
Spine-Leaf Architecture
The core network is also connected to the
spine with Layer 3 using a dynamic routing
protocol with ECMP.
Redundant connections to each spine
switch are not required but highly
recommended.
This minimizes the risk of overloading the
links on the spine-leaf fabric.
This architecture provides a connection
through the spine with a single hop
between leaves, minimizing latency and
bottlenecks.
The spine can be expanded or shrunk
depending on the data throughput
required.
Spine-Leaf: Another Look

Spine-leaf networks scale very simply by adding switches incrementally as
growth is needed.
Next Level Spine-Leaf
For larger networks, deploy these leaf/spine switches in “Pods”.
In each Pod, reserve half of the spine ports for connecting to a super-spine for
non-blocking connectivity between Pods.

A best practice in leaf/spine topologies with a lot of east-west traffic is to
keep everything non-blocking above the leaf switches.
Advantages of the Spine-leaf Architecture

The spine-leaf architecture is optimized for east-west traffic that is required by most
software defined solutions.

The advantages of this approach are:

All interconnections are used and there is no need for STP to block loops. All east-
west traffic is equidistant, so traffic flow has deterministic latency.

Switch configuration is fixed so that no network changes are required for a dynamic
server environment
Disadvantages of the Spine-leaf Architecture
$$$$ The number of cables and switches required to scale bandwidth grows quickly,
since each leaf must be connected to every spine device. This can lead to more
expensive spine switches with high port counts.

The number of hosts that can be supported can be limited, because spine port counts
restrict the number of leaf-switch connections.

Oversubscription of the spine-leaf connections can occur due to the limited number
of spine-facing uplinks available on the leaf switches (typically 4 to 6). Generally,
no more than a 5:1 oversubscription ratio between the leaf and spine is considered
acceptable, but this is highly dependent on the amount of traffic in your environment
(see the worked example below).

Oversubscription of the links out of the spine-leaf domain to the core should also be
considered. Since this architecture is optimized for east-west traffic as opposed to north-
south, oversubscriptions of 100:1 may be considered acceptable.
Cloud and Data Center Layout and Architecture:
Increasing the Rate of Change
• The rate of change is increasing due to:
  - Increasing processing power on the servers
  - An increasing number of very fast multicore processors (Moore’s Law) ~ more VMs
  - The need for faster switches with greater port densities located closer to servers
  - I/O functions within the virtual environment becoming the chokepoint on
    performance rather than processing speed
  - A move toward homogenization of switching speeds in the DC, approaching 100 Gbps
  - VM-aware switches: resulting in flat, virtualization-aware, blazing-fast DCs and
    clouds
Drawbacks of Hypervisor vSwitch
• Has no control plane to speak of and is unable to update the physical switch
  to support VM mobility
• To facilitate this, many VLANs must be configured on the server-facing ports
  of the physical switch
This causes:
• Unnecessary and indiscriminate flooding of broadcasts, multicasts, and unknown
  unicasts
• Increased uplink utilization and CPU cycles
• Dropped packets as a result
• If DCs have many small BC domains, VLAN configuration becomes an
  issue
Virtualized Aware Network Switches
• VM-aware switch can learn the VM network topology via a discovery
protocol to:
interact with vSwitches and build a map of virtualized network
• VM-aware switches provide the visibility to vSwitches that are hidden
from view on network monitoring tools
This allows network administrators:
• To measure and troubleshoot network traffic per VM
• To configure the network parameters of VMs
• To track VMs as they migrate within the network
• To reduce complexity by requiring no additional server software or
  changes to hypervisors or VMs
Virtualization: Lower level Details
VMware Product Design Review:
• Early 2000s: VMware Workstation and GSX freeware (VMware Player)
• Since 2001, VMware has aimed at the server market with the GSX Server (type 2) and
  ESX Server (type 1) hypervisor products
• VMware hypervisors provide a virtualized set of hardware for a video adapter, a
  network adapter, and a disk
• Pass-through drivers support guest USB, serial, and parallel devices
• VMware VMs are highly portable and run on any HW
• Ability to:  - Pause an OS running on a VM guest
               - Move it to another physical location
               - Resume at the same point where operation was paused before moving
vSphere
• vSphere (ESXi) is a type-1 bare-metal product, whereas VMware Server is type-2
• vCenter Server is part of the vSphere family and provides a single point of
  control and management for DCs
• The other components of vSphere:
  --- Infrastructure Services: provide abstraction, aggregation, and
      allocation of HW resources through the vCompute,
      vStorage, and vNetwork services
  --- Application Services: high-availability and fault-tolerance services that
      provide availability, security, and scalability to apps
  --- Clients: IT admin users access vSphere through the vSphere Client or via
      web access through a browser
Components of vSphere
• ESX and ESXi: ESX has a built-in service console; ESXi comes in an installable
  or an embedded version
• Virtual Machine File System (VMFS): a high-performance cluster file system
• Virtual SMP: allows a single VM to use multiple CPUs simultaneously
• vMotion: enables live migration of a running VM
• Distributed Resource Scheduler: allocates and balances computing capacity
  across the HW resources available to VMs
• Consolidated Backup: agent-free backup of VMs
• Fault Tolerance: creates a secondary copy of a VM; every action on the primary is
  also applied to the secondary
• vNetwork Distributed Switch: a distributed virtual switch (DVS) that spans
  ESX/ESXi hosts to reduce network maintenance and increase capacity
• Pluggable Storage Architecture: a storage plug-in framework that provides
  multipath load balancing to enhance storage performance
vMotion
• Migrates a running VM from one physical server to another with
  zero downtime, continuous service availability, and complete
  transaction integrity
• Small caveat: it can only migrate VMs within the same data center
• Storage vMotion supports moving the virtual disks or configuration files of a
  powered-on VM to a new datastore
• A frozen or suspended VM can be moved across DCs, but an active
  VM can only be moved within a DC
• vMotion and Storage vMotion (a storage admin function) are two
  different functions
Distributed Power Management (DPM)
Distributed Resource Scheduler (DRS)
VXLAN
• Similar to VLANs, but with extensions to address common VLAN failings
• The VLAN ID field is only 12 bits, allowing 4096 unique IDs
• IP/MPLS core networks hosted Layer 3 VPN services and used other techniques to
  alleviate the capacity problem
• VXLAN: 24 address bits, giving 16 million VXLAN IDs (enough for a large DC)
• VLAN uses STP for loop prevention, which undesirably shuts down many ports
• VXLAN tunnels L2 (Ethernet) frames across an L3 IP network by encapsulating MAC
  in UDP to tunnel through IP
• VXLAN attaches a VXLAN header to the L2 frame and then encapsulates it within a
  UDP/IP packet (see the sketch below)
• This allows massive L2 DC traffic to ride over an L3 network that is easier to
  manage, with efficient data transfer
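To make the encapsulation concrete, the sketch below hand-packs the 8-byte VXLAN
header defined in RFC 7348 (a flags byte plus a 24-bit VNI) and prepends it to an
inner Ethernet frame; a real VTEP would then wrap the result in a UDP datagram to
port 4789 plus outer IP/Ethernet headers. The inner frame bytes and the remote VTEP
address are placeholders:

```python
# Minimal sketch of VXLAN encapsulation (RFC 7348 header layout).
# The inner frame bytes are placeholders; a real VTEP would add outer
# Ethernet/IP headers and send the result as UDP to port 4789.
import socket
import struct

VXLAN_PORT = 4789
I_FLAG = 0x08  # "VNI present" flag

def vxlan_encap(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits (16 million IDs)")
    header = struct.pack("!B3s3sB", I_FLAG, b"\x00\x00\x00", vni.to_bytes(3, "big"), 0)
    return header + inner_ethernet_frame

if __name__ == "__main__":
    payload = vxlan_encap(b"\xaa" * 64, vni=5001)       # dummy 64-byte inner frame
    # Hand the VXLAN payload to UDP; the kernel supplies the outer IP header.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, ("192.0.2.1", VXLAN_PORT))      # 192.0.2.1 = example remote VTEP
```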
VXLAN Tunnel Endpoints
• VXLAN uses VXLAN Tunnel Endpoint (VTEP) devices to:
  --- map tenants and end devices to VXLAN segments
  --- perform encapsulation/decapsulation
• Each VTEP has two interfaces:
  --- a switch port on the local LAN
  --- an interface on the transport IP network
• The IP interface has a unique IP address that identifies the VTEP device on the
  transport IP network (called the infrastructure VLAN)
• A VTEP device discovers the remote VTEPs for its VXLAN segments and learns
  the remote MAC address to VTEP mappings
• VTEPs on physical switches bridge together virtual and physical network
  segments
Ref: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-729383.html
Part 3: Network Functions Virtualization
• Server Virtualization through VMs
• Network Virtualization
--- VLAN --- MPLS --- VXLAN
• Network Functions Virtualization
  --- Refers to the virtualization of L4 to L7 services such as load balancing
      and firewalling
  --- NFV came about because of inefficiencies created by virtualization
  --- Such as the routing of traffic to and from network appliances at the edge of
      the DC
  --- With VMs spinning up and being moved all over, the highly varied traffic
      flows become a problem for fixed appliances
  --- NFV creates a virtual instance of a function such as a firewall, which can be
      spun up and placed where it is needed, just like a VM
Virtualizing Appliances
• Application communication is controlled at L4 through L7 of the OSI model
• These layers also include information vital to many network functions
  such as security, load balancing, and optimization
• Examples of L4 through L7 services include:
  --- data loss prevention (DLP) systems
  --- firewalls and intrusion detection systems (IDS)
  --- load balancers
  --- security event and information management (SEIM)
  --- Secure Sockets Layer (SSL) accelerators
  --- VPN concentrators
• The challenge is to keep up with the ever-increasing speed and flexibility of
  DCs/clouds
• And to keep up with wire speed in a constantly changing environment
  without dropping packets
Ref: https://www.theregister.co.uk/2013/12/09/feature_network_function_virtualisation/
Some L4 to L7 DC tools and Services
• Firewall:
--- Allows authorized traffic and blocks unauthorized traffic.
  --- Can be located at the edge of the DC or close to the server
  --- Application-aware firewalls are in fashion, giving net admins better
      visibility and control over security
• SSL Offload:
  --- SSL is a web-based encryption tool
  --- Provides security of web-based data streams without user intervention
  --- An SSL offload service provides a termination point for the encryption
Virtualizing Appliances
To keep up with the speed of networks:
  --- most appliances lived at the edge of the network
  --- this allows fewer (bigger and faster) appliances to be located close
      to the WAN links, where they are more manageable
  --- instead of leaving expensive redundant resources idle until there is an
      emergency, load balancers ensure maximum efficiency and traffic performance
Some L4 to L7 DC tools and Services: Load Balancer
• Directs and spreads incoming traffic across the DC/cloud (see the sketch below)
• Controls the flow of traffic as apps scale up and down
• Unlike the old days, when scalability meant growing over time,
  an architectural consideration now is an automatic function enacting
  real-time changes
• The classic design of L4-L7 services was for a static environment, but now VMs
  can be spun up or transferred both from session to session and in a live session
• For virtualization, L4-L7 services require stateful information about the
  apps
• This info needs to be shared across the network to ensure the provision
  of services when the state and location of apps change
• The level of virtualization skyrocketed, leaving everything else to catch up:
  cabling, switching, routing, addressing, and L4-L7 services
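As a rough illustration of the "stateful" point above (not any particular product's
algorithm; the backend and client addresses are made up), a virtual load balancer
might track live connection counts per backend and remember which backend each
client was mapped to, so the mapping survives as app instances scale up and down:

```python
# Minimal sketch of a stateful least-connections load balancer with sticky clients.
# Backend addresses are illustrative only.
class LoadBalancer:
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}      # live connection count per backend
        self.sticky = {}                            # client -> backend mapping

    def pick(self, client_ip: str) -> str:
        """Reuse the client's previous backend if still present, else pick least loaded."""
        backend = self.sticky.get(client_ip)
        if backend not in self.active:
            backend = min(self.active, key=self.active.get)
            self.sticky[client_ip] = backend
        self.active[backend] += 1
        return backend

    def release(self, backend: str) -> None:
        """Call when a connection to the backend closes."""
        if backend in self.active and self.active[backend] > 0:
            self.active[backend] -= 1

    def scale(self, backends) -> None:
        """Replace the backend pool when app VMs are spun up or torn down."""
        self.active = {b: self.active.get(b, 0) for b in backends}

if __name__ == "__main__":
    lb = LoadBalancer(["10.0.0.11:80", "10.0.0.12:80"])
    print(lb.pick("198.51.100.7"))       # first pick: least-loaded backend
    print(lb.pick("198.51.100.7"))       # sticky: same backend for the same client
    lb.scale(["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"])  # app scaled out
    print(lb.pick("198.51.100.8"))       # new client may land on the new backend
```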
Virtualization L4 through L7
• Generic open-source services on Linux, such as HAProxy (a free load-balancing
  tool for web-based apps)
• Commercial services from vendors such as Cisco and Riverbed
• By using a Nexus switch as the TOR/EOR switch, Cisco delivers Cloud
  Network Services (CNS) very close to the traffic source
• Virtual appliances running as VMs can use the same tools that VM-based
  server applications do
• L4-L7 services become another set of virtual appliances that can scale
  up and down with the apps
• They can be turned off and on over generic HW (for example, a load
  balancer only when needed) to reduce HW cost and simplify operations
