Module 7 - Modern Architectures
Ref: SDN and NFV Simplified by Jim Doherty, and various Internet resources
Contents to be Covered
This module covers virtualization and vSphere, cloud service and deployment models, data center network architectures (three-tier and spine-leaf), VXLAN, and network functions virtualization (NFV).
[Figure: vSphere architecture — applications run on virtual resources presented by vSphere, which abstracts the underlying x64 physical resources.]
About Virtual Machine Virtual Hardware
• Up to 128 vCPUs
• Up to 4 TB of RAM
• 15 devices per adapter
Physical and Virtual Networking
[Figure: applications and operating systems inside VMs connect to a virtual switch in vSphere, which bridges to the physical network of the x64 host.]
Physical File Systems and VMFS
• VMware vSphere® VMFS enables a distributed
storage architecture, allowing multiple ESXi hosts to
read or write to the shared storage concurrently.
Physical Architecture vs. Virtual Architecture
[Figure: one application per physical server versus multiple VMs (VM 1, VM 2, VM 3) consolidated onto a single virtualized host.]
Example economics: one virtualization host priced at $80,000 supported approximately 500 VMs; the equivalent 500 physical servers at $2,000 each would cost $2,000 × 500 = $1,000,000.
Cloud Computing: Five Key Attributes
1. On-demand self-service
2. Ubiquitous network access
3. Pay per use (metered use)
4. Rapid elasticity
5. Location-independent resource pooling
Types of Clouds: Software as a Service (SaaS)
Users access an application remotely via a web browser, installed software, or a “thin client” (desktop architectures such as Citrix or VMware virtual desktop infrastructure (VDI)). Clients get only limited administrative control.
SaaS Advantages:
1. Connecting to an application from anywhere and from multiple devices
2. Not having to manage stored data, software upgrades, or OS
3. Reduction in management resources and technical personnel
Downsides of SaaS:
1. Requires Internet connectivity
2. The loss of 100% control of your data, which can make switching from one application provider to another difficult
Types of Clouds: Infrastructure as a Service (IaaS)
• A client rents compute resources on which to run its own software, including the OS and applications
• A client also controls storage of their data
• Has some control over network functions and security (firewalls)
• Rest of control remains with the provider
• Basically, the client rents a server (or servers) on which it can install
its own programs
• A common model with enterprise-class clients. Smaller organizations can also get enterprise-class computing, which normally requires a very high setup cost, at a fraction of that cost
Types of Clouds: Infrastructure as a Service (IaaS)
IaaS Advantages:
• A significant reduction (or complete elimination) of start-up and ongoing IT costs
• Use of multiple OSs, with the flexibility to switch among them
IaaS Downsides:
• Security concerns when using a multitenant cloud for sensitive or regulated data
• Loss of physical control of data
• Lack of visibility into cloud network traffic
The provider maintains control of the hypervisor and HW; the client gets control of the application, middleware, and OS.
Types of Clouds: Platform as a Service (PaaS)
• Client is provided a computing platform upon which they can develop
and run their own applications
• The client lets the provider know what programming tools will be used
• Client maintains control of the computing/development environment
(application and middleware)
• Provider delivers programming libraries and maintains control of OS
and HW.
• The underlying network infrastructure (servers, storage, networking)
is managed by the provider
Types of Clouds: Platform as a Service (PaaS)
Advantages of PaaS:
• No IT team is needed for the development environment
• Just developers to do the coding
• Cost effective and easy to port programs or systems to new platforms
Downsides of PaaS:
• Physically having your code, a valuable asset, outside of your own four walls
• Security and secrecy
Cloud Deployment Models
• Private Clouds: provide the five attributes above within an enterprise's own data center
Advantages:
• Self-service provisioning
• Elasticity of resources
• Rapid and simplified provisioning
• Secured multitenancy
• Improved use of IT resources
• Better control of IT budgets
Use Cases:
• Private Clouds: disaster recovery, quick provisioning, data archiving
• Public Clouds: traffic overflow, offsite backup, development/QA/test
Virtual Machine Connectivity in Data Centers
• No longer is there one OS and application, with all its memory and storage, residing on a single server
• In virtualized data centers, there are many instances of OS/applications residing on a single host
• Applications and software may be on different servers at different times
• How do you connect all the pieces when they are in a near-constant state of flux?
Networking in Traditional Data Centers
Refs:
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white-paper-c11-737022.pdf
https://blog.westmonroepartners.com/a-beginners-guide-to-understanding-the-leaf-spine-network-topology/
https://blog.mellanox.com/2018/04/why-leaf-spine-networks-taking-off/
https://conferences.heanet.ie/2015/files/181/Sean%20Flack%20-%20Arista%20-%20L3%20leaf%20spine%20networks%20and%20VXLAN.pdf
https://www.cs.unc.edu/xcms/wpfiles/50th-symp/Moorthy.pdf
https://tools.ietf.org/html/rfc2992
https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2016/pdf/BRKDCT-2378.pdf
Three-Tiered Architecture
1. Core – Layer 3 (L3) routers providing separation of the pods
2. Aggregation – Layer 2/3 (L2/3) switches which serve as boundaries between the pods
3. Access – Layer 2 (L2) switches providing loop-free pod designs utilizing either Spanning Tree Protocol (STP) or virtual link aggregation (VLAG)
This matches traditional server functions, which require east-west traffic within the pod and only limited north-south traffic across pods through the core network.
The major software-defined applications driving this are virtualization and convergence.
Virtualization requires moving workloads across multiple devices which share common backend information.
Convergence requires storage traffic between devices on the same network segment.
This leads to core network devices with very expensive high-speed links.
Networking in Virtual Data Centers
• The N-tiered model is still used in virtualized data centers and clouds, but due to traffic changes, it is put together differently:
• The first access switch is now the hypervisor (a virtualized switch)
• The hypervisor supports virtual LANs
• The first physical switch is now a top-of-rack (TOR) or end-of-row (EOR) switch that aggregates up to core switches
• This spine-and-leaf architecture scales well horizontally (east-west) without giving up vertical (north-south) performance
Part 2: A Comparison of Three-Tier and Spine-Leaf Architectures
Ref: https://lenovopress.com/lp0573.pdf, plus the references listed under "Networking in Traditional Data Centers" above
Spine-Leaf Architecture
Larger east-west traffic drives the need for a network architecture with an
expanded flat east-west domain like spine-leaf.
Solutions like VMware NSX, OpenStack, and others that distribute workloads to virtual machines across many overlay networks, which run on top of a traditional underlay (physical) network, require mobility across this flatter east-west domain.
In spine-leaf, every leaf switch is connected to each of the spine switches in a full-mesh topology.
The spine-leaf mesh can be implemented using either Layer 2 or Layer 3 technologies, depending on the capabilities available in the networking switches.
Layer 3 spine-leaf fabrics require that each link is routed; this is normally implemented using Open Shortest Path First (OSPF) or Border Gateway Protocol (BGP) dynamic routing with equal-cost multi-path routing (ECMP).
Layer 2 utilizes a loop-free Ethernet fabric technology such as Transparent
Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB).
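A back-of-the-envelope sketch of what the full mesh implies for cabling and capacity; the switch counts and 40 Gbps link speed below are illustrative assumptions, not figures from the source:

```python
# Spine-leaf sizing sketch: in a full mesh, every leaf has one uplink to
# every spine, so the fabric needs leaves * spines links.
def fabric_links(leaves: int, spines: int) -> int:
    return leaves * spines

def fabric_capacity_gbps(leaves: int, spines: int, link_gbps: int = 40) -> int:
    # total leaf-to-spine capacity across the mesh
    return fabric_links(leaves, spines) * link_gbps

print(fabric_links(16, 4))            # 64 links to cable and manage
print(fabric_capacity_gbps(16, 4))    # 2560 Gbps of fabric capacity
```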
Equal-cost multi-path (ECMP)
The ECMP feature allows OSPF to add routes with multiple next-hop addresses
and with equal costs to a given destination in the forwarding information base (FIB)
on the routing switch.
For example, show ip route shows multiple next-hop routers listed for the
same destination network (21.0.9.0/24)
Multiple ECMP next-hop routes cannot be a mixture of intra-area, inter-area, and external routes.
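The sketch below illustrates the hash-threshold method of next-hop selection analyzed in RFC 2992 (cited in the references above). Hashing the flow's 5-tuple keeps all packets of a flow on the same next hop, avoiding reordering; the addresses and the CRC-32 hash are illustrative choices, not a vendor implementation:

```python
# ECMP next-hop selection via hash-threshold (per RFC 2992's analysis):
# split the hash space into equal regions, one per equal-cost next hop.
import zlib

NEXT_HOPS = ["10.0.1.1", "10.0.2.1", "10.0.3.1"]  # equal-cost paths to 21.0.9.0/24

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops=NEXT_HOPS):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    flow_hash = zlib.crc32(key)                 # 32-bit hash of the 5-tuple
    region = (2 ** 32) // len(next_hops)        # size of each hash region
    return next_hops[min(flow_hash // region, len(next_hops) - 1)]

print(ecmp_next_hop("192.0.2.10", "21.0.9.5", 6, 51515, 443))
```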
Shortest Path Bridging VID (SPBV) uses a virtual local area network identifier (VLAN ID) and provides a capability that is backward compatible with spanning tree technologies.
SPB MAC (SPBM, previously known as Provider Backbone Bridging, PBB) provides additional value by capitalizing on Provider Backbone Bridge (PBB) capabilities. SPB (the generic term for both) combines an Ethernet data path (either IEEE 802.1Q in the case of SPBV, or Provider Backbone Bridges (PBB) IEEE 802.1ah in the case of SPBM) with an IS-IS link-state control protocol running between shortest path bridges (over network-to-network interface (NNI) links). The link-state protocol is used to discover and advertise the network topology and to compute shortest path trees (SPTs) from all bridges in the SPT region.
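To make that last step concrete, here is a generic Dijkstra sketch of the shortest-path-tree computation that each bridge's IS-IS instance performs; the four-bridge topology and unit link costs are made-up examples, not from the source:

```python
# Compute a shortest path tree (SPT) rooted at one bridge, as SPB's IS-IS
# control plane does from every bridge in the region.
import heapq

def shortest_path_tree(graph, root):
    dist, parent = {root: 0}, {root: None}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return parent                         # each node's upstream hop toward the root

bridges = {"A": [("B", 1), ("C", 1)], "B": [("A", 1), ("D", 1)],
           "C": [("A", 1), ("D", 1)], "D": [("B", 1), ("C", 1)]}
print(shortest_path_tree(bridges, "A"))   # SPT rooted at bridge A
```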
Spine-Leaf Architecture
The core network is also connected to the
spine with Layer 3 using a dynamic routing
protocol with ECMP.
Redundant connections to each spine
switch are not required but highly
recommended.
This minimizes the risk of overloading the
links on the spine-leaf fabric.
This architecture provides a connection through the spine with a single hop between leaf switches, minimizing latency and bottlenecks.
The spine can be expanded or shrunk depending on the data throughput required.
Spine-Leaf Another Look
The spine-leaf architecture is optimized for the east-west traffic that is required by most software-defined solutions.
All interconnections are used and there is no need for STP to block loops. All east-
west traffic is equidistant, so traffic flow has deterministic latency.
Switch configuration is fixed, so no network changes are required for a dynamic server environment.
Disadvantages of the Spine-leaf Architecture
$$$$: the number of cables and the amount of network equipment required to scale the bandwidth grow quickly, since each leaf must be connected to every spine device. This can also require more expensive spine switches with high port counts.
The number of hosts that can be supported can be limited due to spine port counts
restricting the number of leaf switch connections.
Oversubscription of the spine-leaf connections can occur due to a limited number of spine
connections available on the leaf switches (typically 4 to 6). Generally, no more than a 5:1
oversubscription ratio between the leaf and spine is considered acceptable but this is highly
dependent upon the amount of traffic in your environment.
Oversubscription of the links out of the spine-leaf domain to the core should also be
considered. Since this architecture is optimized for east-west traffic as opposed to north-
south, oversubscriptions of 100:1 may be considered acceptable.
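As a sanity check on these ratios, a tiny sketch of the arithmetic; the port counts and speeds are assumed figures for illustration:

```python
# Oversubscription ratio = downstream (server-facing) capacity divided by
# upstream (spine-facing) capacity on a leaf switch.
def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 x 10G server ports with 4 x 40G spine uplinks -> 3.0:1,
# comfortably under the 5:1 guideline mentioned above.
print(f"{oversubscription(48, 10, 4, 40)}:1")
```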
Cloud and Data Center Layout and Architecture:
Increasing the Rate of Change
• The rate of change is increasing due to:
- Increasing processing power on the servers
- Increasing numbers of very fast multicore processors (Moore's Law), hence more VMs
- The need for faster switches with greater port densities located closer to servers
- I/O functions within the virtual environment becoming the chokepoint on performance, rather than processing speed
- Movement toward homogenization of switching speed in the DC, approaching 100 Gbps
- VM-aware switches: resulting in flat, virtualization-aware, blazing-fast DCs and clouds
Drawbacks of Hypervisor vSwitch
• It has no control plane to speak of and is unable to update the physical switch to support VM mobility
• To facilitate mobility, many VLANs must be configured on the server-facing ports of the physical switch
This causes:
• Unnecessary and indiscriminate flooding of broadcasts, multicasts, and unknown unicasts
• Increased uplink utilization and CPU cycles
• Dropped packets as a result
• If DCs have many small broadcast domains, VLAN configuration becomes an issue
Virtualized Aware Network Switches
• A VM-aware switch can learn the VM network topology via a discovery protocol, allowing it to:
interact with vSwitches and build a map of the virtualized network
• VM-aware switches provide visibility into vSwitches that are otherwise hidden from network monitoring tools
This allows network administrators:
• To measure and troubleshoot network traffic per VM
• To configure the network parameters of VMs
• To track VMs as they migrate within the network
• To reduce complexity by requiring no additional server software or changes to hypervisors or VMs
Virtualization: Lower level Details
VMware Product Design Review:
• Early 2000s: VMware Workstation and GSX freeware (VM Player)
• Since 2001, VMware has aimed at the server market with the GSX Server (type 2) and ESX Server (type 1) hypervisor products
• VM hypervisors provide a virtualized set of hardware for a video adapter, a network adapter, and a disk
• With pass-through drivers to support guest USB, serial, and parallel devices
• VMware VMs are highly portable and run on any HW
• Ability to: - pause an OS running on a VM guest
- move it to another physical location
- resume at the same point where operation was paused before moving
vSphere
• A type-1 bare-metal product, whereas VMware Server is type 2
• vCenter Server is part of the vSphere family, providing a single point of control and management for DCs
• The other components of vSphere:
--- Infrastructure Services: provide abstraction and aggregation, and allocate HW resources through the vCompute, vStorage, and vNetwork services
--- Application Services: high-availability and fault-tolerance services to provide availability, security, and scalability to apps
--- Clients: IT admin users access vSphere through the vSphere Client or web access through a browser
Components of vSphere
• ESX and ESXi. ESX has a built-in service console. ESXi comes with an installable
or an embedded version.
• Virtual Machine File System (VMFS): a high-performance cluster file system
• Virtual SMP: allows a single VM to use multiple CPUs simultaneously
• vMotion: enables live migration of running VM
• Distributed Resource Scheduler: allocates and balances computing capacity
across the HW resource for VMs
• Consolidated Backup: agent free backup of VMs
• Fault Tolerance: creation of a secondary copy of a VM. All action on primary also
goes to secondary
• vNetwork Distributed Switch: a distributed virtual switch (DVS) that spans
ESX/ESXi hosts to reduce network maintenance and increase capacity
• Pluggable Storage Architecture: a storage plug-in framework providing multipath load balancing to enhance storage performance
vMotion
• Migrates a running VM from one physical resource to another with zero downtime, continuous service availability, and complete transaction integrity
• Small caveat: it can only migrate VMs within the same data center
• vMotion supports moving virtual disks or configuration files of a
powered up VM to a new data store
• A frozen or suspended VM can be moved across DCs, but an active
VM can only be moved within a DC.
• vMotion and Storage vMotion (storage admin function) are two
different functions.
Distributed Power Management (DPM): consolidates workloads and powers down unused hosts during periods of low utilization
Distributed Resource Scheduler (DRS): allocates and balances computing capacity for VMs across the HW resources (see the components above)
VXLAN
• The same idea as VLANs, but with extensions to address common VLAN failings
• The VLAN ID was limited to 12 bits, giving only 4096 unique IDs
• IP/MPLS core networks hosted Layer 3 VPN services and used other techniques to alleviate the capacity problem
• VXLAN: 24 address bits, giving 16 million VXLAN IDs (enough for a large DC)
• VLAN uses STP for loop prevention, which undesirably shuts down many ports
• VXLAN tunnels L2 (Ethernet) frames across an L3 IP network by encapsulating MAC in UDP to tunnel through IP
• VXLAN prepends a VXLAN header to the L2 frame and then encapsulates it within a UDP-IP packet
• Allows massive L2 DC traffic to cross L3 networks in a way that is easy to manage and gives efficient data transfer
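A minimal sketch of the encapsulation framing described above, following the RFC 7348 header layout (8 bytes: a flags word plus a 24-bit VXLAN network identifier); the VNI and dummy inner frame are illustrative values, and real VTEPs do this in the data plane:

```python
# VXLAN encapsulation sketch: MAC-in-UDP. The outer Ethernet/IP/UDP headers
# (UDP destination port 4789) would be prepended by the sending VTEP's stack.
import struct

VXLAN_UDP_PORT = 4789

def vxlan_header(vni: int) -> bytes:
    # 8 bytes: flags word with the I bit set (VNI valid), then VNI << 8
    return struct.pack("!II", 0x08000000, vni << 8)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    return vxlan_header(vni) + inner_ethernet_frame

payload = encapsulate(b"\x00" * 64, vni=5001)     # dummy 64-byte inner frame
print(len(payload), "bytes; VNI =", struct.unpack("!I", payload[4:8])[0] >> 8)
```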
VXLAN Tunnel Endpoints
• VXLAN uses VXLAN Tunnel Endpoint (VTEP) devices to:
--- map tenants and end devices to VXLAN segments
--- perform encapsulation/de-capsulation
• Each VTEP has two interfaces:
--- a switch port on local LAN
--- an interface on the transport IP network
• The IP interface has a unique IP address that identifies the VTEP device on the transport IP network (called the infrastructure VLAN)
• A VTEP device discovers the remote VTEPs for its VXLAN segment and learns the remote-MAC-address-to-VTEP mappings
• VTEPs are used on physical switches to bridge together virtual network
and physical network segments
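A hedged sketch of the learning behavior just described: when a VTEP decapsulates a frame, it records which remote VTEP the inner source MAC sits behind, so later unicasts can be tunneled directly rather than flooded. Class and variable names are illustrative, not from any product:

```python
# Per-VTEP forwarding state: (VNI, MAC) -> remote VTEP IP, learned on decap.
class Vtep:
    def __init__(self, ip):
        self.ip = ip
        self.mac_table = {}                      # (vni, mac) -> remote VTEP IP

    def learn(self, vni, inner_src_mac, remote_vtep_ip):
        self.mac_table[(vni, inner_src_mac)] = remote_vtep_ip

    def lookup(self, vni, dst_mac):
        # unknown destinations are flooded to all VTEPs in the segment
        return self.mac_table.get((vni, dst_mac), "flood")

vtep = Vtep("10.1.1.10")
vtep.learn(5001, "aa:bb:cc:dd:ee:01", "10.1.1.20")
print(vtep.lookup(5001, "aa:bb:cc:dd:ee:01"))    # -> 10.1.1.20
```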
Ref: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-729383.html
Part 3: Network Functions Virtualization
• Server Virtualization through VMs
• Network Virtualization
--- VLAN --- MPLS --- VXLAN
• Network Functions Virtualization
--- Refers to the virtualization of L4 to L7 services such as load balancing
and firewalling
--- NFV came about because of inefficiencies created by virtualization
--- Such as the routing of traffic to and from network appliances at the edge of the DC
--- With VMs spinning up and being moved all over, the highly varied traffic flows become a problem for fixed appliances
--- NFV creates a virtual instance of a function such as a firewall, which can be spun up and placed where it is needed, just like a VM
Virtualizing Appliances
• Application communication is controlled at L4 through L7 of OSI model
• These layers also include information vital to many network functions
such as security, load balancing, and optimization
• Examples of L4 through L7 services include:
data loss prevention (DLP) systems
firewalls and intrusion detection systems (IDS)
load balancers
security information and event management (SIEM)
Secure Sockets Layer (SSL) accelerators
VPN concentrators
• The challenge is to keep up with the ever-increasing speed and flexibility of DCs/clouds
• And to keep up with wire speed in a constantly changing environment without dropping packets
Ref: https://www.theregister.co.uk/2013/12/09/feature_network_function_virtualisation/
Some L4 to L7 DC Tools and Services
• Firewall:
--- Allows authorized traffic and blocks unauthorized traffic.
--- Can be located at the edge of the DC or close to server.
--- Application-aware firewalls are in fashion, giving net admins better visibility and control over security
• SSL Offload:
--- SSL is a web-based encryption tool
--- Provides security of web-based data stream without user intervention
--- SSL offload service provides a termination point for encryption
Virtualizing Appliances
To keep up with the speed of networks:
--- most appliances have lived at the edge of the network
--- this allows fewer (bigger and faster) appliances to be located close to the WAN links, where they are more manageable
--- instead of appliances sitting idle as expensive redundant resources until there is an emergency, load balancers keep them at maximum efficiency and traffic performance
Some L4 to L7 DC Tools and Services: Load Balancer
• Directs and spreads incoming traffic to the DC/cloud
• Controls the flow of traffic as apps scale up and down
• Unlike the old days, when scalability meant growing over time
• Scaling is now an architectural consideration: an automatic function enacting real-time changes
• The classic L4-L7 design was for a static environment, but now VMs can be spun up or moved, both between sessions and during a live session
• For virtualization, L4-L7 services require stateful information about the apps
• This info needs to be shared across the network to ensure services continue when the state and location of apps change (see the sketch after this list)
• The level of virtualization skyrocketed, leaving everything else to catch up: cabling, switching, routing, addressing, and L4-L7 services
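A tiny illustration (not a product implementation) of why these services are stateful: the load balancer must preserve session-to-server affinity even as the server pool changes. The server names are hypothetical:

```python
# Round-robin load balancer with sticky sessions: the sessions map is exactly
# the per-app state that must survive VM moves and scaling events.
import itertools

class LoadBalancer:
    def __init__(self, servers):
        self.pool = itertools.cycle(servers)
        self.sessions = {}                        # client -> chosen backend

    def route(self, client_id):
        if client_id not in self.sessions:
            self.sessions[client_id] = next(self.pool)   # new session: round-robin
        return self.sessions[client_id]

lb = LoadBalancer(["vm-app-1", "vm-app-2", "vm-app-3"])
print(lb.route("client-A"), lb.route("client-B"), lb.route("client-A"))
```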
Virtualization L4 through L7
• Generic services such as Linux and HAProxy (a free load-balancing
tool for web-based apps)
• Commercial services from vendors such as Cisco and Riverbed
• By using a Nexus switch as the TOR/EOR switch, Cisco delivers Cloud Network Services (CNS) very close to the traffic source
• Virtual appliances running as VMs can use the same tools as the VM applications on the servers
• L4-L7 services become another set of virtual appliances that can scale up and down with the apps
• They can be turned on and off on generic HW, for example a load balancer spun up only when needed, to reduce HW cost and simplify operations