SDN 4
Network Virtualization
In SDN, network virtualization involves the creation of multiple virtual networks
or segments on top of a shared physical network infrastructure. Each virtual
network operates independently, with its own policies, addressing, and routing,
making it ideal for scenarios where isolation and segmentation are required.
Network virtualization offers several benefits, such as:
Isolation: Different virtual networks can be isolated from each other,
enhancing security and privacy.
Scalability: Virtual networks can be easily added or removed,
providing scalability to meet changing demands.
Optimized Resource Utilization: Physical resources are efficiently
used, as multiple virtual networks share the same infrastructure.
Service Chaining: Different services can be applied to specific virtual
networks as needed.
Server and Storage Virtualization
While SDN primarily focuses on network virtualization, the broader concept of
virtualization also extends to server and storage components. By virtualizing
servers and storage, organizations can build a complete virtualized data center,
where all infrastructure resources are abstracted and dynamically allocated based
on application needs.
The combination of network, server, and storage virtualization enables a fully
virtualized environment that is agile, adaptable, and cost-effective.
Key Components of Virtualization in SDN
1. SDN Controller
The SDN controller is the central intelligence of the SDN architecture. It acts as the
brain of the network, responsible for making decisions about network policies,
routing, and traffic management. The controller communicates with network
devices, such as switches and routers, to enforce these policies.
Common SDN controllers include OpenDaylight, ONOS, and Ryu. These controllers
are highly programmable and provide open APIs for communication with the
network devices.
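To make the controller's role concrete, the following is a minimal sketch of a Ryu
application (Ryu being one of the controllers named above) that simply logs OpenFlow
packet-in events; a real application would compute a forwarding decision at the marked
point, and the class and handler names here are illustrative.

# Minimal sketch of a Ryu controller application (illustrative).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                      # the OpenFlow PacketIn message
        in_port = msg.match['in_port']    # port the packet arrived on
        # A real application would choose an output port and install a flow here.
        self.logger.info("packet-in on switch %s, port %s",
                         msg.datapath.id, in_port)

Such an application is typically launched with the ryu-manager tool, after which the
controller waits for switches to connect and speaks OpenFlow to them.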
2. SDN Switches and Routers
In SDN, the network devices, such as switches and routers, are responsible for
forwarding traffic based on instructions from the SDN controller. These devices
support OpenFlow, a standard communication protocol used between the
controller and the network devices.
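Conceptually, each instruction the controller pushes to a switch is a flow entry that
pairs a match on packet header fields with a list of actions and a priority. The
structure below is purely illustrative plain Python, not the API of any particular
switch or controller.

# Illustrative structure of an OpenFlow flow entry (not a real API).
flow_entry = {
    "priority": 100,
    "match": {                          # fields the switch compares against
        "in_port": 1,
        "eth_type": 0x0800,             # IPv4
        "ipv4_dst": "10.0.0.20",
    },
    "actions": [
        {"type": "OUTPUT", "port": 2},  # forward matching packets out port 2
    ],
}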
3. Virtual Network Functions (VNFs)
Virtual Network Functions are software-based instances of network services that
can be deployed in virtualized environments. VNFs can include firewalls, load
balancers, and intrusion detection systems. They are essential for providing
services to virtual networks.
4. Hypervisors
Hypervisors are responsible for creating and managing virtual machines (VMs) on
physical servers. They play a crucial role in server virtualization, enabling
multiple VMs to run on a single physical server.
5. Network Overlays
Network overlays are logical networks created on top of the physical network
infrastructure. These overlays facilitate network virtualization by allowing
multiple virtual networks to coexist on the same physical network.
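Overlay protocols such as VXLAN realize this by carrying each frame inside an ordinary
UDP/IP packet together with a 24-bit virtual network identifier (VNI), so that traffic
from different virtual networks remains distinguishable on the shared underlay. The
sketch below builds the 8-byte VXLAN header defined in RFC 7348 using only the Python
standard library; it is an illustration, not a complete encapsulation.

import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348) for a 24-bit VNI."""
    flags = 0x08                        # I flag: the VNI field is valid
    word1 = flags << 24                 # flags followed by 24 reserved bits
    word2 = (vni & 0xFFFFFF) << 8       # 24-bit VNI followed by 8 reserved bits
    return struct.pack("!II", word1, word2)

# Two virtual networks share one underlay but are kept apart by their VNIs.
print(vxlan_header(100).hex())          # '0800000000006400'
print(vxlan_header(200).hex())          # '080000000000c800'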
6. APIs and Protocols
Open APIs and protocols, such as OpenFlow, NETCONF, and REST APIs, are used
for communication between the SDN controller, network devices, and virtualized
network functions.
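As a hedged illustration of the REST style used on the northbound interface, the
snippet below queries a controller over HTTP for its view of the topology. The host,
path, and credentials are placeholders; the actual resource paths differ between
OpenDaylight, ONOS, and other controllers.

import requests

# Hypothetical northbound REST call; URL and credentials are placeholders.
CONTROLLER = "http://controller.example.com:8181"

resp = requests.get(f"{CONTROLLER}/topology",
                    auth=("admin", "admin"), timeout=5)
resp.raise_for_status()
print(resp.json())                      # e.g., switches and the links between them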
Benefits of Virtualization in SDN
Virtualization in SDN offers a wide range of benefits, making it a powerful tool for
network administrators and organizations. Here are some of the key advantages:
1. Flexibility and Adaptability
One of the primary benefits of virtualization in SDN is the flexibility it provides.
Network administrators can easily adapt to changing network requirements by
creating or modifying virtual network instances. This adaptability is crucial in
dynamic environments where workloads and applications are constantly
evolving.
2. Resource Optimization
Virtualization allows for efficient resource utilization. By abstracting network
resources, organizations can make the most of their physical infrastructure. This
resource optimization leads to cost savings and improved overall network
performance.
3. Isolation and Segmentation
Network virtualization ensures isolation and segmentation. Different virtual
networks can coexist on the same physical infrastructure, each with its own policies
and configurations. This is particularly valuable for multi-tenant environments
and scenarios where security and privacy are paramount.
4. Service Chaining
Service chaining is simplified through virtualization. Different virtualized
network functions, such as firewalls, load balancers, and content filters, can be
easily applied to specific virtual network instances as needed. This allows for the
creation of custom service chains tailored to the requirements of individual
applications.
5. Scalability
Virtualization enables scalability by allowing organizations to create additional
virtual network instances as required. Whether accommodating new applications
or expanding to new geographic locations, virtualization ensures that network
resources can scale to meet demand.
6. Centralized Management
SDN’s centralized control plane, combined with virtualization, provides a single
point of management for the entire network. This simplifies network
administration, reduces complexity, and enhances visibility and control.
7. Cost Savings
Virtualization leads to cost savings in several ways. By optimizing resource
utilization and reducing the need for dedicated physical hardware, organizations
can lower their capital and operational expenses. Additionally, virtualized
environments are more energy-efficient, contributing to long-term cost
reductions.
Virtual LANs
The LAN switch is a store-and-forward packet-forwarding device used to
interconnect a number of end systems to form a LAN segment. The switch can
forward a media access control (MAC) frame from a source-attached device to a
destination-attached device. It can also broadcast a frame from a source-attached
device to all other attached devices. Multiple switches can be interconnected so
that multiple LAN segments form a larger LAN. A LAN switch can also connect to
a transmission link or a router or other network device to provide connectivity to
the Internet or other WANs.
A LAN Configuration
Traditionally, a LAN switch operated exclusively at the MAC level. Contemporary
LAN switches generally provide greater functionality, including multilayer
awareness (Layers 3, 4, application), quality of service (QoS) support, and
trunking for wide-area networking.
In this configuration, the three lower groups of devices might correspond to
different departments, which are physically separated, and the upper group could
correspond to a centralized server farm that is used by all the departments.
Consider the transmission of a single MAC frame from workstation X. Suppose
the destination MAC address in the frame is workstation Y. This frame is
transmitted from X to the local switch, which then directs the frame along the
link to Y. If X transmits a frame addressed to Z or W, its local switch forwards the
MAC frame through the appropriate switches to the intended destination. All
these are examples of unicast addressing, in which the destination address in
the MAC frame designates a unique destination. A MAC frame may also contain
a broadcast address, in which case the destination MAC address indicates that
all devices on the LAN should receive a copy of the frame. Thus, if X transmits a
frame with a broadcast destination address, all the devices on all the
switches receive a copy of the frame. The total collection of devices that receive
broadcast frames from each other is referred to as a broadcast domain.
One simple approach to improving efficiency is to physically partition the LAN
into separate broadcast domains. In the partitioned configuration, four separate
LANs are connected by a router. In this case, a broadcast frame from X is
transmitted only to the other
devices directly connected to the same switch as X. An IP packet from X intended
for Z is handled as follows. The IP layer at X determines that the next hop to the
destination is via router V. This information is handed down to X’s MAC layer,
which prepares a MAC frame with a destination MAC address of router V. When
V receives the frame, it strips off the MAC header, determines the destination,
and encapsulates the IP packet in a MAC frame with a destination MAC address of
Z. This frame is then sent to the appropriate Ethernet switch for delivery.
A Partitioned LAN
The drawback to this approach is that the traffic pattern may not correspond to
the physical distribution of devices. Further, as the networks expand, more
routers are needed to separate users into broadcast domains and provide
connectivity among broadcast domains. Routers introduce more latency than
switches because the router must process more of the packet to determine
destinations and route the data to the appropriate end node.
The Use of Virtual LANs
A more effective alternative is the creation of VLANs. In essence, a virtual local-
area network (VLAN) is a logical subgroup within a LAN that is created by
software rather than by physically moving and separating devices. It combines
user stations and network devices into a single broadcast domain regardless of
the physical LAN segment they are attached to and allows traffic to flow more
efficiently within populations of mutual interest. The VLAN logic is implemented
in LAN switches and functions at the MAC layer. Because the objective is to isolate
traffic within the VLAN, a router is required to link from one VLAN to another.
Routers can be implemented as separate devices, so that traffic from one VLAN to
another is directed to a router, or the router logic can be implemented as part of
the LAN switch.
A VLAN Configuration
VLANs enable any organization to be physically dispersed throughout the
company while maintaining its group identity. For example, accounting
personnel can be located on the shop floor, in the research and development
center, in the cash disbursement office, and in the corporate offices, while all
members reside on the same virtual network, sharing traffic only with each
other.
A transmission from workstation X to server Z is within the same VLAN, so it is
efficiently switched at the MAC level. A broadcast MAC frame from X is
transmitted to all devices in all portions of the same VLAN. But a transmission
from X to printer Y goes from one VLAN to another. Accordingly, router logic at
the IP level is required to move the IP packet from X to Y. This routing logic can be
integrated into the switch, so that the switch determines whether the incoming MAC
frame is destined for another device on the same VLAN. If not, the switch routes the
enclosed IP packet at the IP level.
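On trunk links between switches, VLAN membership travels with the frame as an IEEE
802.1Q tag, a 4-byte field inserted into the Ethernet header that carries a 12-bit
VLAN identifier. The standard-library sketch below builds that tag and is illustrative
only.

import struct

def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag: TPID plus (PCP, DEI, VID)."""
    tpid = 0x8100                                        # marks a tagged frame
    tci = ((priority & 0x7) << 13) | (vlan_id & 0x0FFF)  # DEI bit left at 0
    return struct.pack("!HH", tpid, tci)

# Frames for, say, VLAN 10 and VLAN 20 can share a trunk link while remaining
# in separate broadcast domains.
print(dot1q_tag(10).hex())              # '8100000a'
print(dot1q_tag(20).hex())              # '81000014'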
Defining VLANs
A VLAN is a broadcast domain consisting of a group of end stations, perhaps on
multiple physical LAN segments, that are not constrained by their physical
location and can communicate as if they were on a common LAN. Some means is
therefore needed for defining VLAN membership. A number of different
approaches have been used for defining membership, including the following:
Membership by port group: Each switch in the LAN configuration contains
two types of ports: a trunk port, which connects two switches; and an end port,
which connects the switch to an end system. A VLAN can be defined by assigning
each end port to a specific VLAN (see the sketch after this list). This approach
has the advantage that it is relatively easy to configure. The principal
disadvantage is that the network manager must reconfigure VLAN membership when an
end system moves from one port to another.
Membership by MAC address: Because MAC layer addresses are hardwired
into the workstation’s network interface card (NIC), VLANs based on MAC
addresses enable network managers to move a workstation to a different physical
location on the network and have that workstation automatically retain its VLAN
membership. The main problem with this method is that VLAN membership must
be assigned initially. In networks with thousands of users, this is no easy task.
Also, in environments where notebook PCs are used, the MAC address is
associated with the docking station and not with the notebook PC. Consequently,
when a notebook PC is moved to a different docking station, its VLAN
membership must be reconfigured.
Membership based on protocol information: VLAN membership can be
assigned based on IP address, transport protocol information, or even
higher-layer protocol information. This is a quite flexible approach, but it
does require switches to examine portions of the MAC frame above the
MAC layer, which may have a performance impact.
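To make the first two membership approaches concrete, the sketch below resolves a
frame's VLAN first by the end port it arrived on and, failing that, by its source MAC
address; every port number, address, and VLAN ID here is invented for illustration.

# Illustrative port-group and MAC-based VLAN membership lookups.
PORT_VLAN = {1: 10, 2: 10, 3: 20}              # end port -> VLAN ID
MAC_VLAN = {"00:1b:44:11:3a:b7": 20}           # MAC address -> VLAN ID
DEFAULT_VLAN = 1

def vlan_for_frame(in_port: int, src_mac: str) -> int:
    """Return the VLAN ID for a frame, preferring port-group membership."""
    if in_port in PORT_VLAN:
        return PORT_VLAN[in_port]
    return MAC_VLAN.get(src_mac, DEFAULT_VLAN)

print(vlan_for_frame(2, "00:1b:44:11:3a:b7"))  # 10: the port group wins
print(vlan_for_frame(9, "00:1b:44:11:3a:b7"))  # 20: falls back to the MAC table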
Software-Defined Storage
As mentioned, SDN and NFV are key elements of SDI. A third, equally important
element is the emerging technology known as software-defined storage (SDS). SDS
is a framework for managing a variety of storage systems in the data center that
are traditionally not unified. SDS provides the ability to manage these storage
assets to meet specific SLAs and to support a variety of applications. The
dominant physical architecture for SDS is based on distributed storage, with
storage devices distributed across a network.
Physical storage consists of a number of magnetic and solid-state disk arrays,
possibly from multiple vendors. Separate from this physical storage plane is a
unified set of control software. This must include adaptation logic that can
interface with a variety of vendor equipment and control and monitor that
equipment. On top of this adaptation layer are a number of basic storage services.
An application interface provides an abstracted view of data storage so that
applications need not be concerned with the location, attributes, or capacity of
individual storage systems. There is also an administrative interface to enable the
SDS administrator to manage the distributed storage suite.
Software-Defined Storage Architecture
SDS puts the emphasis on storage services instead of storage hardware. By
decoupling the storage control software from the hardware, a storage resource
can be used more efficiently and its administration simplified. For example, a
storage administrator can use SLAs when deciding how to provision storage
without needing to consider specific hardware attributes. In essence, resources
are aggregated into storage pools assigned to users. Data services are applied to
meet user or application requirements, and service levels are maintained. When
additional resources are needed by an application, the storage control software
automatically adds the resources. Conversely, resources are freed up when not in
use. The storage control software also automatically removes components and
systems that fail.
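As a hedged sketch of provisioning against an SLA rather than against specific
hardware, the code below picks a backing pool purely by requested capacity and latency
class; the pool names, sizes, and classes are invented for illustration.

from dataclasses import dataclass

@dataclass
class StoragePool:
    name: str
    free_gb: int
    latency_class: str                  # e.g., "ssd" or "hdd"

POOLS = [StoragePool("fast-tier", 500, "ssd"),
         StoragePool("bulk-tier", 8000, "hdd")]

def provision(size_gb: int, latency_class: str) -> StoragePool:
    """Pick the first pool that satisfies the SLA, ignoring vendor details."""
    for pool in POOLS:
        if pool.free_gb >= size_gb and pool.latency_class == latency_class:
            pool.free_gb -= size_gb     # reserve the capacity
            return pool
    raise RuntimeError("no pool satisfies the requested SLA")

vol = provision(200, "ssd")
print(vol.name, vol.free_gb)            # fast-tier 300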
SDI Architecture
A number of companies, including IBM, Cisco, Intel, and HP, either have
produced or are working on SDI offerings. There is no standardized specification
for SDI, and there are numerous differences among the various initiatives.
Nevertheless, the overall SDI architecture is quite similar among the different
efforts. A typical example is the SDI architecture defined by Intel. This
architecture is organized into three layers, as illustrated in Figure 9.17 and
described in the list that follows.
Intel’s 3-Layer SDI Model
Orchestration: A policy engine that allows higher level frameworks to manage
composition dynamically without interrupting ongoing operations.
Composition: A low-level layer of system software that continually and
automatically manages the pool of hardware resources.
Hardware pool: An abstracted pool of modular hardware resources.
The orchestration layer drives the architecture. This layer is concerned with
efficient configuration of resources while at the same time meeting application
service requirements. Intel’s initial focus appears to be on cloud providers, but
other application areas, such as big data and other data center applications, lend
themselves to the SDI approach. This layer continually monitors status data,
enabling it to solve service issues faster and to continually optimize hardware
resource assignment.
The composition layer is a control layer that manages VMs, storage, and network
assets. In this architecture, the VM is seen as a dynamic federation of compute,
storage, and network resources assembled to run an application instance.
Although current VM technology provides a level of flexibility and cost savings
over the use of nonvirtualized servers, there is still considerable inefficiency.
Suppliers tend to size systems to meet the maximum demand that a VM might
impose and hence overprovision so as to guarantee service. With software-
defined allocation of resources, more flexibility is available in creating,
provisioning, managing, moving, and retiring VMs. Similarly, SDS provides the
opportunity to use storage more efficiently.
Composition enables the logical disaggregation of compute, network, and storage
resources, so that each VM provides exactly what an application needs.
Supporting this at the level of the hardware is Intel’s rack scale architecture
(RSA). RSA exploits extremely high data rate optical connection components to
redesign the way computer rack systems are implemented. In an RSA design, the
speed of the silicon interconnects means that individual components (processors,
memory, storage, and network) no longer need to reside in the same box.
Individual racks can be dedicated to each of the component classes and scaled to
meet the demands of the data center.
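A hedged sketch of the composition idea follows: a VM description is satisfied by
drawing exactly the requested amounts from disaggregated compute, memory, storage, and
network pools, such as racks dedicated to each resource class. All names and
quantities are illustrative.

from dataclasses import dataclass

@dataclass
class VMSpec:
    vcpus: int
    memory_gb: int
    storage_gb: int
    network_gbps: int

# Disaggregated pools, e.g., one rack per resource class.
pools = {"vcpus": 256, "memory_gb": 1024, "storage_gb": 50000, "network_gbps": 400}

def compose(spec: VMSpec) -> dict:
    """Draw exactly the requested resources from each pool for one VM."""
    request = {"vcpus": spec.vcpus, "memory_gb": spec.memory_gb,
               "storage_gb": spec.storage_gb, "network_gbps": spec.network_gbps}
    for resource, amount in request.items():
        if pools[resource] < amount:
            raise RuntimeError(f"pool exhausted: {resource}")
        pools[resource] -= amount
    return request

print(compose(VMSpec(vcpus=4, memory_gb=16, storage_gb=200, network_gbps=10)))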
The resource pool consists of storage, network, and compute resources. From a
hardware perspective, these can be deployed in an RSA. From a control
perspective, SDS, SDN, and NFV technologies enable the management of these
resources within an overall SDI framework.