UNIT IV NETWORK FUNCTION VIRTUALIZATION

Network Virtualization – Virtual LANs – OpenFlow VLAN Support – NFV Concepts – Benefits and Requirements – Reference Architecture.
Network Virtualization
On the one hand, computers are being used to effect far-reaching improvements in
communication systems; on the other, communication systems are being used to
increase and extend the utility of computers.
Virtual networks enable the user to construct and manage networks independent of
the underlying physical network, with assurance of isolation from other virtual
networks using the same physical network.
They also enable network providers to use network resources efficiently to
support a wide range of user requirements.

Network Virtualization
In SDN, network virtualization involves the creation of multiple virtual networks
or segments on top of a shared physical network infrastructure. Each virtual
network operates independently, with its own policies, addressing, and routing,
making it ideal for scenarios where isolation and segmentation are required.
Network virtualization offers several benefits, such as:
 Isolation: Different virtual networks can be isolated from each other,
enhancing security and privacy.
 Scalability: Virtual networks can be easily added or removed,
providing scalability to meet changing demands.
 Optimized Resource Utilization: Physical resources are efficiently
used, as multiple virtual networks share the same infrastructure.
 Service Chaining: Different services can be applied to specific virtual
networks as needed.
Server and Storage Virtualization
While SDN primarily focuses on network virtualization, the broader concept of
virtualization also extends to server and storage components. By virtualizing
servers and storage, organizations can build a complete virtualized data center,
where all infrastructure resources are abstracted and dynamically allocated based
on application needs.
The combination of network, server, and storage virtualization enables a fully
virtualized environment that is agile, adaptable, and cost-effective.
Key Components of Virtualization in SDN
1. SDN Controller
The SDN controller is the central intelligence of the SDN architecture. It acts as the
brain of the network, responsible for making decisions about network policies,
routing, and traffic management. The controller communicates with network
devices, such as switches and routers, to enforce these policies.
Common SDN controllers include OpenDaylight, ONOS, and Ryu. These controllers
are highly programmable and provide open APIs for communication with the
network devices.
2. SDN Switches and Routers
In SDN, the network devices, such as switches and routers, are responsible for
forwarding traffic based on instructions from the SDN controller. These devices
support OpenFlow, a standard communication protocol used between the
controller and the network devices.
3. Virtual Network Functions (VNFs)
Virtual Network Functions are software-based instances of network services that
can be deployed in virtualized environments. VNFs can include firewalls, load
balancers, and intrusion detection systems. They are essential for providing
services to virtual networks.
4. Hypervisors
Hypervisors are responsible for creating and managing virtual machines (VMs) on
physical servers. They play a crucial role in server virtualization, enabling
multiple VMs to run on a single physical server.
5. Network Overlays
Network overlays are logical networks created on top of the physical network
infrastructure. These overlays facilitate network virtualization by allowing
multiple virtual networks to coexist on the same physical network.
6. APIs and Protocols
Open APIs and protocols, such as OpenFlow, NETCONF, and REST APIs, are used
for communication between the SDN controller, network devices, and virtualized
network functions.
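As a concrete illustration of the API side, the following Python sketch pushes one flow entry to an SDN controller over a generic REST interface. The endpoint URL, flow schema, and switch naming are illustrative assumptions rather than any particular controller's actual REST model (OpenDaylight, ONOS, and Ryu each define their own), so treat it as a minimal pattern, not a working client.

```python
import requests

# Hypothetical controller endpoint and flow model; real controllers
# (OpenDaylight, ONOS, Ryu) each define their own REST paths and schemas.
CONTROLLER_URL = "http://127.0.0.1:8181/api/flows"

flow_entry = {
    "switch": "openflow:1",                      # target datapath (assumed naming)
    "priority": 100,
    "match": {"in_port": 1, "vlan_id": 100},     # classify traffic from port 1, VLAN 100
    "actions": [{"type": "OUTPUT", "port": 2}],  # forward it out port 2
}

def push_flow(flow: dict) -> None:
    """POST one flow entry to the controller's (assumed) northbound REST endpoint."""
    resp = requests.post(CONTROLLER_URL, json=flow, timeout=5)
    resp.raise_for_status()
    print("Flow accepted with status", resp.status_code)

if __name__ == "__main__":
    push_flow(flow_entry)
```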
Benefits of Virtualization in SDN
Virtualization in SDN offers a wide range of benefits, making it a powerful tool for
network administrators and organizations. Here are some of the key advantages:
1. Flexibility and Adaptability
One of the primary benefits of virtualization in SDN is the flexibility it provides.
Network administrators can easily adapt to changing network requirements by
creating or modifying virtual network instances. This adaptability is crucial in
dynamic environments where workloads and applications are constantly
evolving.
2. Resource Optimization
Virtualization allows for efficient resource utilization. By abstracting network
resources, organizations can make the most of their physical infrastructure. This
resource optimization leads to cost savings and improved overall network
performance.
3. Isolation and Segmentation
Network virtualization ensures isolation and segmentation. Different virtual
networks can coexist on the same physical infrastructure, each with its own policies
and configurations. This is particularly valuable for multi-tenant environments
and scenarios where security and privacy are paramount.
4. Service Chaining
Service chaining is simplified through virtualization. Different virtualized
network functions, such as firewalls, load balancers, and content filters, can be
easily applied to specific virtual network instances as needed. This allows for the
creation of custom service chains tailored to the requirements of individual
applications.
5. Scalability
Virtualization enables scalability by allowing organizations to create additional
virtual network instances as required. Whether accommodating new applications
or expanding to new geographic locations, virtualization ensures that network
resources can scale to meet demand.
6. Centralized Management
SDN’s centralized control plane, combined with virtualization, provides a single
point of management for the entire network. This simplifies network
administration, reduces complexity, and enhances visibility and control.
7. Cost Savings
Virtualization leads to cost savings in several ways. By optimizing resource
utilization and reducing the need for dedicated physical hardware, organizations
can lower their capital and operational expenses. Additionally, virtualized
environments are more energy-efficient, contributing to long-term cost
reductions.

Virtual LANs
The LAN switch is a store-and-forward packet-forwarding device used to
interconnect a number of end systems to form a LAN segment. The switch can
forward a media access control (MAC) frame from a source-attached device to a
destination-attached device. It can also broadcast a frame from a source-attached
device to all other attached devices. Multiple switches can be interconnected so
that multiple LAN segments form a larger LAN. A LAN switch can also connect to
a transmission link or a router or other network device to provide connectivity to
the Internet or other WANs.
A LAN Configuration
Traditionally, a LAN switch operated exclusively at the MAC level. Contemporary
LAN switches generally provide greater functionality, including multilayer
awareness (Layers 3, 4, application), quality of service (QoS) support, and
trunking for wide-area networking.
In the configuration shown, the three lower groups might correspond to different departments, which are
physically separated, and the upper group could correspond to a centralized
server farm that is used by all the departments.
Consider the transmission of a single MAC frame from workstation X. Suppose
the destination MAC address in the frame is workstation Y. This frame is
transmitted from X to the local switch, which then directs the frame along the
link to Y. If X transmits a frame addressed to Z or W, its local switch forwards the
MAC frame through the appropriate switches to the intended destination. All
these are examples of unicast addressing, in which the destination address in
the MAC frame designates a unique destination. A MAC frame may also contain
a broadcast address, in which case the destination MAC address indicates that
all devices on the LAN should receive a copy of the frame. Thus, if X transmits a
frame with a broadcast destination address, all the devices on all the
switches receive a copy of the frame. The total collection of devices that receive
broadcast frames from each other is referred to as a broadcast domain.
One simple approach to improving efficiency is to physically partition the LAN
into separate broadcast domains. We now have four separate LANs connected by
a router. In this case, a broadcast frame from X is transmitted only to the other
devices directly connected to the same switch as X. An IP packet from X intended
for Z is handled as follows. The IP layer at X determines that the next hop to the
destination is via router V. This information is handed down to X’s MAC layer,
which prepares a MAC frame with a destination MAC address of router V. When
V receives the frame, it strips off the MAC header, determines the destination,
and encapsulates the IP packet in a MAC frame with a destination MAC address of
Z. This frame is then sent to the appropriate Ethernet switch for delivery.

A Partitioned LAN
The drawback to this approach is that the traffic pattern may not correspond to
the physical distribution of devices. Further, as the networks expand, more
routers are needed to separate users into broadcast domains and provide
connectivity among broadcast domains. Routers introduce more latency than
switches because the router must process more of the packet to determine
destinations and route the data to the appropriate end node.
The Use of Virtual LANs
A more effective alternative is the creation of VLANs. In essence, a virtual local-
area network (VLAN) is a logical subgroup within a LAN that is created by
software rather than by physically moving and separating devices. It combines
user stations and network devices into a single broadcast domain regardless of
the physical LAN segment they are attached to and allows traffic to flow more
efficiently within populations of mutual interest. The VLAN logic is implemented
in LAN switches and functions at the MAC layer. Because the objective is to isolate
traffic within the VLAN, a router is required to link from one VLAN to another.
Routers can be implemented as separate devices, so that traffic from one VLAN to
another is directed to a router, or the router logic can be implemented as part of
the LAN switch.
A VLAN Configuration
VLANs enable any organization to be physically dispersed throughout the
company while maintaining its group identity. For example, accounting
personnel can be located on the shop floor, in the research and development
center, in the cash disbursement office, and in the corporate offices, while all
members reside on the same virtual network, sharing traffic only with each
other.
A transmission from workstation X to server Z is within the same VLAN, so it is
efficiently switched at the MAC level. A broadcast MAC frame from X is
transmitted to all devices in all portions of the same VLAN. But a transmission
from X to printer Y goes from one VLAN to another. Accordingly, router logic at
the IP level is required to move the IP packet from X to Y. This routing logic can be
integrated into the switch, so that the switch determines whether the incoming MAC
frame is destined for another device on the same VLAN. If not, the switch routes the
enclosed IP packet at the IP level.
Defining VLANs
A VLAN is a broadcast domain consisting of a group of end stations, perhaps on
multiple physical LAN segments, that are not constrained by their physical
location and can communicate as if they were on a common LAN. Some means is
therefore needed for defining VLAN membership. A number of different
approaches have been used for defining membership, including the following:
Membership by port group: Each switch in the LAN configuration contains
two types of ports: a trunk port, which connects two switches; and an end port,
which connects the switch to an end system. A VLAN can be defined by assigning
each end port to a specific VLAN. This approach has the advantage that it is
relatively easy to configure. The principal disadvantage is that the network
manager must reconfigure VLAN membership when an end system moves from
one port to another.
Membership by MAC address: Because MAC layer addresses are hardwired
into the workstation’s network interface card (NIC), VLANs based on MAC
addresses enable network managers to move a workstation to a different physical
location on the network and have that workstation automatically retain its VLAN
membership. The main problem with this method is that VLAN membership must
be assigned initially. In networks with thousands of users, this is no easy task.
Also, in environments where notebook PCs are used, the MAC address is
associated with the docking station and not with the notebook PC. Consequently,
when a notebook PC is moved to a different docking station, its VLAN
membership must be reconfigured.
Membership based on protocol information: VLAN membership can be
assigned based on IP address, transport protocol information, or even
higher-layer protocol information. This is a quite flexible approach, but it
does require switches to examine portions of the MAC frame above the
MAC layer, which may have a performance impact.
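To see how the three membership approaches differ only in the lookup key, consider the short Python sketch below. All port names, MAC addresses, subnets, and VLAN IDs are made up; the intent is simply to show a switch consulting a port table first, then a MAC table, then protocol (IP) information when assigning an incoming frame to a VLAN.

```python
import ipaddress
from typing import Optional

# Illustrative membership tables -- all values here are hypothetical.
PORT_VLAN = {"eth1": 10, "eth2": 10, "eth3": 20}          # membership by port group
MAC_VLAN = {"00:11:22:33:44:55": 30}                      # membership by MAC address
SUBNET_VLAN = {ipaddress.ip_network("10.2.0.0/16"): 40}   # membership by protocol info

def assign_vlan(in_port: str, src_mac: str, src_ip: str) -> Optional[int]:
    """Return a VLAN ID for a frame, trying each membership scheme in turn."""
    if in_port in PORT_VLAN:                  # port-group membership
        return PORT_VLAN[in_port]
    if src_mac in MAC_VLAN:                   # MAC-address membership
        return MAC_VLAN[src_mac]
    addr = ipaddress.ip_address(src_ip)
    for subnet, vid in SUBNET_VLAN.items():   # protocol-information membership
        if addr in subnet:
            return vid
    return None                               # fall back to default/untagged handling

print(assign_vlan("eth3", "aa:bb:cc:dd:ee:ff", "10.2.1.7"))   # -> 20 (port group wins)
```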

Communicating VLAN Membership


Switches must have a way of understanding VLAN membership (that is, which
stations belong to which VLAN) when network traffic arrives from other
switches; otherwise, VLANs would be limited to a single switch. One possibility is
to configure the information manually or with some type of network
management signaling protocol, so that switches can associate incoming frames
with the appropriate VLAN.
A more common approach is frame tagging, in which a header is typically
inserted into each frame on interswitch trunks to uniquely identify to which
VLAN a particular MAC-layer frame belongs.
IEEE 802.1Q VLAN Standard
A VLAN is not limited to one switch but can span multiple interconnected
switches. In that case, traffic between switches must indicate VLAN membership.
This is accomplished in 802.1Q by inserting a tag with a VLAN identifier (VID)
with a value in the range from 1 to 4094. Each VLAN in a LAN configuration is
assigned a globally unique VID. By assigning the same VID to end systems on
many switches, one or more VLAN broadcast domains can be extended across a
large network.
The position and content of the 802.1Q tag, referred to as Tag Control Information
(TCI), are shown in the tagged frame format that follows. The presence of the two-octet
TCI field is indicated by inserting a Length/Type field in the 802.3 MAC frame with a
value of 8100 hex. The TCI consists of three subfields, as described in the list that follows.

Tagged IEEE 802.3 MAC Frame Format

User priority (3 bits): The priority level for this frame.


Canonical format indicator (1 bit): Is always set to 0 for Ethernet switches. CFI
is used for compatibility between Ethernet type networks and Token Ring type
networks. If a frame received at an Ethernet port has a CFI set to 1, that frame
should not be forwarded as it is to an untagged port.
VLAN identifier (12 bits): The identification of the VLAN. Of the 4096 possible
VIDs, a VID of 0 is used to identify that the TCI contains only a priority value, and
4095 (0xFFF) is reserved, so the maximum possible number of VLAN
configurations is 4094.
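To make the bit layout concrete, the following Python sketch packs and unpacks the 4-byte 802.1Q tag: the 0x8100 Length/Type value followed by the 16-bit TCI carrying priority, CFI, and VID. It is a minimal illustration of the field arithmetic, not part of any switch implementation.

```python
import struct

ETH_8021Q = 0x8100   # Length/Type value signalling a tagged frame

def build_tag(priority: int, cfi: int, vid: int) -> bytes:
    """Pack priority (3 bits), CFI (1 bit), and VID (12 bits) into a 4-byte tag."""
    assert 0 <= priority <= 7 and cfi in (0, 1) and 1 <= vid <= 4094
    tci = (priority << 13) | (cfi << 12) | vid
    return struct.pack("!HH", ETH_8021Q, tci)

def parse_tag(tag: bytes):
    """Recover (priority, cfi, vid) from a 4-byte 802.1Q tag."""
    ethertype, tci = struct.unpack("!HH", tag)
    assert ethertype == ETH_8021Q
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0x0FFF

tag = build_tag(priority=5, cfi=0, vid=100)
print(tag.hex())        # 8100a064
print(parse_tag(tag))   # (5, 0, 100)
```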
Consider a LAN configuration that includes three switches that implement 802.1Q and
one “legacy” switch that does not. In this case, all the end systems of the legacy device
must belong to the same VLAN. The MAC frames that traverse trunks between
VLAN-aware switches include the 802.1Q TCI tag. This tag is stripped off before a
frame is forwarded to a legacy switch. For end systems connected to a VLAN-
aware switch, the MAC frame may or may not include the TCI tag, depending on
the implementation. The important point is that the TCI tag is used between
VLAN-aware switches so that appropriate routing and frame handling can be
performed.

A VLAN Configuration with 802.1Q and Legacy Switches


Nested VLANs
The original 802.1Q specification allowed for a single VLAN tag field to be
inserted into an Ethernet MAC frame. More recent versions of the standard allow
for the insertion of two VLAN tag fields, allowing the definition of multiple sub-
VLANs within a single VLAN. This additional flexibility might be useful in some
complex configurations.
One possible approach is for the customer’s VLANs to be visible to the service
provider. In that case, the service provider could support a total of only 4094
VLANs for all its customers. Instead, the service provider inserts a second VLAN
tag into Ethernet frames. For example, consider two customers with multiple
sites, both of which use the same service provider network (SPN). Customer A has configured VLANs 1 to 100
at their sites, and similarly Customer B has configured VLANs 1 to 50 at their
sites. The tagged data frames belonging to the customers must be kept separate
while they traverse the service provider’s network. The customer’s data frame
can be identified and kept separate by associating another VLAN for that
customer’s traffic. This results in the tagged customer data frame being tagged
again with a VLAN tag, when it traverses the SPN. The additional tag is removed
at the edge of the SPN when the data enters the customer’s network again. This
stacked VLAN tagging is known as VLAN stacking or Q-in-Q.
Use of Stacked VLAN Tags
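Building on the tag-packing sketch above, the fragment below illustrates the stacking idea itself: the provider pushes its own outer tag in front of the customer's existing tag and strips it at the far edge of the SPN. The outer TPID of 0x88A8 follows the IEEE 802.1ad convention for provider (S-)tags; some deployments simply reuse 0x8100, so treat that constant as an assumption.

```python
import struct

S_TAG_TPID = 0x88A8   # 802.1ad provider tag TPID (assumption; some networks reuse 0x8100)

def push_provider_tag(customer_tagged: bytes, provider_vid: int) -> bytes:
    """Prepend a provider S-tag ahead of the customer's existing C-tag."""
    s_tci = provider_vid & 0x0FFF                  # priority 0, DEI 0
    return struct.pack("!HH", S_TAG_TPID, s_tci) + customer_tagged

def pop_provider_tag(stacked: bytes) -> bytes:
    """Remove the outer S-tag at the edge of the provider network."""
    tpid, _ = struct.unpack("!HH", stacked[:4])
    assert tpid == S_TAG_TPID
    return stacked[4:]

customer_tag = struct.pack("!HH", 0x8100, 100)     # C-tag: customer VLAN 100
stacked = push_provider_tag(customer_tag, provider_vid=2000)
print(stacked.hex())                               # 88a807d081000064
assert pop_provider_tag(stacked) == customer_tag
```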
OpenFlow VLAN Support
A traditional 802.1Q VLAN requires that the network switches have a complete
knowledge of the VLAN mapping. This knowledge may be manually configured or
acquired automatically. Another drawback is related to the choice of one of three
ways of defining group membership (port group, MAC address, protocol
information). The network administrator must evaluate the trade-offs according
to the type of network they wish to deploy and choose one of the possible
approaches. It would be difficult to deploy a more flexible definition of a VLAN or
even a custom definition (for example, use a combination of IP addresses and
ports) with traditional networking devices. Reconfiguring VLANs is also a
daunting task for administrators: Multiple switches and routers have to be
reconfigured whenever VMs are relocated.
SDN, and in particular OpenFlow, allows for much more flexible management
and control of VLANs. It should be clear how OpenFlow can set up flow table
entries for forwarding based on one or both VLAN tags, and how tags can be
added, modified, and removed.
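As a sketch of what such flow entries can look like in practice, the function below uses the Ryu controller's OpenFlow 1.3 parser to install one rule that tags untagged traffic arriving on an access port and a second rule that strips the tag before forwarding back out that port. The port numbers and VLAN ID are illustrative, and the snippet assumes it is called from inside a Ryu application that already holds a datapath handle.

```python
from ryu.lib.packet import ether_types

def install_vlan_rules(datapath, access_port, trunk_port, vid):
    """Install push-tag and pop-tag flow entries for one access VLAN (illustrative)."""
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser

    # Rule 1: untagged frames entering the access port are tagged and sent to the trunk.
    match = parser.OFPMatch(in_port=access_port)
    actions = [
        parser.OFPActionPushVlan(ether_types.ETH_TYPE_8021Q),
        parser.OFPActionSetField(vlan_vid=(ofp.OFPVID_PRESENT | vid)),
        parser.OFPActionOutput(trunk_port),
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=10,
                                        match=match, instructions=inst))

    # Rule 2: frames arriving on the trunk with this VLAN ID are untagged and delivered.
    match = parser.OFPMatch(in_port=trunk_port,
                            vlan_vid=(ofp.OFPVID_PRESENT | vid))
    actions = [parser.OFPActionPopVlan(), parser.OFPActionOutput(access_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=10,
                                        match=match, instructions=inst))
```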
Virtual Private Networks
A VPN is a private network that is configured within a public network (a carrier’s
network or the Internet) to take advantage of the economies of scale and
management facilities of large networks. VPNs are widely used by enterprises to
create WANs that span large geographic areas, to provide site-to-site connections
to branch offices, and to allow mobile users to dial up their company LANs. From
the point of view of the provider, the public network facility is shared by many
customers, with the traffic of each customer segregated from other traffic. Traffic
designated as VPN traffic can only go from a VPN source to a destination in the
same VPN. It is often the case that encryption and authentication facilities are
provided for the VPN. VPNs over the Internet or some other public network can
be used to interconnect sites, providing a cost savings over the use of a private
network and offloading the WAN management task to the public network
provider. That same public network provides an access path for telecommuters
and other mobile employees to log on to corporate systems from remote sites.
IPsec VPNs
Use of a shared network, such as the Internet or a public carrier network, as part
of an enterprise network architecture exposes corporate traffic to eavesdropping
and provides an entry point for unauthorized users. To counter this problem,
IPsec can be used to construct VPNs. The principal feature of IPsec that enables it
to support these varied applications is that it can encrypt/authenticate traffic at
the IP level. Therefore, all distributed applications, including remote logon,
client/server, e-mail, file transfer, web access, and so on, can be secured.
IPsec VPNs commonly use an IPsec option known as tunnel mode. Tunnel mode
makes use of the combined authentication/encryption function of IPsec called
Encapsulating Security Payload (ESP), together with a key exchange function.

An IPsec VPN Scenario


An organization maintains LANs at dispersed locations. Nonsecure IP traffic is
conducted on each LAN. For traffic offsite, through some sort of private or public
WAN, IPsec protocols are used. These protocols operate in networking devices,
such as a router or firewall, that connect each LAN to the outside world. The IPsec
networking device will typically encrypt all traffic going into the WAN, and
decrypt and authenticate traffic coming from the WAN; these operations are
transparent to workstations and servers on the LAN. Secure transmission is also
possible with individual users who connect to the WAN. Such user workstations
must implement the IPsec protocols to provide security.
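The essence of tunnel mode is that the entire original IP packet becomes the protected payload of a new outer packet exchanged between the two IPsec gateways. The sketch below is purely conceptual: `encrypt` and the field names are hypothetical placeholders for ESP's real cryptographic and header-formatting machinery, and the goal is only to show the ordering of outer addresses, ESP parameters, and the encapsulated packet.

```python
from dataclasses import dataclass

def encrypt(payload: bytes, key: bytes) -> bytes:
    """Placeholder for ESP encryption/authentication; real IPsec uses ciphers such as AES-GCM."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))   # NOT real crypto

@dataclass
class TunnelModePacket:
    outer_src: str      # IPsec gateway at the sending site
    outer_dst: str      # IPsec gateway at the receiving site
    spi: int            # ESP Security Parameter Index identifying the security association
    seq: int            # ESP sequence number (anti-replay)
    protected: bytes    # encrypted original IP packet (inner header + payload)

def encapsulate(original_ip_packet: bytes, gw_a: str, gw_b: str,
                spi: int, seq: int, key: bytes) -> TunnelModePacket:
    """Wrap an entire inner IP packet for transit between two IPsec gateways."""
    return TunnelModePacket(outer_src=gw_a, outer_dst=gw_b, spi=spi, seq=seq,
                            protected=encrypt(original_ip_packet, key))

pkt = encapsulate(b"...inner IP packet bytes...", "203.0.113.1", "198.51.100.1",
                  spi=0x1001, seq=1, key=b"demo-key")
print(pkt.outer_src, "->", pkt.outer_dst, "SPI", hex(pkt.spi))
```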
Using IPsec to construct a VPN has the following benefits:
When IPsec is implemented in a firewall or router, it provides strong security
that can be applied to all traffic crossing the perimeter. Traffic within a company
or workgroup does not incur the overhead of security-related processing.
IPsec in a firewall is resistant to bypass if all traffic from the outside must use IP
and the firewall is the only means of entrance from the Internet into the
organization.
IPsec is below the transport layer (TCP, UDP) and so is transparent to
applications. There is no need to change software on a user or server system
when IPsec is implemented in the firewall or router. Even if IPsec is implemented
in end systems, upper-layer software, including applications, is not affected.
IPsec can be transparent to end users. There is no need to train users on
security mechanisms, issue keying material on a per-user basis, or revoke keying
material when users leave the organization.
IPsec can provide security for individual users if needed. This is useful for
offsite workers and for setting up a secure virtual subnetwork within an
organization for sensitive applications.
MPLS VPNs
An alternative, and popular, means of constructing VPNs is using MPLS. This
discussion begins with a brief summary of MPLS, followed by an overview of
two of the most common approaches to VPN implementation using MPLS: the
Layer 2 VPN (L2VPN) and the Layer 3 VPN (L3VPN).
MPLS Overview
Multiprotocol Label Switching (MPLS) is a set of Internet Engineering Task Force
(IETF) specifications for including routing and traffic engineering information in
packets. MPLS comprises a number of interrelated protocols, which can be
referred to as the MPLS protocol suite. It can be used in IP networks but also in
other types of packet-switching networks. MPLS is used to ensure that all packets
in a particular flow take the same route over a backbone. Deployed by many
telecommunication companies and service providers, MPLS delivers the QoS
required to support real-time voice and video as well as service level agreements
(SLAs) that guarantee bandwidth.
In essence, MPLS is an efficient technique for forwarding and routing packets.
MPLS was designed with IP networks in mind, but the technology can be used
without IP to construct a network with any link-level protocol. In an ordinary
packet-switching network, packet switches must examine various fields within
the packet header to determine destination, route, QoS, and any traffic
management functions (such as discard or delay) that may be supported.
Similarly, in an IP-based network, routers examine a number of fields in the IP
header to determine these functions. In an MPLS network, a fixed-length label
encapsulates an IP packet or a data link frame. The MPLS label contains all the
information needed by an MPLS-enabled router to perform routing, delivery,
QoS, and traffic management functions. Unlike IP, MPLS is connection oriented.
An MPLS network or internet consists of a set of nodes, called label-switching
routers (LSRs) capable of switching and routing packets on the basis of a label
appended to each packet. Labels define a flow of packets between two endpoints
or, in the case of multicast, between a source endpoint and a multicast group of
destination endpoints. For each distinct flow, called a forwarding equivalence
class (FEC), a specific path through the network of LSRs is defined, called a label-
switched path (LSP). In essence, an FEC represents a group of packets that share
the same transport requirements. All packets in an FEC receive the same
treatment en route to the destination. These packets follow the same path and
receive the same QoS treatment at each hop. In contrast to forwarding in
ordinary IP networks, the assignment of a particular packet to a particular FEC is
done just once, when the packet enters the network of MPLS routers.
The list that follows, based on RFC 4026, Provider Provisioned Virtual Private
Network Terminology, defines key VPN terms used in the following discussion:
Attachment circuit (AC): In a Layer 2 VPN, the CE is attached to PE via an AC.
The AC may be a physical or logical link.
Customer edge (CE): A device or set of devices on the customer premises that
attaches to a provider-provisioned VPN.
Layer 2 VPN (L2VPN): An L2VPN interconnects sets of hosts and routers based
on Layer 2 addresses.
Layer 3 VPN (L3VPN): An L3VPN interconnects sets of hosts and routers based
on Layer 3 addresses.
Packet-switched network (PSN): A network through which the tunnels
supporting the VPN services are set up.
Provider edge (PE): A device or set of devices at the edge of the provider
network with the functionality that is needed to interface with the customer.
Tunnel: Connectivity through a PSN that is used to send traffic across the
network from one PE to another. The tunnel provides a means to transport
packets from one PE to another. Separation of one customer’s traffic from
another customer’s traffic is done based on tunnel multiplexers.
Tunnel multiplexer: An entity that is sent with the packets traversing the
tunnel to make it possible to decide which instance of a service a packet belongs
to and from which sender it was received. In an MPLS network, the tunnel
multiplexer is formatted as an MPLS label.
Virtual channel (VC): A VC is transported within a tunnel and identified by its
tunnel multiplexer. In an MPLS-enabled IP network, a VC label is an MPLS label
used to identify traffic within a tunnel that belongs to a particular VPN; that is,
the VC label is the tunnel multiplexer in networks that use MPLS labels.
Virtual private network (VPN): A generic term that covers the use of public or
private networks to create groups of users that are separated from other network
users and that may communicate among themselves as if they were on a private
network.
Layer 2 MPLS VPN
With a Layer 2 MPLS VPN, there is mutual transparency between the customer
network and the provider network. In effect, the customer requests a mesh of
unicast LSPs among customer switches that attach to the provider network. Each
LSP is viewed as a Layer 2 circuit by the customer. In an L2VPN, the provider’s
equipment forwards customer data based on information in the Layer 2 headers,
such as an Ethernet MAC address.
Customers connect to the provider by means of a Layer 2 device, such as an
Ethernet switch; the customer device that connects to the MPLS network is
generally referred to as a customer edge (CE) device. The MPLS edge router is
referred to as a provider edge (PE) device. The link between the CE and the PE
operates at the link layer (for example, Ethernet), and is referred to as an
attachment circuit (AC). The MPLS network then sets up an LSP that acts as a
tunnel between two edge routers (that is, two PEs) that attach to two networks of
the same enterprise. This tunnel can carry multiple virtual channels (VCs) using
label stacking. In a manner very similar to VLAN stacking, the use of multiple
MPLS labels enables the nesting of VCs.
MPLS Layer 2 VPN Concepts
When a link-layer frame arrives at the PE from the CE, the PE creates an MPLS
packet. The PE pushes a label that corresponds to the VC assigned to this frame.
Then the PE pushes a second label onto the label stack for this packet that
corresponds to the tunnel between the source and destination PE for this VC. The
packet is then routed across the LSP associated with this tunnel, using the top
label for label switched routing. At the destination edge, the destination PE pops
the tunnel label and examines the VC label. This tells the PE how to construct a
link-layer frame to deliver the payload across to the destination CE.
If the payload of the MPLS packet is an Ethernet frame, the destination PE needs
to be able to infer from the VC label the outgoing interface, and perhaps the VLAN
identifier. This process is unidirectional, and will be repeated independently for
bidirectional operation.
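The push/push and pop/pop sequence just described can be captured in a few lines. The sketch below models the label stack as a Python list (top of stack last), with made-up label values; it shows only the order of operations at the ingress and egress PEs, not the encoding of real MPLS headers.

```python
# Illustrative label values -- in practice these are distributed by signaling protocols.
TUNNEL_LABEL = 3005   # identifies the LSP (tunnel) between the two PEs
VC_LABEL = 42         # identifies this customer's virtual channel inside the tunnel

def ingress_pe(link_layer_frame: bytes):
    """Ingress PE: push the VC label, then the tunnel label on top."""
    label_stack = [VC_LABEL, TUNNEL_LABEL]   # top of stack is the last element
    return label_stack, link_layer_frame

def egress_pe(label_stack, payload: bytes) -> bytes:
    """Egress PE: pop the tunnel label, then use the VC label to pick the outgoing AC."""
    tunnel = label_stack.pop()               # tunnel label, used for label-switched routing
    vc = label_stack.pop()                   # VC label selects the attachment circuit
    print(f"popped tunnel label {tunnel}, delivering on VC {vc}")
    return payload                           # rebuilt link-layer frame toward the CE

stack, payload = ingress_pe(b"customer Ethernet frame")
egress_pe(stack, payload)
```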
The VCs in the tunnel can all belong to a single enterprise, or it is possible for a
single tunnel to manage VCs from multiple enterprises. In any case, from the
point of view of the customer, a VC is a dedicated link-layer point-to-point
channel. If multiple VCs connect a PE to a CE, this is logically the multiplexing of
multiple link-layer channels between the customer and the provider.
Layer 3 MPLS VPN
Whereas L2VPNs are constructed based on link-level addresses (for example,
MAC addresses), L3VPNs are based on VPN routes between CEs based on IP
addresses. As with an L2VPN, an MPLS-based L3VPN typically uses a stack of two
labels. The inner label identifies a specific VPN instance; the outer label identifies
a tunnel or route through the MPLS provider network. The tunnel label is
associated with an LSP and is used for label swapping and forwarding. At the
egress PE, the tunnel label is stripped off, and the VPN label is used to direct the
packet to the proper CE and to the proper logical flow at that CE.
For an L3VPN, the CE implements IP and is thus a router. The CE routers
advertise their networks to the provider. The provider network can then use an
enhanced version of Border Gateway Protocol (BGP) to establish VPNs between
CEs. Inside the provider network, MPLS tools are used to establish routes between
edge PEs supporting a VPN. Thus, the provider’s routers participate in the
customer’s L3 routing function.
Network Virtualization
This section looks at the important area of network virtualization. One immediate
difficulty is that this term is defined differently in a number of academic and
industry publications. So we begin by defining some terms, based on definitions
in ITU-T Y.3011 (Framework of Network Virtualization for Future Networks,
January 2012):
Physical resource: In the context of networking, physical resources include the
following: network devices, such as routers, switches, and firewalls; and
communication links, including wire and wireless. Hosts such as cloud servers
may also be considered as physical network resources.
Logical resource: An independently manageable partition of a physical
resource, which inherits the same characteristics as the physical resource and
whose capability is bound to the capability of the physical resource. An example
is a named partition of disk memory.
Virtual resource: An abstraction of a physical or logical resource, which may
have different characteristics from the physical or logical resource and whose
capability may be not bound to the capability of the physical or logical resource.
As examples, virtual machines (VMs) may be moved dynamically, VPN topologies
can be altered dynamically, and access control restrictions may be imposed on a
resource.
Virtual network: A network composed of multiple virtual resources (that is, a
collection of virtual nodes and virtual links) that is logically isolated from other
virtual networks. Y.3011 refers to a virtual network as a logically isolated
network partition (LINP).
Network virtualization (NV): A technology that enables the creation of
logically isolated virtual networks over shared physical networks so
that heterogeneous collections of multiple virtual networks can simultaneously
coexist over the shared physical networks. This includes aggregating multiple
resources in a provider network so that they appear as a single resource.
NV is a far broader concept than VPNs, which only provide traffic isolation, or
VLANs, which provide a basic form of topology management. NV implies full
administrative control for customizing virtual networks both in terms of the
physical resources used and the functionalities provided by the virtual networks.
The virtual network presents an abstracted network view whose virtual
resources provide users with services similar to those provided by physical
networks. Because the virtual resources are software defined, the manager or
administrator of a virtual network potentially has a great deal of flexibility in
altering topologies, moving resources, and changing the properties and service of
various resources. In addition, virtual network users can include not only users
of services or applications but also service providers. For example, a cloud
service provider can quickly add new services or expanded coverage by leasing
virtual networks as needed.
A Simplified Example
To get some feel for the concepts involved in network virtualization, consider an
example, adapted from the ebook Software Defined Networking—A Definitive Guide
[KUMA13], of a network consisting of three servers and five switches. One server is a
trusted platform with a secure operating system that hosts firewall software. All
the servers run a hypervisor (virtual machine monitor) enabling them to support
multiple VMs. The resources for one enterprise (Enterprise 1) are hosted across
the servers and consist of three VMs (VM1a, VM1b, and VM1c) on physical server
1, two VMs (VM1d and VM1e) on physical server 2, and firewall 1 on physical
server 3. The virtual switches are used to set up any desired connectivity between
the VMs across the servers through the physical switches. The physical switches
provide the connectivity between the physical servers. Each enterprise network
is layered as a separate virtual network on top of the physical network. Thus, the
virtual network for Enterprise 1 is indicated by a dashed circle and labeled VN1. The
labeled circle VN2 indicates another virtual network.

Simple Network with Virtual Machines Assigned to Different Administrative Groups
At the bottom are the physical resources, managed across one or more
administrative domains. The servers are logically partitioned to support multiple
VMs. This includes, at least, a partitioning of memory, but may also include a
partitioning of the pool of I/O and communications ports and even of the
processors or cores of the server. There is then an abstraction function that maps
these physical and logical resources into virtual resources. This type of
abstraction could be enabled by SDN and NFV functionality, and is managed by
software at the virtual resource level.

Levels of Abstraction for Network Virtualization


Another abstraction function is used to create network views organized as
distinct virtual networks. Each virtual network is managed by a separate virtual
network management function.
Because resources are defined in software, network virtualization provides a
great deal of flexibility, as this example suggests. The manager of virtual network
1 may specify certain QoS requirements for traffic between VMs attached to
switch 1 and VMs attached to switch 2, and may specify firewall rules for traffic
external to the virtual network. These specifications must ultimately be translated
into forwarding rules configured on the physical switches and filtering rules on
the physical firewall. Because it is all done in software and without the need for
the virtual network manager to understand the physical topology and physical
suite of servers, changes are easily implemented.
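A toy version of that translation step is sketched below. The VM placement, policy format, and rule format are all invented for illustration; the point is only that a specification written purely in virtual-network terms (a QoS guarantee between two groups of VMs plus a firewall rule) expands mechanically into per-device rules once the virtualization layer knows where the VMs actually reside.

```python
# Hypothetical placement of VMs on physical switches, known to the virtualization layer.
VM_LOCATION = {"VM1a": "switch1", "VM1b": "switch1", "VM1d": "switch2", "VM1e": "switch2"}

# Policy expressed purely in virtual-network terms by the manager of VN1.
VN1_POLICY = {
    "qos": {"between": (["VM1a", "VM1b"], ["VM1d", "VM1e"]), "min_mbps": 200},
    "firewall": [{"action": "deny", "dst_port": 23}],   # e.g. block inbound telnet
}

def compile_policy(policy: dict) -> dict:
    """Expand a virtual-network policy into per-physical-device rules (illustrative)."""
    rules = {}
    group_a, group_b = policy["qos"]["between"]
    for src in group_a:
        for dst in group_b:
            switch = VM_LOCATION[src]
            rules.setdefault(switch, []).append(
                {"match": {"src": src, "dst": dst},
                 "queue_min_mbps": policy["qos"]["min_mbps"]})
    rules.setdefault("physical-firewall", []).extend(policy["firewall"])
    return rules

for device, device_rules in compile_policy(VN1_POLICY).items():
    print(device, device_rules)
```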
Network Virtualization Architecture
An excellent overview of the many elements that contribute to an NV
environment is provided by the conceptual architecture defined in Y.3011. The
architecture depicts NV as consisting of four levels:

Conceptual Architecture of Network Virtualization (Y.3011)


Physical resources
Virtual resources
Virtual networks
Services
A single physical resource can be shared among multiple virtual resources. In
turn, each LINP (virtual network) consists of multiple virtual resources and
provides a set of services to users.
Various management and control functions are performed at each level, not
necessarily by the same provider. There are management functions associated
with each physical network and its associated resources. A virtual resource
manager (VRM) manages a pool of virtual resources created from the physical
resources. A VRM interacts with physical network managers (PNMs) to obtain
resource commitments. The VRM constructs LINPs, and an LINP manager is
allocated to each LINP.
Physical resource management manages physical resources and may create
multiple logical resources that have the same characteristics as physical
resources. Physical and logical resources are available to the virtual resource
management at the interface between physical and virtual layers. The virtual
resource management abstracts from the physical and logical resources to create
virtual resources. It can also construct a virtual resource that combines other
virtual resources. Virtual network management can build VNs on multiple virtual
resources that are provided by the virtual resource management. Once a VN is
created, the VN management starts to manage its own VN.

FIGURE 9.12 Network Virtualization Resource Hierarchical Model


Benefits of Network Virtualization
A 2014 survey [SDNC14] by SDxCentral of 220 organizations, including network
service providers, small and medium-size businesses (SMB), large enterprises,
and cloud service providers, reported the following benefits of NV (see Figure
9.13):

FIGURE 9.13 Reported Benefits of Network Virtualization


Flexibility: NV enables the network to be quickly moved, provisioned, and
scaled to meet the ever-changing needs of virtualized compute and storage
infrastructures.
Operational cost savings: Virtualization of the infrastructure streamlines the
operational processes and equipment used to manage the network. Similarly,
base software can be unified and more easily supported, with a single unified
infrastructure to manage services. This unified infrastructure also allows for
automation and orchestration within and between different services and
components. From a single set of management components, administrators can
coordinate resource availability and automate the procedures necessary to make
services available, reducing the need for human operators to manage the process
and reducing the potential for error.
Agility: Modifications to the network’s topology or how traffic is handled can be
tried in different ways, without needing to modify the existing physical networks.
Scalability: A virtual network can be rapidly scaled to respond to shifting
demands by adding or removing physical resources from the pool of available
resources.
Capital cost savings: A virtualized deployment can reduce the number of
devices needed, providing capital as well as operational costs savings.
Rapid service provisioning/time to market: Physical resources can be
allocated to virtual networks on demand, so that within an enterprise resources
can be quickly shifted as demand by different users or applications changes.
From a user perspective, resources can be acquired and released to minimize
utilization demand on the system. New services require minimal training and can
be deployed with minimal disruption to the network infrastructure.
Equipment consolidation: NV enables the more efficient use of network
resources, thus allowing for consolidating equipment purchases to fewer, more
off-the-shelf products.
OpenDaylight’s Virtual Tenant Network
Virtual Tenant Network (VTN) is an OpenDaylight (ODL) plug-in developed by
NEC. It provides multitenant virtual networks on an SDN, using VLAN technology.
The VTN abstraction functionality enables users to design and deploy a virtual
network without knowing the physical network topology or bandwidth
restrictions. VTN allows the users to define the network with a look and feel of a
conventional L2/L3 (LAN switch/IP router) network. Once the network is designed
on VTN, it is automatically mapped onto the underlying physical network, and
then configured on the individual switches leveraging the SDN control protocol.
VTN Manager: An ODL controller plug-in that interacts with other modules to
implement the components of the VTN model. It also provides a REST interface to
configure VTN components in the controller.
VTN Coordinator: An external application that provides a REST interface to
users for VTN virtualization. It interacts with the VTN Manager plug-in to implement
the user configuration. It is also capable of orchestrating multiple controllers.
A virtual network is constructed using virtual nodes (vBridge, vRouter) and
virtual interfaces and links. By connecting the virtual interfaces created on virtual
nodes via virtual links, it is possible to configure a network that has L2 and L3
transfer functions.

Virtual Tenant Network Elements


VRT is defined as the vRouter, and BR1 and BR2 are defined as vBridges.
Interfaces of the vRouter and vBridges are connected using vLinks. Once a user of
VTN Manager has defined a virtual network, the VTN Coordinator maps physical
network resources to the constructed virtual network. Mapping identifies which
virtual network each packet transmitted or received by an OpenFlow switch
belongs to, as well as which interface in the OpenFlow switch transmits or
receives that packet. There are two mapping methods:
Port mapping: This mapping method is used to map a physical port as an
interface of virtual node (vBridge/vTerminal). Port-map is enabled when the
network topology is known in advance.
VLAN mapping: This mapping method is used to map the VLAN ID of the VLAN tag in
an incoming Layer 2 frame to a vBridge. This mapping is used when the affiliated
network and its VLAN tag are known. Using this mapping method can reduce the
number of configuration commands required.
VTN Mapping Example
An interface of BR1 is mapped to a port on OpenFlow switch SW1. Packets
received from that SW1 port are regarded as those from the corresponding
interface of BR1. The interface if1 of vBridge (BR1) is mapped to the port GBE0/1
of switch1 using port-map. Packets received or transmitted by GBE0/1 of switch1
are considered as those from or to the interface if1 of vBridge. vBridge BR2 is
mapped to VLAN 200 using vlan-map. Packets having the VLAN ID of 200
received or transmitted by the port of any switch in the network are mapped to
the vBridge BR2.
VTN provides the capability to define and manage traffic flows across a virtual
network. As with OpenFlow, flows are defined based on the value of various
fields in packets. A flow can be defined using one or a combination of the
following fields (a simple matching sketch follows the list):
Source MAC Address
Destination MAC Address
Ethernet Type
VLAN Priority
Source IP Address
Destination IP Address
IP Version
Differentiated Services Codepoint (DSCP)
TCP/UDP Source Port
TCP/UDP Destination Port
ICMP Type
ICMP Code
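The field list above amounts to a match condition that is evaluated against each packet before a flow filter's action is applied. The short sketch below represents such a condition as a plain Python dictionary and checks it against a packet's header fields; the field names and sample values are illustrative and are not the VTN Coordinator's actual configuration schema.

```python
# Illustrative flow condition built from the kinds of fields listed above.
FLOW_CONDITION = {
    "ethernet_type": 0x0800,     # IPv4
    "vlan_priority": 3,
    "dst_ip": "10.0.2.15",
    "ip_dscp": 26,
    "tcp_dst_port": 443,
}

def matches(condition: dict, packet_fields: dict) -> bool:
    """A packet matches if every field named in the condition carries the same value."""
    return all(packet_fields.get(field) == value for field, value in condition.items())

sample_packet = {
    "ethernet_type": 0x0800, "vlan_priority": 3, "src_ip": "10.0.1.7",
    "dst_ip": "10.0.2.15", "ip_dscp": 26, "tcp_dst_port": 443,
}
print(matches(FLOW_CONDITION, sample_packet))   # True -> the flow filter's action applies
```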

Virtual Tenant Flow Filter Actions


The VTN Manager is part of the OpenDaylight controller and uses base network
service functions to learn the topology and statistics of the underlying network. A
user or application creates virtual networks and specifies network behavior to
the VTN Coordinator across a web or REST interface. The VTN Coordinator
translates these commands into detailed instructions to the VTN Manager, which
in turn uses OpenFlow to map virtual networks to the physical network
infrastructure.
OpenDaylight VTN Architecture
Software-Defined Infrastructure
Recent years have seen explosive growth in the complexity of data centers, cloud
computing facilities, and network infrastructures for enterprises and carriers. An
emerging design philosophy to address the challenges of this complexity is
software-defined infrastructure (SDI). With SDI, a data center or network
infrastructure can autoconfigure itself at run time based on application/business
requirements and operator constraints. Automation in SDIs enables
infrastructure operators to achieve higher conformance to SLAs, avoid
overprovisioning, and automate security and other network-related functions.
Another key characteristic of SDI is that it is highly application driven.
Applications tend to change much more slowly than the ecosystem (hardware,
system software, networks) that supports them. Individuals and enterprises stay
with chosen applications for long periods of time, whereas they replace the
hardware and other infrastructure elements at a fast pace. So, providers are at an
advantage if the entire infrastructure is software defined and thus able to cope
with rapid changes in infrastructure technology.
SDN and NFV are the key enabling technologies for SDI. SDN provides network
control systems with the flexibility to steer and provision network resources
dynamically. NFV virtualizes network functions as prepackaged software services
that are easily deployable in a cloud or network infrastructure environment. So
instead of hard-coding a service deployment and its network services, these can
now be dynamically provisioned; traffic is then steered through the software
services, significantly increasing the agility with which these are provisioned.
Although SDN and NFV are necessary components of an SDI, they do not by
themselves provide the intelligence that can generate or recommend the required
configuration that can then be automatically implemented. Therefore, we can
think of SDN and NFV as providing a platform for deploying SDI-enabling
software.
A recent paper by Pott [POTT14] lists the following as some of the key features of
an SDI offering:
Distributed storage resources with fully inline data deduplication and
compression.
Fully automated and integrated backups that are application aware, with
autoconfiguring and autotesting. This new generation will be as close to “zero
touch” as is possible.
Fully automated and integrated disaster recovery that is application aware,
with autoconfiguring and autotesting. This new generation will be as close to
“zero touch” as is possible.
Fully integrated hybrid cloud computing, with resources in the public cloud
consumed as easily as local. The ability to move between multiple cloud
providers, based on cost, data sovereignty requirements, or latency/locality
needs. The providers that want to win the hybrid cloud portion of the exercise
will build in awareness of privacy and security and allow administrators to easily
select not only geolocal providers, but those known to have zero foreign legal
attack surface, and they will clearly differentiate between them.
WAN optimization technology.
A hypervisor or hypervisor/container hybrid running on the metal.
Management software to allow administrators to manage the hardware and the
hypervisor.
Adaptive monitoring software that will detect new applications and operating
systems and automatically monitor them properly. Adaptive monitoring will not
require manual configuration.
Predictive analytics software that will determine when resources will exceed
capacity, when hardware is likely to fail, or when licensing can no longer be
worked around.
Automation and load maximization software that will make sure the hardware
and software components are used to their maximum capacity, given the existing
hardware and existing licensing bounds.
Orchestration software that will not only spin up groups of applications on
demand or as needed, but will provide an “App Store”-like experience for
selecting new workloads and getting them up and running on your local
infrastructure in just a couple of clicks.
Autobursting, as an adjunct of orchestration, will intelligently decide between
hot-adding capacity to legacy workloads (CPU, RAM, and so on) or spinning up
new instances of modern burstable applications to handle load. It would, of
course, scale them back down when possible.
Hybrid identity services that work across private infrastructure and public
cloud spaces. They will not only manage identity but also provide complete user
experience management solutions that work anywhere.
Complete software-defined networking stack, including Layer 2 extension
between data centers as well as the public and private cloud. This means that
spinning up a workload will automatically configure networking, firewalls,
intrusion detection, application layer gateways, mirroring, load balancing,
content distribution network registration, certificates, and so forth.
Chaos creation in the form of randomized automated testing for failure of
all nonlegacy workloads and infrastructure elements to ensure that the
network still meets requirements.

Software-Defined Storage
As mentioned, SDN and NFV are key elements of SDI. A third, equally important
element is the emerging technology known as software-defined storage (SDS). SDS
is a framework for managing a variety of storage systems in the data center that
are traditionally not unified. SDS provides the ability to manage these storage
assets to meet specific SLAs and to support a variety of applications. The
dominant physical architecture for SDS is based on distributed storage, with
storage devices distributed across a network.
Physical storage consists of a number of magnetic and solid-state disk arrays,
possibly from multiple vendors. Separate from this physical storage plane is a
unified set of control software. This must include adaptation logic that can
interface with a variety of vendor equipment and controlling and monitoring that
equipment. On top of this adaptation layer are a number of basic storage services.
An application interface provides an abstracted view of data storage so that
applications need not be concerned with the location, attributes, or capacity of
individual storage systems. There is also an administrative interface to enable the
SDS administrator to manage the distributed storage suite.
Software-Defined Storage Architecture
SDS puts the emphasis on storage services instead of storage hardware. By
decoupling the storage control software from the hardware, a storage resource
can be used more efficiently and its administration simplified. For example, a
storage administrator can use SLAs when deciding how to provision storage
without needing to consider specific hardware attributes. In essence, resources
are aggregated into storage pools assigned to users. Data services are applied to
meet user or application requirements, and service levels are maintained. When
additional resources are needed by an application, the storage control software
automatically adds the resources. Conversely, resources are freed up when not in
use. The storage control software also automatically removes components and
systems that fail.
SDI Architecture
A number of companies, including IBM, Cisco, Intel, and HP, either have
produced or are working on SDI offerings. There is no standardized specification
for SDI, and there are numerous differences in the different initiatives.
Nevertheless, the overall SDI architecture is quite similar among the different
efforts. A typical example is the SDI architecture defined by Intel. This
architecture is organized into three layers, as illustrated in Figure 9.17 and
described in the list that follows.
Intel’s 3-Layer SDI Model
Orchestration: A policy engine that allows higher level frameworks to manage
composition dynamically without interrupting ongoing operations.
Composition: A low-level layer of system software that continually and
automatically manages the pool of hardware resources.
Hardware pool: An abstracted pool of modular hardware resources.
The orchestration layer drives the architecture. This layer is concerned with
efficient configuration of resources while at the same time meeting application
service requirements. Intel’s initial focus appears to be on cloud providers, but
other application areas, such as big data and other data center applications, lend
themselves to the SDI approach. This layer continually monitors status data,
enabling it to solve service issues faster and to continually optimize hardware
resource assignment.
The composition layer is a control layer that manages VMs, storage, and network
assets. In this architecture, the VM is seen as a dynamic federation of compute,
storage, and network resources assembled to run an application instance.
Although current VM technology provides a level of flexibility and cost savings
over the use of nonvirtualized servers, there is still considerable inefficiency.
Suppliers tend to size systems to meet the maximum demand that a VM might
impose and hence overprovision so as to guarantee service. With software-
defined allocation of resources, more flexibility is available in creating,
provisioning, managing, moving, and retiring VMs. Similarly, SDS provides the
opportunity to use storage more efficiently.
Composition enables the logical disaggregation of compute, network, and storage
resources, so that each VM provides exactly what an application needs.
Supporting this at the level of the hardware is Intel’s rack scale architecture
(RSA). RSA exploits extremely high data rate optical connection components to
redesign the way computer rack systems are implemented. In an RSA design, the
speed of the silicon interconnects means that individual components (processors,
memory, storage, and network) no longer need to reside in the same box.
Individual racks can be dedicated to each of the component classes and scaled to
meet the demands of the data center.
The resource pool consists of storage, network, and compute resources. From a
hardware perspective, these can be deployed in an RSA. From a control
perspective, SDS, SDN, and NFV technologies enable the management of these
resources with an overall SDI framework.

Intel’s SDI Architecture


NFV Concepts
NFV is a significant departure from traditional approaches to the design,
deployment, and management of networking services. NFV decouples network
functions, such as Network Address Translation (NAT), firewalling, intrusion
detection, Domain Name Service (DNS), and caching, from proprietary hardware
appliances so that they can run in software on VMs. NFV builds on standard VM
technologies, extending their use into the networking domain.
Virtual machine technology enables migration of dedicated application and
database servers to commercial off-the-shelf (COTS) x86 servers. The same
technology can be applied to network-based devices, including the following:
Network function devices: Such as switches, routers, network access points,
customer premises equipment (CPE), and deep packet inspectors (for deep
packet inspection).
Network-related compute devices: Such as firewalls, intrusion detection
systems, and network management systems.
Network-attached storage: File and database servers attached to the network.
In traditional networks, all devices are deployed on proprietary/closed platforms.
All network elements are enclosed boxes, and hardware cannot be shared. Each
device requires additional hardware for increased capacity, but this hardware is
idle when the system is running below capacity. With NFV, however, network
elements are independent applications that are flexibly deployed on a unified
platform comprising standard servers, storage devices, and switches. In this way,
software and hardware are decoupled, and capacity for each application is
increased or decreased by adding or reducing virtual resources.
Vision for Network Functions Virtualization
By broad consensus, the Network Functions Virtualization Industry Standards
Group (ISG NFV), created as part of the European Telecommunications Standards
Institute (ETSI), has the lead and indeed almost the sole role in creating NFV
standards.
NFV ISG

ISG NFV Specifications


NFV Terminology
Simple Example of the Use of NFV
This section considers a simple example from the NFV Architectural Framework
document, which illustrates a physical realization of a network service. At a top level, the network
service consists of endpoints connected by a forwarding graph of network
functional blocks, called network functions (NFs). Examples of NFs are firewalls,
load balancers, and wireless network access points. In the Architectural
Framework, NFs are viewed as distinct physical nodes. The endpoints are beyond
the scope of the NFV specifications and include all customer-owned devices. So, in
the figure, endpoint A could be a smartphone and endpoint B a content delivery
network (CDN) server.
A Simple NFV Configuration Example
The interconnections among the NFs and endpoints are depicted by dashed lines,
representing logical links. These logical links are supported by physical paths
through infrastructure networks (wired or wireless).
VNF-1 provides network access for endpoint A, and VNF-3 provides network
access for B. The figure also depicts the case of a nested VNF forwarding graph
(VNF-FG-2) constructed from other VNFs (that is, VNF-2A, VNF-2B, and VNF-2C).
All of these VNFs run as VMs on physical machines, called points of presence
(PoPs). This configuration illustrates several important points. First, VNF-FG-2
consists of three VNFs even though ultimately all the traffic transiting VNF-FG-2 is
between VNF-1 and VNF-3. The reason for this is that three separate and distinct
network functions are being performed. For example, it may be that some traffic
flows need to be subjected to a traffic policing or shaping function, which could
be performed by VNF-2C. So, some flows would be routed through VNF-2C, while
others would bypass this network function.
A second observation is that two of the VMs in VNF-FG-2 are hosted on the same
physical machine. Because these two VMs perform different functions, they need
to be distinct at the virtual resource level but can be supported by the same
physical machine. But this is not required, and a network management function
may at some point decide to migrate one of the VMs to another physical machine,
for reasons of performance. This movement is transparent at the virtual resource
level.
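The placement and migration points in this example can be illustrated with a small, hypothetical sketch: VNFs are mapped onto physical machines (PoPs), the logical links of VNF-FG-2 are defined over VNF names, and migrating a VM changes only the placement, not the logical topology. All names below are invented for illustration.

# Hypothetical sketch of the placement idea in the example above: VNFs are VMs
# mapped onto physical hosts (PoPs), and a management function can migrate a VM
# without changing the logical forwarding graph. All names are illustrative.

placement = {
    "VNF-2A": "PoP-1",
    "VNF-2B": "PoP-1",   # two VMs of VNF-FG-2 colocated on one physical machine
    "VNF-2C": "PoP-2",
}

# Logical links of VNF-FG-2 are expressed over VNF names, not physical hosts.
logical_links = [("VNF-2A", "VNF-2B"), ("VNF-2B", "VNF-2C")]


def migrate(vnf, new_pop):
    """Move a VM to another physical machine, e.g. for performance reasons."""
    placement[vnf] = new_pop


migrate("VNF-2B", "PoP-3")

# The logical topology is unchanged: migration is transparent at the virtual resource level.
print(logical_links)
print(placement)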
NFV Principles
The VNFs are the building blocks used to create end-to-end network services.
Three key NFV principles are involved in creating practical network services:
Service chaining: VNFs are modular, and each VNF provides limited
functionality on its own. For a given traffic flow within a given application, the
service provider steers the flow through multiple VNFs to achieve the desired
network functionality. This is referred to as service chaining (a minimal sketch follows this list).
Management and orchestration (MANO): This involves deploying and
managing the lifecycle of VNF instances. Examples include VNF instance creation,
VNF service chaining, monitoring, relocation, shutdown, and billing. MANO also
manages the NFV infrastructure elements.
Distributed architecture: A VNF may be made up of one or more VNF
components (VNFC), each of which implements a subset of the VNF’s
functionality. Each VNFC may be deployed in one or multiple instances. These
instances may be deployed on separate, distributed hosts to provide scalability
and redundancy.
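The following is the service-chaining sketch referred to above. It assumes three invented packet-processing functions (strip_vlan, rate_limit, nat) and shows a flow steered through an ordered list of limited-function VNFs to achieve the overall service; it is an illustration, not a prescribed implementation.

# A minimal, hypothetical illustration of service chaining: a flow is steered
# through an ordered list of VNFs, each a software function on the packet.
# The specific functions (strip_vlan, rate_limit, nat) are invented for this sketch.

def strip_vlan(packet):
    packet.pop("vlan", None)
    return packet

def rate_limit(packet):
    # A real policer would track state; here we simply mark the packet.
    packet["policed"] = True
    return packet

def nat(packet):
    packet["src"] = "203.0.113.10"   # rewrite private source to a public address
    return packet

# The service provider composes limited-function VNFs into one end-to-end service.
service_chain = [strip_vlan, rate_limit, nat]

def apply_chain(packet, chain):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:        # a VNF (e.g., a firewall) may drop the packet
            break
    return packet

result = apply_chain({"src": "10.0.0.5", "dst": "198.51.100.7", "vlan": 100}, service_chain)
print(result)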
High-Level NFV Framework
This framework supports the implementation of network functions as software-
only VNFs. We use this to provide an overview of the NFV architecture.
High-Level NFV Framework
The NFV framework consists of three domains of operation:
Virtualized network functions: The collection of VNFs, implemented in
software, that run over the NFVI.
NFV infrastructure (NFVI): The NFVI performs a virtualization function on the
three main categories of devices in the network service environment: compute
devices, storage devices, and network devices.
NFV management and orchestration: Encompasses the orchestration and
lifecycle management of physical/software resources that support the
infrastructure virtualization, and the lifecycle management of VNFs. NFV
management and orchestration focuses on all virtualization-specific management
tasks necessary in the NFV framework.
The ISG NFV Architectural Framework document specifies that in the
deployment, operation, management and orchestration of VNFs, two types of
relations between VNFs are supported:
VNF forwarding graph (VNF FG): Covers the case where network connectivity
between VNFs is specified, such as a chain of VNFs on the path to a web server
tier (for example, firewall, network address translator, load balancer).
VNF set: Covers the case where the connectivity between VNFs is not specified,
such as a web server pool.
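A small, hypothetical sketch contrasts the two relations: in a VNF forwarding graph the connectivity between VNFs is specified, whereas in a VNF set (for example, a web server pool) it is not, and any member may handle a request. The names used here are illustrative only.

# Hypothetical sketch contrasting the two relations between VNFs. A VNF forwarding
# graph fixes the connectivity (edges) between VNFs; a VNF set only groups them,
# leaving connectivity unspecified. Names are illustrative.
import random

# VNF forwarding graph: connectivity on the path to the web tier is specified.
vnf_fg = {
    "firewall": "nat",
    "nat": "load_balancer",
    "load_balancer": "web_tier",
}

def path_through_fg(start, graph):
    path, node = [start], start
    while node in graph:
        node = graph[node]
        path.append(node)
    return path

# VNF set: a web server pool with no specified interconnection.
web_server_pool = {"web-1", "web-2", "web-3"}

print(path_through_fg("firewall", vnf_fg))       # ordered, specified connectivity
print(random.choice(sorted(web_server_pool)))    # any pool member may be selected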
NFV Benefits and Requirements
Having considered an overview of NFV concepts, we can now summarize the key
benefits of NFV and the requirements for its successful implementation.
NFV Benefits
If NFV is implemented efficiently and effectively, it can provide a number of
benefits compared to traditional networking approaches. The following are the
most important potential benefits:
Reduced CapEx, by using commodity servers and switches, consolidating
equipment, exploiting economies of scale, and supporting pay-as-you-grow
models to eliminate wasteful overprovisioning.
Reduced OpEx, in terms of power consumption and space usage, by using
commodity servers and switches, consolidating equipment, and exploiting
economies of scale, as well as reduced network management and control expenses.
Reduced CapEx and OpEx are perhaps the main drivers for NFV.
The ability to innovate and roll out services quickly, reducing the time to deploy
new networking services to support changing business requirements, seize new
market opportunities, and improve return on investment of new services. Also
lowers the risks associated with rolling out new services, allowing providers to
easily trial and evolve services to determine what best meets the needs of
customers.
Ease of interoperability because of standardized and open interfaces.
Use of a single platform for different applications, users and tenants. This
allows network operators to share resources across services and across different
customer bases.
Improved agility and flexibility, by quickly scaling services up or down to
address changing demands.
Targeted service introduction based on geography or customer sets is possible,
and services can be rapidly scaled up or down as required.
Enablement of a wide variety of ecosystems and encouragement of openness. NFV opens the virtual
appliance market to pure software entrants, small players, and academia,
encouraging more innovation to bring new services and new revenue streams
quickly and at much lower risk.
NFV Requirements
To deliver these benefits, NFV must be designed and implemented to meet a
number of requirements and technical challenges, including the following
[ISGN12]:
Portability/interoperability: The capability to load and execute VNFs provided
by different vendors on a variety of standardized hardware platforms. The
challenge is to define a unified interface that clearly decouples the software
instances from the underlying hardware, as represented by VMs and their
hypervisors (a sketch of such a decoupling interface follows this list).
Performance trade-off: Because the NFV approach is based on industry
standard hardware (that is, avoiding any proprietary hardware such as
acceleration engines), a probable decrease in performance has to be taken into
account. The challenge is how to keep the performance degradation as small as
possible by using appropriate hypervisors and modern software technologies, so
that the effects on latency, throughput, and processing overhead are minimized.
Migration and coexistence with respect to legacy equipment: The NFV
architecture must support a migration path from today’s proprietary physical
network appliance-based solutions to more open standards-based virtual
network appliance solutions. In other words, NFV must work in a hybrid network
composed of classical physical network appliances and virtual network
appliances. Virtual appliances must therefore use existing northbound interfaces
(for management and control) and interwork with physical appliances
implementing the same functions.
Management and orchestration: A consistent management and orchestration
architecture is required. NFV presents an opportunity, through the flexibility
afforded by software network appliances operating in an open and standardized
infrastructure, to rapidly align management and orchestration northbound
interfaces to well defined standards and abstract specifications.
Automation: NFV will scale only if all of the functions can be automated.
Automation of processes is paramount to success.
Security and resilience: The security, resilience, and availability of operators’
networks must not be impaired when VNFs are introduced.
Network stability: Ensuring that the stability of the network is not impacted when
managing and orchestrating a large number of virtual appliances across
different hardware vendors and hypervisors. This is particularly important
when, for example, virtual functions are relocated, or during reconfiguration
events (for example, because of hardware or software failures) or a
cyber attack.
Simplicity: Ensuring that virtualized network platforms will be simpler to
operate than those that exist today. A significant focus for network operators is
simplification of the plethora of complex network platforms and support systems
that have evolved over decades of network technology evolution, while
maintaining continuity to support important revenue generating services.
Integration: Network operators need to be able to “mix and match” servers
from different vendors, hypervisors from different vendors, and virtual
appliances from different vendors without incurring significant integration costs
and avoiding lock-in. The ecosystem must offer integration services and
maintenance and third-party support; it must be possible to resolve integration
issues between several parties. The ecosystem will require mechanisms to
validate new NFV products.
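As noted in the portability/interoperability item above, the central challenge is a unified interface that decouples VNF software from the underlying hardware and hypervisor. The following is a minimal, illustrative sketch of that idea; the class and function names (VirtualizationPlatform, create_vm, KvmPlatform, XenPlatform, deploy_vnf) are invented and are not drawn from any ETSI specification.

# Illustrative only: an abstract platform interface with two interchangeable
# back ends, so VNF deployment logic is written once against the abstraction.
from abc import ABC, abstractmethod

class VirtualizationPlatform(ABC):
    @abstractmethod
    def create_vm(self, image: str, vcpus: int, memory_mb: int) -> str:
        """Create a VM for a VNF image and return an instance identifier."""

class KvmPlatform(VirtualizationPlatform):
    def create_vm(self, image, vcpus, memory_mb):
        return f"kvm-instance({image},{vcpus}vcpu,{memory_mb}MB)"

class XenPlatform(VirtualizationPlatform):
    def create_vm(self, image, vcpus, memory_mb):
        return f"xen-instance({image},{vcpus}vcpu,{memory_mb}MB)"

def deploy_vnf(platform: VirtualizationPlatform, image: str) -> str:
    # The VNF deployment logic does not depend on the underlying hypervisor.
    return platform.create_vm(image, vcpus=2, memory_mb=4096)

print(deploy_vnf(KvmPlatform(), "vendor-a-firewall.qcow2"))
print(deploy_vnf(XenPlatform(), "vendor-a-firewall.qcow2"))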
NFV Reference Architecture
FIGURE 7.8 NFV Reference Architectural Framework
NFV infrastructure (NFVI): Comprises the hardware and software resources
that create the environment in which VNFs are deployed. NFVI virtualizes
physical computing, storage, and networking and places them into resource
pools.
VNF/EMS: The collection of VNFs implemented in software to run on virtual
computing, storage, and networking resources, together with a collection of
element management systems (EMS) that manage the VNFs.
NFV management and orchestration (NFV-MANO): Framework for the
management and orchestration of all resources in the NFV environment. This
includes computing, networking, storage, and VM resources.
OSS/BSS: Operational and business support systems implemented by the VNF
service provider.
It is also useful to view the architecture as consisting of three layers. The NFVI
together with the virtualized infrastructure manager provide and manage the
virtual resource environment and its underlying physical resources. The VNF
layer provides the software implementation of network functions, together with
element management systems and one or more VNF managers. Finally, there is a
management, orchestration, and control layer consisting of OSS/BSS and the NFV
orchestrator.
NFV Management and Orchestration
The NFV management and orchestration facility includes the following functional
blocks:
NFV orchestrator: Responsible for installing and configuring new network
services (NS) and virtual network function (VNF) packages, NS lifecycle
management, global resource management, and validation and authorization of
NFVI resource requests.
VNF manager: Oversees lifecycle management of VNF instances.
Virtualized infrastructure manager: Controls and manages the interaction of
a VNF with computing, storage, and network resources under its authority, in
addition to their virtualization.
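The division of labor among these blocks can be illustrated with a highly simplified, hypothetical sketch: the orchestrator validates and authorizes an NFVI resource request, the VIM allocates the virtual resources, and the VNF manager handles the instance lifecycle. The classes and methods below are invented for illustration and do not come from the ETSI specifications.

# Highly simplified sketch of the three MANO blocks cooperating to instantiate a VNF.

class Vim:
    def allocate(self, vcpus, memory_mb):
        return {"vcpus": vcpus, "memory_mb": memory_mb}      # virtual resources

class VnfManager:
    def instantiate(self, vnf_package, resources):
        return {"vnf": vnf_package, "resources": resources, "state": "running"}

    def terminate(self, instance):
        instance["state"] = "terminated"

class NfvOrchestrator:
    def __init__(self, vim, vnfm, quota_vcpus=16):
        self.vim, self.vnfm, self.quota_vcpus = vim, vnfm, quota_vcpus

    def deploy_network_service(self, vnf_package, vcpus, memory_mb):
        if vcpus > self.quota_vcpus:                          # validate and authorize
            raise ValueError("NFVI resource request exceeds quota")
        resources = self.vim.allocate(vcpus, memory_mb)       # VIM allocates resources
        return self.vnfm.instantiate(vnf_package, resources)  # VNFM manages the instance

orchestrator = NfvOrchestrator(Vim(), VnfManager())
print(orchestrator.deploy_network_service("vLoadBalancer", vcpus=4, memory_mb=8192))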
Reference Points
The main (named) reference points and execution reference points are shown by
solid lines and are in the scope of NFV. These are potential targets for
standardization. The dashed line reference points are available in present
deployments but might need extensions for handling network function
virtualization. The dotted reference points are not a focus of NFV at present.
The main reference points include the following considerations:
Vi-Ha: Marks interfaces to the physical hardware. A well-defined interface
specification will make it easier for operators to share physical resources for different
purposes, reassign resources, evolve software and
hardware independently, and obtain software and hardware components from
different vendors.
Vn-Nf: These interfaces are APIs used by VNFs to execute on the virtual
infrastructure. Application developers, whether migrating existing network
functions or developing new VNFs, require a consistent interface that provides
functionality and the ability to specify performance, reliability, and scalability
requirements.
Nf-Vi: Marks interfaces between the NFVI and the virtualized infrastructure
manager (VIM). This interface can facilitate specification of the capabilities that
the NFVI provides for the VIM. The VIM must be able to manage all the NFVI
virtual resources, including allocation, monitoring of system utilization, and fault
management.
Or-Vnfm: This reference point is used for sending configuration information to
the VNF manager and collecting state information of the VNFs necessary for
network service lifecycle management.
Vi-Vnfm: Used for resource allocation requests by the VNF manager and the
exchange of resource configuration and state information.
Or-Vi: Used for resource allocation requests by the NFV orchestrator and the
exchange of resource configuration and state information.
Os-Ma: Used for interaction between the orchestrator and the OSS/BSS systems.
Ve-Vnfm: Used for requests for VNF lifecycle management and exchange of
configuration and state information.
Se-Ma: Interface between the orchestrator and a data set that provides
information regarding the VNF deployment template, VNF forwarding graph,
service-related information, and NFV infrastructure information models.
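As a compact summary of the list above (a study aid only, not normative content), the main reference points can be recorded as a mapping from name to the pair of architectural blocks each one connects.

# Study aid derived from the list above; not normative.
REFERENCE_POINTS = {
    "Vi-Ha":   ("virtualization layer", "physical hardware"),
    "Vn-Nf":   ("VNF", "NFVI"),
    "Nf-Vi":   ("NFVI", "virtualized infrastructure manager"),
    "Or-Vnfm": ("NFV orchestrator", "VNF manager"),
    "Vi-Vnfm": ("virtualized infrastructure manager", "VNF manager"),
    "Or-Vi":   ("NFV orchestrator", "virtualized infrastructure manager"),
    "Os-Ma":   ("OSS/BSS", "NFV orchestrator"),
    "Ve-Vnfm": ("VNF/EMS", "VNF manager"),
    "Se-Ma":   ("service, VNF, and infrastructure description data", "NFV orchestrator"),
}

def blocks_connected_by(name):
    return REFERENCE_POINTS[name]

print(blocks_connected_by("Nf-Vi"))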
Implementation
As with SDN, success for NFV requires standards at the appropriate interface
reference points and open source software for commonly used functions. For
several years, ISG NFV has been working on standards for the various interfaces and
components of NFV. In September 2014, the Linux Foundation announced the
Open Platform for NFV (OPNFV) project. OPNFV aims to be a carrier-grade,
integrated open source platform for introducing new products and services to the industry
more quickly. Its objectives are as follows:
Develop an integrated and tested open source platform that can be used to
investigate and demonstrate core NFV functionality.
Secure proactive participation of leading end users to validate that OPNFV
releases address participating operators’ needs.
Influence and contribute to the relevant open source projects that will be
adopted in the OPNFV reference platform.
Establish an open ecosystem for NFV solutions based on open standards and
open source software.
Promote OPNFV as the preferred open reference platform to avoid unnecessary
and costly duplication of effort.
OPNFV and ISG NFV are independent initiatives, but it is likely that they will work
closely together to ensure that OPNFV implementations remain within the
standardized environment defined by ISG NFV.
The initial scope of OPNFV is on building the NFVI and VIM, including
application programming interfaces (APIs) to other NFV elements, which
together form the basic infrastructure required for VNFs and MANO components.
With this platform as a common base, vendors can add value by developing VNF
software packages and associated VNF manager and orchestrator software.
NFV Implementation