vEPC Overview
STUDENT BOOK
LZT1381721 R3A
Virtual EPC Overview
DISCLAIMER
Ericsson shall have no liability for any error or damage of any kind
resulting from the use of this document.
© Ericsson AB 2017
1 Introduction
Objectives
Ericsson is industrializing Network Function Virtualization (NFV) for improved
deployment flexibility, built for the most demanding environments. Ericsson
virtual Evolved Packet Core provides tested and validated solutions addressing a
large number of vertical use-cases thereby opening up new operator
opportunities.
Operators are looking for ways to increase capacity and coverage while at the same
time ensuring that their business KPIs are met. They demand a better environment
for innovation, so that new services can be tried out and introduced faster at a lower
total cost of ownership. They want efficient networks and operations where services
can be deployed quickly and effortlessly, together with the tools to deploy and offer
new services.
The benefits of Ericsson virtual Evolved Packet Core include all the benefits of
NFV, but with the Ericsson differentiation – a complete end-to-end solution,
meaning virtualization of all Evolved Packet Core components. We support
flexible deployment of a whole range of virtual network services enabled by
vMME, vSGSN, vPGW, vSGW, vGGSN, vPCRF, vDPI, vProbe, vePDG and
vTWAG. We provide full feature compatibility with classical Evolved Packet
Core, maintaining our leadership with best in class compatibility with
surrounding systems from devices and RAN to charging systems and services.
Classical and virtual network nodes will coexist seamlessly in areas such as pool,
geo redundancy and load sharing.
For idle state UEs, the SGW terminates the downlink data path and triggers
paging via the MME when downlink data arrives for the UE. SGW terminates the
interface towards E-UTRAN and MME.
The PGW performs policy enforcement, packet filtering for each user and charging
support. The PGW acts as the anchor for mobility between 3GPP and non-3GPP
technologies like Wi-Fi, and terminates the SGi interface towards the PDN.
1.3 Virtualization
Operators have realized the potential of using new technologies in building and
running their networks. The primary purpose is to reduce costs, and also to
shorten product release cycles to enable quicker roll-out of new services. This has
led to the establishment of Network Functions Virtualization (NFV) and the
formation of the Network Functions Virtualization Industry Specification Group
(NFV ISG) in ETSI. NFV opens up for horizontal integration of Network Functions
in operators' networks. In particular it allows the use of so-called Commercial-off-
the-Shelf (COTS) hardware, where the same hardware can be used across a
multitude of network services. It also allows for both large-scale and small-scale
deployments of those services.
NFV allows a flexible and aligned way to manage many, or possibly all, services
deployed by an operator. Use cases range from simple management of initial
deployment, start, stop, scaling and moving of applications within their environment,
to adapting to capacity needs and other dynamic aspects of the network.
Certain common architectural guidelines are needed to make NFV possible. This
is what the ETSI NFV ISG group is specifying.
[Figure: Applications running on guest operating systems in virtual machines on top of a hypervisor on the host hardware]
In the figure above, the hypervisor is the software that creates and runs virtual
machines on the host hardware.
› Telecom Grade
› Real-time optimizations
› Distributed
› Cloud manager integration
› SDN & network services
› Open environment based on OpenStack
› Both Ericsson & 3rd party hardware
1.3.5 NFV
What is NFV?
› Operators like AT&T, BT, DT, DoCoMo, FT, Sprint and Verizon
[Figure: vEPC delivered as one or more VNFs (vSGSN-MME, vEPG, vSAPC, vWMG) running on CEE, managed by OSS-RC and ECM]
The ETSI NFV group has also proposed an NFV Management and Orchestration
(MANO) architecture that addresses the operational lifecycle needs of VNFs as they are
deployed in the cloud. That is the dotted box on the right in the figure. Achieving and
validating interoperability at critical reference points within the MANO-model and
towards the outside is the key focus for the ongoing ETSI work.
Many operators have the requirement to run their Telco applications in the cloud using
VMware.
Figure 1-10: ETSI NFV Reference Architecture mapped to VMware vCloud products – vEPC as one or more VNFs (vSGSN-MME, vEPG, vSAPC, vWMG) running on vSphere (ESXi) and managed by vCenter
vEPC has been validated on VMware in single VNF and vEPC Compact Deployments.
Default format is VMDK for all vEPC VNFs. vEPC has been deployed on VMware using
the OVF files generated by scripts provided by the VNF or by OVF files provided for
Compact Deployment. vEPC has been validated utilizing the following VMware
components (NFV role within parentheses):
• vCenter (VIM)
• vSphere (NFVI)
- Hypervisor ESXi
- Virtual Distributed Switch, VDS
2 vEPC Network
[Figure: SGSN-MME, EPG and SAPC node architectures – platform components such as NC, AP/DP, FS, SC, TP, LB, DB, CP, PP and NM on their respective platforms (TSP -> CBA, EBS, SSR), each exposing a northbound interface (NBI)]
› Full support for a mix of native (EBS/MkVII, EBS/MkX) and virtual (COTS) SGSN-MMEs towards surrounding systems (SGi, roaming, OSS, charging, BSS, CS, HSS/HLR, AAA)
› Stateful redundancy between native and virtual SGSN-MMEs
• Same feature set & logic for virtualized and non-virtualized EPC
› Same feature set & logic for virtualized and non-virtualized EPC
› No impact on surrounding network nodes
› 3GPP-compliant multivendor support
[Figure: Data center with vEPC network services (MBB, IoT, Enterprise, Communication) – vEPG and vSAPC coexisting with the existing MBB service, PCRF and GWs]
• Availability
• Performance
• NW separation
• Security and Integrity
• Cloud Agnostics
› Cloud virtualization
– No single point of failure
– Host supervision
– VM migration (not 15A)
– Affinity & anti-affinity
– Automatic networking
[Figure: VMs distributed over hosts in two resource clusters]
• EPC physical application level mechanisms for fast failover between boards/VMs are
kept also in cloud deployments
• Network level redundancy and load sharing can be utilized with a mix of physical and
virtual network functions
• Maintaining existing Stateful failover behavior also in a virtual environment, both intra-
VNF and inter-VNF
Network Level: Geo redundant network level features (like Geo-Redundant Pool) with
inter-site and inter cloud region redundancy; Failures and fast switchovers between sites
(ICR); Load balancing across Sites/cloud regions; Gateway blacklisting; Traffic Storm
Protection; Smart Signaling Throttling and UE signaling Control.
EPC VNF: Intra-VNF redundancy for VM failures; Application aware state replication with
1+1 and N+1 failover schemas; Software failures and fast switchovers within VNFs. VNF
Resilience and redundancy handling are independent of Hardware failure notifications from
the virtualization and infrastructure layers. Instead, the application components (VMs,
processes, etc.) are supervised by internal VNF logic and recovery actions are triggered
when needed.
Hardware: Server restart and failure detection; Fully redundant Hardware infrastructure; No
single point of failure for compute, networking (e.g. NIC/ ToR/ BGW) or storage; Failures
in Hardware and Software appear as lost VMs to VNFs.
Benefits: Virtually no service impact on end users, even in full outage scenarios within a data center.
• When the Serving MME goes down, another MME in the pool
can take over the UE by retrieving the UE context backup from
the Backup MME.
• Re-attach is not needed. The UE’s SIP sessions and other
services are still valid.
• The stored backup of the UE context can be used also at other
failure situations, e.g., S11 link failure and S1 link failure.
• End-2-end feature requiring support in GW
Also possible to divide a pool into "sub-pools" (i.e. a member of one sub-pool
will only replicate to members in the other "sub-pool").
› Minor impact on GW
– CPU (below 10%)
– Bandwidth (below 70 Mbit/s at 3.7 MSAU)
[Figure: 2G (Gb), 3G (Iu) and LTE (S1-MME, S1-U) accesses connected to the SGSN-MME and vEPG (Gn, S11), with Wi-Fi/CDMA over S2a; the GW synchronizes UE context to a backup GW' using a sync protocol over UDP; charging system and PCRF connected over IP]
[Figure: VM networking options – SR-IOV and DPDK drivers, vSwitch in the hypervisor, and standard vNICs; traffic separation using VLAN or GRE]
› One VLAN per vNIC
– Max 8 VLANs per VM
› Multiple VLANs per vNIC
– Requires non-standard additions to OpenStack
– Not supported in CEE 15A
› GRE over LAN
– Separation is done on application level
– Only GRE/IPv4
– A fair amount of configuration
› Multi-tenancy
[Figure: Multi-tenancy – RAN, EPG, SGSN-MME, MSS and HSS/HLR functions as VMs in separate tenants (CMS 1, CMS 2), each on its own infrastructure controller, hypervisor and hardware]
3 Virtual EPC
The driving forces behind vEPC are the market needs and the benefits of NFV.
NFV benefits:
• Speed to business and efficiency.
Ericsson has a complete Virtual EPC portfolio commercially available since Q4 2014. It
enables an unprecedented scalability and flexibility from small-scale local deployments
to large-scale datacenter deployments. This means that Virtual EPC can be deployed in
large centralized datacenters but also distributed close to the radio network.
Ericsson provides full feature compatibility with classical EPC. This means Ericsson
will maintain the Evolved Packet Core leadership with best in class compatibility with
surrounding systems from devices and RAN to charging systems and services. Classical
and virtual network nodes will coexist seamlessly in areas such as pool, geo redundancy
and load sharing.
› Global services organization
Building on the market leading EPC applications Ericsson provides smooth, seamless
migration paths with unique features across legacy and virtualized network nodes. The
goal is to support simplified operations and fast time to service for the network services.
NFV:
• Over-capacity required.
• Features need to be available and verified.
• Applications (VNFs) need to be available and verified.
• An optimal way of deploying multiple types of applications a multitude of times.
As a summary, vEPC:
4 Summary
Objectives
1 Introduction
[Figure: vEPC as one or more VNFs (vSGSN-MME, vEPG, vSAPC, vWMG) running on CEE, managed by OSS-RC and ECM]
CEE uses Mirantis OpenStack which implements the core cloud functionality. Employing
the OpenStack framework exposes an already established set of functionalities for these
basic services.
CEE contains the virtualization layer (hypervisors) and virtual switches. It also
provides needed hooks to integrate and orchestrate external physical appliances
such as physical switches, storage arrays, etc.
To manage CEE, the Ericsson Cloud Manager (ECM) provides high level
orchestration for application deployment, monitoring and management, as well as
for CEE infrastructure planning, fulfillment, assurance and charging. As an
alternative to ECM, the CEE dashboard can be used for low level management
and orchestration.
1.1.1.1 OpenStack
OpenStack is a free and open-source cloud computing software platform. The
technology consists of a series of interrelated projects that control pools of processing,
storage and networking resources throughout a data center, which users manage
through a web-based dashboard, command-line tools, or a RESTful API.
RESTful API: An API that adheres to the principles of REST does not require the
client to know anything about the structure of the API. Rather, the server needs to
provide whatever information the client needs to interact with the service.
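As an illustration of this RESTful interface, the sketch below lists servers through the Nova Compute API using only an authentication token; the controller host name is an assumption for the example and is not taken from this course material.

# Hedged example: "controller" is a placeholder host name
TOKEN=$(openstack token issue -f value -c id)
curl -s -H "X-Auth-Token: $TOKEN" http://controller:8774/v2.1/servers | python -m json.tool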
OpenStack is an open source project that implements part of the ecosystem that is
specified by the NFV ISG. Other parts that are needed for a full NFV
environment are included in the open source Linux OS.
When configuring the cloud ecosystem each physical host is given a role. An
important role is the Compute Node which handles the execution of VMs.
• Ephemeral storage
• Tenant isolation
• NTP
• DNS
1.1.1.2 Atlas
Ericsson Cloud Manager enables the creation, orchestration, activation, metering, and
monitoring of services running on virtualized resources, involving elastic IT and
programmable network resources, at consistent levels of quality and security. With
Ericsson Cloud Manager, cloud resources are no longer confined to a single data center,
but rather, are spread throughout the network to help improve both internal operations
and service quality. It manages the creation and instantiation of virtualized network
functions and automates and orchestrates provisioning of the virtual infrastructure.
Ericsson Cloud Manager can interface to any virtualized infrastructure managers (e.g.
hypervisors). It can also interface to physical equipment managers to address specific
scenarios that require direct control of hardware.
With Ericsson Cloud Manager, service providers can gradually turn their networks and
infrastructure into distributed clouds where workload allocation becomes more elastic
and dynamic, without compromising service quality. It lets service providers optimize
any type of resource, virtual and physical, in any combination, and manage them all in a
flexible and proactive way to adapt to new business models.
› Manages and orchestrates computing, storage, network and applications across data centers and tenants
› Handles service quality
› Dynamic, model-based service definition and provisioning
› Enforces end-to-end policies
[Figure: Ericsson Cloud Manager positioned between the operator (enterprises, service providers, VAs, SI & vertical apps), external business logic, network management and the Internet]
Cloud providers may use Ericsson Cloud Manager to centrally manage infrastructure
that potentially spans many physical data centers that may be geographically diverse.
1.1.3 OSS
OSS/BSS in the ETSI model is not implemented as a separate element in the vEPC
solution; instead, the physical OSS-RC is used. OSS-RC interfaces directly towards
the O&M interface of the VNFs and does not depend on any special cloud functionality.
The ETSI VIM and NFVI implemented by CEE together form the Infrastructure
as a Service (IaaS) service model.
vEPC VNFs use anti-affinity rules for redundancy and capacity reasons, unless
deployed in non-redundant models for distributed MBB and small enterprise
instances.
Another means of controlling how the VMs are distributed is host aggregation, which is
defined and used within OpenStack.
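As a minimal sketch of how an anti-affinity rule can be expressed with the standard OpenStack client (the group, image and flavor names are invented for illustration and are not part of a vEPC delivery), two VMs placed in the same anti-affinity server group are scheduled onto different compute hosts:

# Illustrative only - the names, image and flavor are assumptions
nova server-group-create fsb-anti-affinity anti-affinity
nova boot --image vsgsn-mme-fsb --flavor fsb.flavor --hint group=<server-group-uuid> fsb-1
nova boot --image vsgsn-mme-fsb --flavor fsb.flavor --hint group=<server-group-uuid> fsb-2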
The Virtual Appliance (see Host 2 in the figure below) is typically part of the
infrastructure (NFVI), performing central network or central resource tasks. The
Physical Application is typically the orchestration software.
The hypervisor controls the host processor and resources and allocates what is
needed to each VM operating system, making sure that the VMs cannot disrupt
each other.
Figure 2-7: A generalized logical view of a typical POD with VNF and the CEE
infrastructure
DMTF’s Open Virtualization Format (OVF) standard provides the industry with
a standard packaging format for software solutions based on virtual systems.
OVF has been adopted and published by the International Organization for
Standardization (ISO) as ISO 17203.
If there is no OVA container, the deliverables are typically file container(s) that
include a script for generating an OVA container.
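Since an OVA container is essentially a tar archive with the OVF descriptor as its first member, a generation script can be as simple as the sketch below; the file names are hypothetical:

# The OVF descriptor must come first in the archive
tar -cf vepg.ova vepg.ovf vepg.mf vepg-disk1.vmdk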
Affinity rules are specified in the OVF file. Related to affinity rules are the rules
that dictate how many vCPUs are allowed on one physical CPU or core.
Over-provisioning is said to occur if the number of vCPUs is larger than the
number of physical CPUs/cores on a certain host. This is not recommended for
performance critical subsystems.
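In an OpenStack based infrastructure this ratio is governed by the Nova CPU allocation ratio. A sketch of how over-provisioning could be switched off for performance critical compute hosts is shown below; the file path is the OpenStack default and the exact mechanism in CEE may differ:

# /etc/nova/nova.conf on the compute host (the OpenStack default ratio is 16.0)
[DEFAULT]
cpu_allocation_ratio = 1.0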
ECM, through OpenStack Glance, administers the images that form VMs.
Default format is qcow2 for all vEPC VNFs.
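A hedged example of how a qcow2 VNF image could be registered in Glance from the command line (the image name and file are illustrative only):

glance image-create --name vEPG-image --disk-format qcow2 --container-format bare --file vepg.qcow2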
The virtual switch is tightly coupled to the hypervisor, so you cannot change
hypervisor and keep the same virtual switch. Ericsson vEPC is vSwitch agnostic
from a functionality point of view, but performance is very dependent on
vSwitch capacity.
1.6 Storage
VM instances can use two kinds of storage space: ephemeral and persistent.
The data is stored in different locations depending on the storage type: Ephemeral
data is managed by the Compute Node itself. This means that the storage could
be local or remote, depending on the Compute Node's configuration. Persistent
data however, is normally acquired by requesting it from OpenStack Block
Storage.
Normally, an instance boots on the ephemeral storage, and the VM type may
have additional ephemeral storage assigned as well. What is extremely important
to remember with this type of storage is that it is tightly bound to the life of that
VM instance, which means that if you decommission the instance, the data
is lost.
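To illustrate the difference, persistent data can instead be placed on a Block Storage (Cinder) volume that survives the instance; the volume size, name and instance name below are only examples:

# Create a 10 GB persistent volume and attach it to a running instance
cinder create --display-name log-vol 10
nova volume-attach my-instance <volume-uuid> /dev/vdb
# The volume keeps its data when the instance is deleted and can be re-attached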
Life cycle control for all infrastructure services is separated from vEPC.
[Figure: VNFs, each consisting of VMs (VM-1 ... VM-n), running on the cloud infrastructure (VIM, NFVI-SW hypervisors), managed by cloud management (NFVO) and domain management (NMS)]
The main node functions are Node Management, Control Plane, User Plane and
Load Balancer.
[Figure: VNF with NM, CP, LB and UP VMs on the cloud infrastructure (VIM, NFVI) – hypervisors (NFV-SW) on compute hosts (NFV-HW), connected via switches and border routers to external IP networks]
The VMs execute on CEE and are connected to OSS and ECM. Each VNF has its
own routing, load balancer, local O&M – which is identical to the physical O&M
of the NF, and storage.
The VNFs are transparent to the interconnection layer of the physical blades (the
VNFs are agnostic of the IaaS, Infrastructure as a Service).
All vEPC VNFs are reachable via virtual IP addresses in the same manner as their
corresponding PNFs.
IP networks are separated in the VNF by different methods depending on the VNF:
- vSAPC and vSGSN-MME use separate vNICs attached to different VLANs in the
infrastructure. There is a maximum of 8 vNICs per VM. The VNFs send untagged
traffic and the vSwitch tags the packet with the VLAN that has been allocated by
Neutron.
- vEPG and vWMG separate external traffic by using VLAN trunk vNICs, GRE
tunnels or MPLS.
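As a minimal sketch (the network name, physical network label, VLAN ID and subnet are assumptions, not values from a vEPC configuration), a VLAN backed Neutron network for one of these vNICs could be created as follows; DHCP is disabled since cloud assigned addressing is not used by the VNFs:

neutron net-create sig_int --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 101
neutron subnet-create sig_int 10.10.1.0/24 --name sig_int_subnet --disable-dhcp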
› Typically 2 VNF internal networks and 1 external
[Figure: Virtual EPC – NM, LB and CP/UP/GP VMs, each with its own OS and vNIC, connected through vSwitches, hypervisors and the infrastructure switch; Virtual Combined EPC – NM, LB/CP, UP and GP VMs connected in the same way towards the site router and PDN]
• There is only a single vNIC available per LB VM. The VNFs tag packets
with different tags for different external networks.
• CEE cloud systems include VLAN trunking capabilities. The CEE
infrastructure maps the VNF tagged VLANs to the correct Neutron
virtual network
• In a VMware cloud system, Guest VLAN-tagging is used and this
has to be configured for the port in VDS. The VNF tags are then
switched unmodified in the infrastructure.
• Other separation methods that can be used are GRE/IPv4 tunnels and MPLS.
Cloud assigned IP address configuration (e.g. DHCP) is not supported. All IP addresses
are statically configured, either by the VNFs themselves or by being present in the OVF
file.
For the EPC-in-a-box deployment the flow is different. From an external source,
only a VIP is known and used to address the VNF. The VIP is not configured on
any particular network interface, but is the IP address used to reach the
application service available on several application VMs. The site router has IP
routes for the VIP addresses where the available LB VMs are next hops. ECMP is
used for load sharing over the available LB VMs. The packet is forwarded on L2
level via the ToR switch, the compute host, the vSwitch, and the vNIC of the LB
VM. The LB VM steers incoming traffic towards one of the application VMs on
which the VIP is available. Steering is done on application protocol identities,
e.g. the GTP TEID. VLANs for external communication configured in a cloud region
must also be configured in the site router(s).
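The site router configuration is vendor specific, but as an illustration of the principle, an equivalent Linux style ECMP route with the two LB VMs as equal-cost next hops could look like the sketch below; the VIP and next-hop addresses are invented for the example:

# One route for the VIP with two equal-cost next hops (the LB VMs)
ip route add 192.0.2.10/32 nexthop via 10.0.0.11 dev eth1 nexthop via 10.0.0.12 dev eth1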
› LB part of the CP/GP VM in vSGSN-MME
[Figure: Traffic flow – Virtual EPC with NM, LB and CP/UP/GP VMs, and Virtual Combined EPC with NM, LB/CP, UP and GP VMs; the site router holds routes for the VIPs with the LB VMs as next hops]
The LB VM receiving an external ingress packet will have to send the packet out
on the internal network to a specific CP/UP/GP VM. The LB function knows
how to do an even distribution among all suitable VMs on all hosts. Packets
sourced from CP/UP/GP VMs also pass LB VMs when egressing the cloud
region.
For vSAPC and SGSN-MME where the LB is on the same VM as the GP, the
traffic flow can be more efficient. There will be an increased chance that traffic
ends up on the VM with the user session. This will reduce the traffic over the
vSwitch.
› Traffic is separated over different interfaces and networks
› vSAPC and vSGSN-MME
̶ Use separate vNICs attached to different VLANs in the infrastructure.
̶ There is a maximum of 8 vNICs per VM.
̶ The VNFs send untagged traffic and the vSwitch tags the packet with the VLAN that has been allocated by Neutron.
› vEPG and vWMG
̶ Separate external traffic by using VLAN trunk vNICs (multiple VLANs per vNIC).
̶ There is only a single vNIC available per LB VM. The VNFs tag packets with different tags for different external networks.
̶ Other separation methods that could be used:
› GRE/IPv4 tunnels
› MPLS/BGP
[Figure: vSAPC/vMME VM with multiple vNICs on separate VLANs vs. vEPG/vWMG VM with a single trunk vNIC carrying multiple VLANs towards the BGW]
EPC VNF: Intra VNF redundancy for VM failures. Application aware state
replication with 1+1 and n+1 failover schemas. Software failures and fast
switchovers within VNFs. VNF resilience and redundancy handling are
independent of Hardware failure notifications from the virtualization and
infrastructure layers. Instead, the application components (VMs, processes, etc.)
are supervised by internal VNF logic and recovery actions are triggered when
needed.
2.1.7 O&M
Each VNF instance is managed separately as a Managed Element of its own.
2.1.9 Storage
The VNFs in the vEPC release only use server-local storage, both ephemeral
and persistent, and no central storage such as an external SAN storage array.
Software images and scripts are retrieved from the Software gateway. The scripts
are used to generate an OVF descriptor. The reason for using a script is to enable
adapting the OVF to the particular resource needs in a deployment. Parameters
required for initial O&M connectivity are subsequently specified in a personality
file.
vSAPC has a slightly different procedure: scripts are provided to 1) create
the OVA package for initial installation, including the OVF descriptor file and
personality file for vSAPC customization during deployment, and 2) create
the OVF descriptor files for scaling the Control Plane (a maximum of 6
CPs in the cluster).
The Software packages for the different VNFs retrieved from Software Gateway
contain scripts that generate a particular OVF for scaling of the VNF. The scripts
are configured and executed in an environment with IP connection to ECM. The
exact procedure differs slightly for the different VNFs.
2.1.10.3 Update/Upgrade
Upgrade of the vEPC VNFs is done in the same way as on physical deployment.
2.1.11 Deliverables
Typical deliverables for each VNF are:
• OVA container, includes both the software image and the OVF
file or scripts that can generate OVF files
• Scripts for generating an OVA container. Can be part of the
OVA container
3 vEPC Deployments
Installing:
Ericsson – Run the deployment tool, enable flexibility for customization, pre-configure the environment, generate the OVA and enable the CEE/CIC services; perform the configuration on CIC and vRP through SSH.
Customer – The flavor definition and tenant operation on ECM/CEE requires the assistance from the administrator.
› Box
– Virtualization layer used to enable multiple applications in a single server (box)
– No scalability
– No HW redundancy
– Deployment on-premise (typical)
› Cloud
– Group of HW resources (servers, storage and switches) managed (orchestrated) as one pool
– Deployment in large data center (typical)
– Applications typically deployed where there is room
– Applications can be scaled
– HW redundancy typically achieved with anti-affinity rules
Variant: Single VNF single server
- Single server dedicated to a single VNF
- No HW redundancy
- Two VNFs required to accomplish HW redundancy
- Typically not scalable
Variant: Single VNF multiple servers
- Multiple servers dedicated to a single VNF
- HW redundant
- Typically scalable
- Efficient HW use when the VNF requires whole servers for capacity reasons
Variant: Multiple VNFs multiple servers
- Multiple VNFs sharing HW
- HW redundant
- Typically scalable
- Efficient HW use when each VNF only requires a small part of each server
Horizontal scalability (number of VMs): currently, vertical scaling is done at deployment time, while horizontal scaling is typically done by creating new VMs on new servers.
› Scalable deployments
̶ Targeting data centers
̶ HW redundant
› Non-scalable deployments
̶ No HW redundancy
› EPC-in-a-box
̶ Targeting distributed (on-premise) deployments
› VNF deployments: EPC-in-a-box (single server), single-server per VNF (single server plus servers for NFV-I), vEPC Compact Deployment (single server plus servers for NFV-I) and multiple servers per VNF (multiple servers plus servers for NFV-I); see 3/22109 – HSD 101 100/9 and 4/22109 – HSD 101 100/9.
In the VNF single server deployment each VNF is deployed on a single server. In
this deployment the CEE is limited to use one CIC server, single traffic switch
and single control switch meaning that in this type of deployment there is no
redundancy. VNF single server deployment is usually used when a minimal
footprint is required.
– Data Center
› ECM or Atlas for orchestration
› Separate servers for cloud infrastructure are needed
The host aggregates function in CEE is used to reserve compute servers for vEPG,
to avoid EPG user plane data traffic starving out bandwidth resources for
other VNFs.
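A sketch of how such a host aggregate could be defined with standard OpenStack commands (the aggregate, host and flavor names are illustrative, and the AggregateInstanceExtraSpecsFilter scheduler filter must be enabled):

nova aggregate-create vepg-aggr
nova aggregate-add-host vepg-aggr compute-0-5
nova aggregate-set-metadata vepg-aggr vepg=true
# Tie the vEPG flavor to hosts carrying the vepg=true metadata
nova flavor-key vepg.up set aggregate_instance_extra_specs:vepg=true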
Figure 2-26: How to Define Bill of Material for vEPC Single Server per VNF
Deployment
The number of servers per CEE, vEPC VNF and ECM needs to be calculated to
get the total number of servers needed for a scalable deployment.
The total is x+a+b+c servers, plus an extra 2 servers if an ECM instance is also
to be deployed (refer to figure 1-25).
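As a purely illustrative worked example, assuming x, a, b and c denote the server counts for CEE and the three VNFs respectively (the numbers are invented, not taken from the dimensioning guide): with x = 3, a = 2, b = 3 and c = 2, the total is 3 + 2 + 3 + 2 = 10 servers, or 12 servers if an ECM instance is also deployed.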
The components of the virtual EPG are deployed as Virtual Machines (VMs) in a
cloud environment. A VM can take the role of a virtual Route-Processor (vRP),
Virtual Forwarder (vFRWD), or a Virtual Service (vSRVC). Figure 2-27 shows
an overview of the EPG in a cloud environment.
[Figure 2-27: Overview of the EPG in a cloud environment – NM, UP, LB and CP VMs on the cloud infrastructure (VIM, NFVI) with hypervisors (NFV-SW), switches and routers]
› No overcommit of vCPUs
› Up to 1.5 Gbps
› Up to 100 eNodeBs
› Up to 1 RNC
› Validated on CEE
› Possible to deploy in a Cloud (e.g. a CEE region)*
*An alternative is to use EPC-in-a-box and just deploy EPG. The EPC-in-a-box has higher capacity due to usage of CPU pinning. Note that EPC-in-a-box cannot be deployed in a Cloud.
VM Type | # VMs | Total Mem (GB) | Total Disk (GB) | Total vCPUs
NM      | 2     | 16             | 80              | 4
CP      | 3     | 48             | 120             | 18
UP      | 2     | 32             | 80              | 12
LB      | 1     | 8              | 40              | 6
Total   | 8     | 104            | 320             | 40
3.3.4 Storage
3.3.5 Resilience
For the scalable deployment, a physical host failure will lead to loss of three
VMs, which vEPG treats as a multiple failure. The consequence is that the PGW
part of the vEPG restarts and all PGW and combined PGW and SGW sessions
are lost. SGW sessions survive.
Note that all application resilience functions in the vEPG still apply.
3.4.1 Architecture
[Figure: vSGSN-MME in a cloud environment – NM, FS, LB, CP (SS7/SCTP) and UP VMs on the cloud infrastructure (VIM, NFVI) with hypervisors (NFV-SW) on x86 compute hardware (NFV-HW)]
The virtual SGSN-MME is based on the same architecture as the physical SGSN-
MME, including middleware and application SW. This provides a common
feature set for both physical and virtual SGSN-MME. The common application
Software also assures interworking between virtual SGSN-MME, physical
SGSN-MME, and related peer network elements.
The Software components are distributed and executed on VMs. VMs in the
virtual SGSN-MME correspond to PIUs in the physical SGSN-MME.
VMs can have one of the following roles:
• NC
• FS (block storage)
• vLC
• AP/DP (up to 32 AP/DP/SCTP instances)
The VMs are connected over a redundant L2 network.
Multi-host virtual SGSN-MME with Standalone vLC: This deployment targets scalability and high capacity, with full Software and Hardware redundancy. The vLC role is deployed as vLC VMs separate from the GPB VMs.
Multi-host virtual SGSN-MME with Integrated vLC: This deployment targets increased scalability and capacity for payload, with full Software and Hardware redundancy. The vLC role is integrated in the GPB VMs.
› Compact Deployment
– Sharing servers with EPG and SAPC
› EPC-in-a-box
› One VM per server
[Figure: CP/UP and LB VMs distributed with one VM per server]
3.4.4 Storage
All virtual SGSN-MME storage is on the FSB VM root disk, that is, the VM disk
image or the OpenStack Cinder volume that is used to boot the VM. The storage
includes Software data, configuration data, SGSN charging data, and log files.
All non-FSB VMs use only the local VM disk image for booting. Non-FSB VMs
do not write to the local disk; they write to the shared file system provided by the
primary FSB VM.
The virtual SGSN-MME has two FSB VMs acting as file servers; one primary
and one secondary. The primary FSB VM mirrors its disk to the secondary FSB
VM. This means that the virtual SGSN-MME provides storage redundancy when
deployed with local storage. For centralized storage, the cloud system typically
provides back-end storage redundancy. In this case, the FSB redundancy provides
SGSN-MME internal NFS service redundancy.
The virtual SGSN-MME storage capacity is fixed. It is not possible to expand the
delivered storage capacity for the virtual SGSN-MME, by resizing the FSB VM
disk or by adding additional disks. For example, it is not possible to attach an
OpenStack Cinder block storage volume to the FSB VMs.
› vSGSN-MME includes application level fault tolerant storage
› Charging data and other logs are stored redundantly between FS VMs using DRBD and are always available, even if one VM fails
› VM storage provided through local disk storage on each server
› Ephemeral storage
3.4.5 Resilience
The SAPC is positioned as the advanced solution for the following business
drivers:
Multi-access. The SAPC 1 release provides policy control for both Fixed
accesses (like PPP or IPoE) and Mobile accesses (like GSM, UTRAN and E-
UTRAN).
Convergence. Users may enter the Service Network through different types
of accesses, and policy control can manage such users consistently across those
accesses. With that target, the SAPC 1 provides an FMC policy control solution.
Easy Integration. The SAPC can be easily integrated with any gateway
supporting the standard 3GPP Gx reference point. The SAPC is deployed and
verified in Ericsson end-to-end solutions that require policy control,
interworking with the Ericsson Evolved Packet Gateway (EPG) and the
Ericsson Wi-Fi Mobility Gateway (WMG). In addition, the SAPC can be north-
bound integrated with Operation and Maintenance systems through standard
interfaces like Simple Network Management Protocol (SNMP), Network
Configuration Protocol (NETCONF), or Simple Object Access Protocol (SOAP).
[Figure: vSAPC management interfaces – Cloud Management (NFVO) and Domain Management (NMS) for CM, PM and FM]
[Figure: vSAPC architecture – VR (O&M), VR (Traffic), NM and GP VMs on the cloud infrastructure (VIM, NFVI), with hypervisor (NFV-SW), compute (NFV-HW), switches and routers towards external IP networks]
› Compact Deployment
– Sharing servers with vSGSN-MME and vEPG
› Two server deployment
̶ Non-redundant HW
̶ Table shows total resources
[Figure: vSAPC two-server deployment – VR (O&M), VR (Traffic), NM, FS, CP and LB VMs on each server (hypervisor, x86 HW)]
3.6 vWMG
vWMG is a common mobile network access solution
• for SIM & non-SIM devices
• Supporting trusted and untrusted Wi-Fi networks
vWMG is feature compatible with most of the features of the classical WMG.
[Figure: Untrusted Wi-Fi access – the Wi-Fi AP reaches the WMG over L2/L3 connectivity (public Internet) via SWu; the WMG connects to the PDN-GW over S2b and to AAA/HSS over SWm/SWx/S6b; SAPC (Gx), OCS/CCN (Gy), charging (Gz), DNS, provisioning and IMS (SGi) surround the 3GPP macro/micro/pico network]
[Figure: Trusted Wi-Fi access – a 3PP Wi-Fi AC connects to the WMG using EoGRE/EoIP and RADIUS/STa; the WMG connects to the PDN-GW over S2a, with Serving-GW (S5), SAPC (Gx), OCS/CCN (Gy), charging (Gz), AAA, provisioning and the service network (SGi/Gi)]
[Figure: vWMG deployment options – from one to n servers (1RU COTS or EBS), from small to large data centers; NM, CP, UP and LB VMs on the cloud infrastructure (VIM, NFVI) with hypervisors (NFV-SW), x86 compute (NFV-HW), switches and routers towards external IP networks]
4.1 Fuel
Fuel is the CEE Installation Server. Fuel is an open source deployment and
management tool for OpenStack which does the automation and management for
you. Fuel, in general, is not part of the cloud infrastructure; it can be run on a
separate server like any physical application.
Fuel makes it possible for OpenStack to control all the VMs. The fuel command
below displays the resources available in the vEPC, such as the so-called CIC
(CEE Installation Controller) and other resources called Compute.
[root@vepc02-fuel ~] # fuel node
id | status | name | cluster | ip | mac | roles | pending roles | online
---|----------|------------------|---------|---------------|-------------------|---------------------------|---------------|-------
4 | ready | compute-0-5 | 1 | 192.168.0.23 | b8:ca:3a:6f:5d:34 | compute | | True
6 | discover | Untitled (83:cc) | None | 192.168.0.129 | b8:ca:3a:6d:83:cc | | | False
2 | ready | compute-0-4 | 1 | 192.168.0.21 | b8:ca:3a:6f:61:84 | compute | | True
1 | ready | compute-0-2 | 1 | 192.168.0.20 | b8:ca:3a:6f:66:14 | compute | | True
3 | ready | cic-0-1 |1 | 192.168.0.22 | b8:ca:3a:6f:66:88 | cinder, controller, mongo | | True
5 | ready | compute-0-3 | 1 | 192.168.0.24 | b8:ca:3a:6f:5d:d8 | compute | | True
4.2 CEE
The CEE provides the virtualization control and a management layer based on
OpenStack. OpenStack is a virtualization management system that controls pools
of compute, storage, and networking resources throughout a Data Center.
When configuring the cloud ecosystem each physical host is given a role. An
important role is the Compute Node which handles the execution of VMs.
Let's assume that ssh root@10.20.0.3 provides access to CEE (CIC) (see the figure
above). On the CIC, the path /var/log includes information about the available
resources in the CIC:
root@node-1:~# cd /var/log
root@node-1:/var/log# ls
…glance neutron nova horizon….
Inside the CIC there is access to glance commands, neutron commands and nova
commands. Each of those sets of commands accomplishes specific tasks.
Glance: Glance is the OpenStack service for image storage. ECM, through
OpenStack Glance, administers the images that form VMs.
The command nova flavor-list displays the flavor list, meaning the hardware
specification for each type of VM. The same definitions are found in ECM.
The command nova host-list displays the available hardware resources, such as
Compute.
The command nova-manage vm list displays the VMs and which host each one is
running on.
The command nova show VM-14570 shows the details, including the virtual networks, of a specific VM.
The command nova host-describe node-2 checks the compute host status (CPU,
memory and disk usage).
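Put together, a short inspection session on the CIC could look like the sketch below; the VM and host names are the same examples as above and carry no special meaning:

nova flavor-list                 # flavor (vCPU/memory/disk) definitions, the same as in ECM
nova host-list                   # available compute hosts
nova-manage vm list              # all VMs and the host each one runs on
nova show VM-14570               # details, including virtual networks, of a specific VM
nova host-describe node-2        # CPU, memory and disk usage of a compute host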
5 Summary
3 vEPC Implementation
Objectives
› Describe Ericsson's vEPC for the Internet of Things (IoT), meaning the
solution for IoT services, which consists of the vEPG, vSAPC and
vSGSN-MME VNFs
1 Introduction
Ericsson virtual Evolved Packet Core provides tested and validated solutions
addressing a large number of vertical use-cases. Key initial virtual network
services include:
• Internet of Things
• Distributed Mobile Broadband
• Enterprise
• Communication
• Mobile Broadband
The benefits of Ericsson's virtual EPC include all the benefits of NFV, but with
the Ericsson differentiation – a complete end-to-end solution, meaning virtualization
of all EPC components with full feature compatibility with classical EPC.
• EPC-in-a-box
– No Hardware redundancy
– Not scalable (i.e. capacity is limited to what the single server can
handle).
• Cloud
In this type of deployment, the vEPC VNFs are deployed in a Cloud (e.g. CEE,
Openstack, VMware, etc.). Cloud can be defined as a pool of resources (compute,
networking, storage, services, etc.) managed as one entity (aka datacenter) and
where resources can be allocated on-demand for a VNF. In general Cloud
deployments are possible to scale.
[Figure: vEPC deployment variants – Compact (MBB, Communication, IoT) and EPC-in-a-box (Enterprise, DMBB); the EPC-in-a-box host runs SGSN-MME, SAPC, EPG and EVR with internal networks (RAN_int, Signalling_int, Media_int, O&M_int), routing contexts, a trunk vNIC, and external networks (RAN_ext, Media_ext, Signalling_ext, O&M_ext, toExt_open, LCT) towards the backhaul and the eNodeBs]
• Cloud Deployment
Single VNF
In this deployment the single VNF has dedicated servers, i.e. servers in a cloud
are dedicated to a single VNF. The two variants of Single VNF are single server
and multiple servers (see the table above).
Both the multiple and single server deployments can be scalable. But for some
VNFs the single server deployment is not scalable and some VNFs do not have
any single server deployment.
NOTE: There is no vEPC single server Cloud deployment with multiple VNFs.
Cloud deployments, where VNFs are sharing servers with other non vEPC
applications in the cloud, are not validated. If sharing servers with other
applications, special care must be taken with regards to allocation of server
resources (CPUs, memory, storage IO bandwidth and network IO bandwidth).
“The Internet of Things is the network of physical objects that contain embedded
technology to communicate and sense or interact with their internal states or the
external environment.” M2M (Machine to Machine) was an earlier term used.
Today M2M is seen as providing the connectivity for devices to enable smart
services. M2M and the smart services together form IoT.
So far, the IoT has developed in two main ways. First, miniaturization, cloud
solutions, faster processing speeds, and use of data analytics have allowed
companies to benefit from real-time data collected from the physical
environment. Second, decreasing component costs and cheaper data collection
methods have altered the cost-benefit model, making IoT solutions feasible for
more actors. Together, these drivers lay the foundation for continued
development of new products and services.
VNS IoT provides operators with support for IoT traffic traversing through the
3GPP Packet Core network.
From ARPU >$50 per month to ARPU <$5 per month.
[Figure: device volumes per segment, ranging from 9 M to 71 M]
[Figure: Ericsson IoT offering – Core Networks (Virtual EPC/EPC, Unified Data Management, OSS-RC/ENM) and Radio Networks (Massive IoT RAN: NB-IoT, Cat-M1, EC-GSM-IoT), surrounded by services such as Customer Support, Device & App Verification, Consulting & SI and Managed Services]
1.2.6 Benefits
› Improved cash flow
› Increased sales hit rate increases revenues
TCO:
• Automated, cost efficient and scalable deployment for low bandwidth stationary devices.
• Optional Value Packages to support more advanced IoT needs.
The DMBB VNS covers a number of aspects of local access, in a small scale
system. Typically, the operator's distributed network is connected to the
operator's central network via a poor (i.e. limited bandwidth and/or high latency)
backhaul, e.g. satellite or E1 links. The operator's distributed network may in its
turn be connected to one or several different types of networks local to the
distributed site.
In DMBB, the vEPG in the EPC architecture is always a combined SGW and
PGW. The vSGSN-MME is optional but recommended to distribute (see chapter 3.3).
The other VNFs are optional depending on the implemented use case. ECM is
also optional and Atlas (part of CEE) is the default orchestrator.
1.3.2 Packaging
The same value packages apply for the vEPC VNFs as for EPC PNFs (Physical
Network Functions). Next section lists the value packages, and specific functions
within the packages, that are required to realize the described use cases. In
addition to the use case specific value packages, the DMBB site will in most
cases have the same level of functionality as the central site.
Figure 3-11: vEPC Value Packages (vEPG, vSGSN-MME, vSAPC)
The Software for DMBB is licensed per VNF, and each license entitles use in one
virtual instance of that VNF.
vThunder does not have value packages. All functions are included and the
licensing is based on throughput.
1.3.3.1 SGSN-MME
• Base Package
• VP Signaling Optimization
• VP Network Efficiency
EPG
• Base Package
• VP Network Efficiency
SAPC
• Base Package
1.3.4 Deployment
The figure below describes the high level deployment validated in 16Q4. The Traffic
and Control do not necessarily have to connect to the same BGW.
[Figure: 16Q4 validated DMBB deployment – EPC-in-a-box on an HDS CRU server and vThunder on a Dell 630T server, with 10G NICs for traffic and control towards the BGW and 1G NICs for vFuel (EPC-in-a-box and vThunder)]
The vEPC EPC-in-a-box Solution Description has the specifics of how EPC-in-a-box
is deployed on a single server CEE and the requirements around that. It also
has more information on how vFuel and Atlas can be used, which is valid for both
EPC-in-a-box and vThunder.
It is optional if the servers are connected directly to the BGW or via a switch.
vFuel, needed for CEE installation and recovery, is installed on a low-end device,
e.g. a laptop, and does not have to be a permanent part of the installation at the
DMBB site.
After the CEE installation for EPC-in-a-box, vFuel can be removed. Before
removing vFuel, some configuration of the box and a backup of vFuel must be done.
This use case enhances the user experience by using local network peering. This
is possible thanks to distributing the vEPG. In this case the mobile operator can
offload traffic from the backhaul by routing traffic directly to locally available
internet access or peering partners. The distributed vEPG can provide the same
services to the user as the centrally located PGW/GGSN.
The vEPG does not need any application specific configuration to achieve this
use case. It is network configuration in the vEPG and the mobile backbone
routers at the distributed site that need configuring. Scenarios for this use case
range from a remote rural site covered by an operator to company locations such
as oil rigs requesting operators to enhance their mobile broadband for local
services.
DMBB’s small footprint offers an attractive MBB introduction for small local
operators requiring MBB. The use case is similar to the LTE Greenfield use case
in VNS MBB but DMBB using EPC-in-a-box provides a price model better
adapted for MBB deployments with a smaller number of subscribers.
The EPC-in-a-box deployment offers a full EPC solution for GSM, WCDMA and
LTE access and 3GPP connectivity to external functions such as HSS/HLR and
voice services.
[Figure: DMBB deployment – the distributed site (vEPG, vThunder, local network, W/L access) connects over the backhaul to the central network with vSGSN-MME, vSAPC, HSS, OSS/BSS, charging system and Internet; signaling and payload paths shown for LTE/3G]
1.3.6 Benefits
• Distributed vSGSN-MME
› Lower signaling and payload (GSM) latency
› Increased resilience against backhaul failures
– vSGSN-MME configuration can further reduce HLR/HSS signaling
› Less configuration in:
– Central SGSN-MME
– Central DNS with distributed DNS
› Re-usable configuration between sites
1) Workforce Mobility is the need to be able to work anywhere and anytime. This
means being able to work from a range of devices and access information needed
to do one’s job. A mobile workforce can be mobile within their own premises, in
the office or on the campus, as well as when travelling (to and from the office and
as part of their role).
The 16Q2 scope of VNS Enterprise covers the possibility to outsource the mobile
data network to operators and enable mobility for enterprise employees. Efficient
subscriber handling can be done using operator provided business interfaces
towards vSAPC.
1.4.1 Packaging
The VNS Enterprise is built up by one Base package and five VPs. The VPs can
be added to the Enterprise Base package independent of each other.
For 16Q2 Enterprise Base, Tailored Broadband, HA & Pooling, Reporting &
Analytics, Convergence and Enterprise Lawful Intercept are available. The other
VPs are planned for future quarters.
The Enterprise Base Package is mandatory and contains the starting point for the
solution focusing on providing an easy and fast deployment of the VNFs to offer
mobile broadband access.
The benefit for the Enterprises is to reduce the time needed to introduce new
services and to flexibly assign them to their employees according to their own
changing enterprise business policies. The Package Enterprise Base covers
mobile access with some basic Policy and Charging control functionality as well
as basic Reporting capabilities. It provides differentiated QoS and IP access
control per user/employee or groups of users/employees for both Enterprise
services and the internet.
The benefit for the operator is the ability to provide solutions to the Enterprise
market segment in a cost efficient way. More advanced solutions can be offered
with the VPs.
The VNFs may be deployed in the operator’s premises (Data center deployment)
or in the Enterprise premises (distributed deployment).
• Auto provisioning
1.4.1.3 VP Convergence
For the scope of 16Q2, the VP Convergence adds Wi-Fi and CDMA access. The
connectivity for this access can be from the vEPG to the vWMG in an Ericsson
Network Integrated Wi-Fi solution or to the respective physical Wi-Fi and
CDMA nodes.
VP HA & Pooling is added when the customer has High Availability (HA)
requirements. With the current function set, the virtual core network availability
is increased, but in some failure cases the devices will have to reconnect.
Reporting & Analytics mainly refers to EBM (Event Based Monitoring) EDR,
Top URI and vSAPC SOAP notifications. The VP Reporting & Analytics is
applicable to any VNF included from another package.
This Value Package applies for the vEPG and the vSGSN-MME.
SACC is a reference business solution where vEPG and vSAPC are central parts.
It can be found in both the product catalog and in the internal CPI. The solution
contains a collection of supporting material including presentations, solution
guidelines, technical descriptions and configuration guidelines.
Many of the topics and use cases for VNS Enterprise are covered by the SACC
solution and it is recommended to be familiar with it; both for getting ideas for
implementation of agreed use cases and new ones for sales.
The Enterprise subscribers will use the existing macro radio network. The
existing PNFs SGSN-MME and SGW are used for handling the mobility and
payload for the different radio accesses. Any access type can be supported and
may include indoor radio at the Enterprise site to improve coverage.
Depending on requirements the Enterprise could connect LTE radio dots to the
SGW function in the vEPG. This will improve possibilities for location based
policies. Otherwise the Operator SGW and Enterprise PGW will be used.
The Enterprise subscribers can be stored either in the vSAPC internal database or
in an external repository, such as the CUDB. Note that when the Enterprise requires
real time cost control for Enterprise subscribers and the operator macro network
deploys the Ericsson Online Charging System (OCS), the Enterprise subscribers
could be stored in the Ericsson OCS instead.
The main reasons for including the SGSN-MME would be for protecting the
signaling from unauthorized access outside the Enterprise premises and to
minimize the impact, both performance and configuration, on the central MME
and DNS.
Typically, only the MME part of the SGSN-MME will be deployed. LTE radio
dots located at the Enterprise site are connected to the distributed MME and
SGW and using a designated Tracking Area. This will keep the payload and
signaling for the Enterprise users local to the distributed deployment.
The distributed MME only needs to communicate with the distributed SGW, and
vice versa. Only if the vMME is not included in the distributed deployment does
the vSGW need to communicate with the operator SGSN-MME.
The SGSN part only needs to be deployed if the Enterprise has 2G/3G access
with a dedicated RNC or BSC.
When the subscriber leaves the premises a handover will be performed to the
Operator’s central MME or SGSN if it is 3G coverage.
The Enterprise subscribers can be stored either in the vSAPC internal database or
in an external repository, for example, in an Enterprise directory server.
MBB VNS provides a virtualized and telecom grade cloud ready, scalable MBB
system with a small footprint and hardware redundancy.
The MBB VNS can seamlessly coexist with the existing physical MBB and
comes with a number of benefits:
• It uses the benefits of the cloud technology, i.e. fast deployment with
instantiation, decommissioning and scaling.
For Tier 1 operators it is possible to dedicate compute resources per VNF and
create large scale MBB cloud deployment. For Tier 2 and Tier 3 operators a
combination of small footprint, simple scaling and Hardware redundancy is
achieved by collocating virtualized network functions.
It is possible to combine the shared and dedicated resources approach, e.g. using
shared resources for a subset of VNFs and dedicated resources for another subset
of VNFs.
The physical and virtual packet core networks functions are managed in the same
way so existing OSS systems can be used for operating and maintaining also the
VNFs. To harmonize operation and maintenance between the physical and
virtualized packet core domains, OSS-RC/ENM will also collect PM and FM
data from the infrastructure domain (cloud management) per tenant and visualize
relevant information together with the application PM and FM data.
In the long run the requirements for the MBB VNS will be same as for an MBB
system built on physical nodes. In the short term, migration into a cloud MBB is
expected to happen gradually and triggered by certain uses cases (UCs).
1.5.2 Packaging
The same value packaging is applied to both VNFs and PNFs, i.e. the same base and
value packages as for PNFs apply to the VNFs. The VNF deliverables are
specific and some exceptions for the virtualized offering could exist. This means it
is possible to select exactly which value packages to purchase per VNF and that
there is no VNS specific value packaging for MBB VNS. By default, Software
for all VNFs part of MBB VNS is included when purchasing MBB VNS. At
deployment time it is up to the customer to decide which VNFs to deploy in a
data center, that is all are optional. If no specific capacity licenses are purchased a
basic trial license is included for all VNFs.
The vEPC can be used to handle capacity growth for an existing MBB operator.
The vEPC can offload the existing EPC by:
Default introduction scenario for existing Ericsson EPC customers and part of
migration into full vEPC longer term.
The vEPC VNFs can be seamlessly integrated with the existing EPC and the
VNFs provide full feature parity with the PNFs with certain exceptions. The
vEPC has full external compatibility with 3GPP interfaces, which ensures no
impact on Radio access and devices.
The MNO has a Radio Access Network (RAN) and a packet core network. The
MNO can make part of its packet core capacity available to one or several
MVNOs as a business deal. The MNO needs to maximize network utilization and
capture new revenue streams.
• Security
A new LTE entrant or Greenfield operator is an operator that lacks the constraints
from a previous network and can choose to deploy the LTE network based on a
vEPC from the beginning.
1.5.3.5 VoLTE
The vEPG and vSAPC VNF can be used to create a network slice for VoLTE.
VoLTE has different characteristics than MBB and the expansion of the packet
core network can scale better when separating the two services. The network slice
can improve the quality and reliability of VoLTE through the separation from the
MBB network.
vEPC can provide a faster introduction of Wi-Fi Calling. Starting with a network
slice on a small scale for validation it can later expand as needed with the
subscriber growth, with minimum impact on the existing MBB network.
The Ericsson Network Integrated Wi-Fi (ENIW) Solution describes a scenario for
Trusted Wi-Fi Access. Trusted Wi-Fi access can be added to offload the 3GPP
RAN or provide indoor access where 3GPP RAN is poor. The vEPG and vWMG
can provide a smooth introduction with minimum interference to the existing
MBB network as a network slice. The solution can scale as the number of
subscribers increase.
vEPC, and in particular VNS MBB, contains traffic data (subscriber specific,
traffic type and destination specific, flow specific) that can be used as input to
an SDN network, which can increase the value proposition of vEPC and SDN and their
interworking.
[Figure: Network slicing examples in a geographical area – MVNO slice (Operator A), MBB slice (Operator B), Wi-Fi slice and a load-balanced MBB slice (Set B), alongside the physical and virtual MBB networks]
Load balancing is not strictly a network slice. Load balancing can be done on
several levels of the network to offload certain nodes in the network.
- As the new slice has a dedicated vEPC instance, the slicing provides
resource isolation. Thus the introduction of an isolated network slice will
minimize the risk of impacting existing operator services.
• Shorter TTM
- The operators are concerned about the time it takes to set up the
network for a new service. Slicing of the network for different
services/operator use cases provides separation of concerns that can
result in a faster setup of a network slice for a certain service as it is
separately managed and with limited impact on other slices.
1.6 Communication
Communication is one of several VNSs for vEPC.
Operators worldwide are currently launching IMS-based voice and video calling
services over LTE and Wi-Fi access.
The Wi-Fi Calling can be performed from SIM based mobile devices or multi-
device. It offers a simple solution for voice and video calls – one that is fully
integrated with modern smartphones and does not require any additional
application. As such, the introduction of Wi-Fi Calling is expected to have an
impact on the use of OTT VoIP solutions, as well as fixed telephony offerings.
The VNS Communication contains the market leading EPC to support end-to-end
VoLTE and Wi-Fi calling solution.
1.6.1 Packaging
The VNS Communication is built up by one Base package and four Value
Packages (VPs). The VPs can be added to the Communication Base package
independent of each other.
The Communication Base Package is mandatory and contains the starting point
for the solution focusing on providing an easy and fast deployment of the VNFs
to offer Voice over LTE and Wi-Fi calling services.
vEPC Communication
› Video Calling
› Service Continuity
› Location Services
› Network Assurance and Efficiency
› Communication Base Package
The benefit for the operator is to reduce the time needed to introduce new
communication services. In addition, the operator is able to provide the solution
in a cost efficient way.
The Package Communication Base covers mobile access with some basic voice
over LTE functionality as well as basic Wi-Fi calling capabilities. These include
for example:
• VoLTE support
• Seamless Inter Access Point handover for Wi-Fi Calling with same SSID
and subnet
• SRVCC
The VP Video Calling enables the operators to run video calls over the LTE or
Wi-Fi access. These include for example:
The VP Location Services makes the location based service and service control
possible. These include for example:
• Offline Charging for seamless handover between LTE & untrusted Wi-Fi
• End to end QoS for Voice over Wi-Fi over Managed Wi-Fi (Phase I)
• EBM Events
2 Summary
4 Acronyms
DC Data Center
LB Load Balancer
VM Virtual Machine