CP4153 Network Technology
UNIT-1
Networking Concepts
Let’s begin our networking journey with a discussion of what exactly a network is. A network is a system
of computers and other devices that are connected together via cabling or wirelessly for the purpose of
sharing resources, data, and applications. Network design can vary from a simple network of two
computers connected together to a vast network spanning multiple locations and even across continents
(such as the Internet).
There are two main types of networks that you should know about, and they each serve a similar purpose
but are configured and managed differently. Peer to peer networks (also called workgroups) were the first
type of network to be used. In this type of network, there is no centralized management or security and
each computer is in charge of its own local users and file and folder permissions. Since there is no
centralized user management, any user who wants access to resources on another computer will need to
have an account on that specific computer. So, if a user wants access to files on ten different computers,
then that user will need ten separate user accounts. Computers on a peer to peer network are usually
connected together through a simple hub or network switch.
So, if Sally is a user on Computer A and she wants to access files on Laptop A and Computer C, then an
admin on Laptop A and Computer C will need to make a user account for her and then assign the
permissions she needs to be able to access those resources. You can imagine how complicated this would
get as the number of computers on the network grows. When the number of computers in a peer to peer
network starts to go past ten, then you can run into problems such as slowdowns from network broadcasts
and other traffic because all the traffic goes to each computer even though only the computer that it was
meant to go to will accept the information. Plus, many workgroup configured operating systems can only
accept ten concurrent connections at a time. So, if you have a computer acting as a file server for twenty
users, then only ten of them can connect to that file server at a time. Peer to peer networks work fine for
home networks or small office networks where there are not a lot of users and computers to manage. But
once you get to a certain limit, that is where you need to implement something more, such as a
client-server network. A client-server network has clients (workstations) as well as a server (or many servers).
As you can see in figure 1.2, the clients are labeled Computer A, Computer B, Laptop A, and so on. There
is also a file server and a directory server, which is used to manage user accounts and access controls.
All the computers and servers connect to each other via a network switch rather than a hub like we
saw in the peer to peer network, even though you can use a switch for a peer to peer network as well. The
main advantage here is that every user account is created on the directory server, and then each computer,
laptop, and other servers are joined to a domain where authentication is centralized for logins and
resource permissions. A domain is a centralized way to manage computers, users, and resources, and each
computer joins the domain and each user is created as a domain user. So, if a user named Joe on computer
C wants to access files on Laptop B, they can do so assuming their user account is allowed to. There is no
need to make a user account for Joe on Laptop B or any other computer on the network besides the initial
user created on the directory server. Using a switch rather than a hub reduces broadcast traffic because the
switch knows what port each computer is connected to and doesn’t have to go to each computer or server
to find the one it is trying to get to. Switches can be thought of as "smart" hubs, or hubs can be thought of
as "dumb" switches. The client-server model is also very scalable, and the amount of concurrent
connections to a server is only limited to the licensing model in place and eventually the hardware limits
in regards to network bandwidth and server capacity. Directory servers can handle managing thousands of
users with little hardware resources needed.
One disadvantage of the client-server model is the single point of failure: if a directory server goes down,
none of your users can log in. This is often bypassed by having multiple directory servers (or domain
controllers).
Overall, if your environment has many resources and you want to centralize management of your users
and computers, then you should go with the client-server model. If you only have a few computers and
users, then a peer to peer configuration should work just fine. Plus, you can reconfigure it to a
client-server model in the future if needed.
Network Devices
When many people start learning about networking, they assume that a network only consists of
computers talking to each other. In some cases this is true, like in many small peer to peer home
networks, but in the real world there is usually much more to it. A network can also include devices such as the following:
• Computers
• Servers
• Printers
• Copiers
• Storage arrays
• Wireless access points/Wi-Fi routers
Of course, there are also routers, switches, and firewalls (which I will discuss in Chapter 2).
You can even consider your Smartphone and tablet as network devices because they are technically on
your network and are connected to your wireless access point to get their Internet connection. You will
probably hear me say this many times in this book, but the Internet is the biggest network in the world.
Network Terminology
There is a lot of terminology that goes along with networking, as well as acronyms for just about
everything. I will be going over many of them in this book, but for now I wanted to talk about some of the
more common terms that you might have heard of. Then we will get into the more complicated (and
exciting) stuff later. Many of these terms will be discussed later on, so I won’t go into too much detail just
yet.
Ethernet – A standard of network communication using twisted pair cable.
Server – A computer (or other device) that is used to store files or host an application.
Bandwidth – The capacity of a network communications link to transmit the maximum amount of data it
can from one point to another over a network connection.
Fiber Optic – A type of network cable that uses a super thin glass or plastic core to transmit data via a
light signal.
Network Card – A piece of hardware installed in a computer or other device that is used to transmit
network data from the network to the device itself.
Protocol – A set of rules used in network communication between devices needed to exchange
information correctly.
Packet – A small amount of data sent over a network that contains information such as the source and
destination address as well as the information that is meant to be transmitted.
Port – A number that identifies one side of a connection between two computers that is used to identify a
specific process.
ISP – An Internet Service Provider is the company you get your Internet connection from.
Cat5 Cable – Category 5 cabling is a standard of network cable that is of a certain type and speed rating.
Cat5 is considered to be outdated these days, and has been replaced by faster versions such as Cat6 and
Cat7.
DNS – The Domain Name System translates hostnames to their IP addresses so we don’t need to
remember IP addresses when connecting to other network devices.
DHCP – The Dynamic Host Configuration Protocol is used to assign IP addresses to devices so they are
able to communicate on the network.
Network Speeds
When it comes to network hardware, not all network devices are equal when it comes to speed. And when
it comes to network performance, speed costs money. Network speeds have been increasing over the
years, and these faster devices are becoming more commonplace, so you will see them being used more
often. Network speeds are usually measured in Mbps (Megabits per second) and Gbps (Gigabits per
second) with 1Gbps equal to 1000Mbps. Back in the beginning days of networking, speeds were around
10Mbps. Now it’s common to see 10Gbps and even 40Gbps being used in modern datacenters. In order
for a network to be able to utilize the desired speed, the hardware and the cabling need to be able to
support that particular speed, otherwise the network will function at the speed of the slowest device on the
link. That doesn’t mean that if you have five computers with 10Gbps network cards and one with a
10Mbps network card, all the computers will function at 10Mbps; it will depend on the network
configuration itself.
OSI Model
Now it’s time for a little networking theory (I know you’re excited for this!). If you plan on having a
career in the networking field, then you will need to know about the Open Systems Interconnection
reference model. The OSI model was developed in the 1970s by the International Organization for
Standardization (ISO) to help standardize network technologies so computers from different
manufacturers could communicate with each other by making compatible network hardware, software,
and protocols. The OSI Model is divided up into seven layers or logical groupings that are grouped in a
hierarchical format. I will briefly go over each layer and its purpose next. If you want to turn yourself into
a super networking geek, then you should go out and find some resources that cover the OSI Model in
greater detail.
Application Layer – This is where users communicate and interact with the computer and use programs
(applications).
Presentation Layer – This layer presents data to the Application layer and translates the data as needed.
Session Layer – Here is where network sessions between the Presentation layer entities are set up,
managed, and dismantled.
Transport Layer – This layer takes data from the upper layers and combines it into a data stream
providing end to end data transport services.
Network Layer – Also known as layer 3, this is where device addressing and data tracking takes place.
Data Link Layer – Provides for the physical transmission of data and also takes care of flow control and
error notifications.
Physical Layer – This layer communicates with the actual communication media using bits which have a
value of either 0 or 1.
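To tie the seven layers together, here is a minimal sketch in Python that pairs each layer with a few protocols and technologies commonly associated with it. The pairings are informal illustrations (common associations, not an official part of the OSI standard):

```python
# Informal associations between OSI layers and familiar technologies
OSI_LAYERS = {
    7: ("Application",  ["HTTP", "FTP", "SMTP", "DNS"]),
    6: ("Presentation", ["TLS/SSL", "JPEG", "ASCII"]),
    5: ("Session",      ["NetBIOS", "RPC"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP", "ICMP", "routers"]),
    2: ("Data Link",    ["Ethernet frames", "MAC addresses", "switches"]),
    1: ("Physical",     ["cables", "hubs", "bits on the wire"]),
}

# Print the stack from the top (layer 7) down to the wire (layer 1)
for number in sorted(OSI_LAYERS, reverse=True):
    name, examples = OSI_LAYERS[number]
    print(f"Layer {number} - {name}: {', '.join(examples)}")
```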
Packets, Frames, and Headers
Now that we know that networking involves transmitting data between two or more devices along a
network media such as a cable (or at least we should know), let’s now take a look at the actual data that is
formatted on its journey.
Packets
A network packet (also called a datagram) is a formatted unit of data that not only has the information
that is meant to be sent to its destination (the payload) but also contains control information such as the
source and destination address that is used to make sure the payload arrives where it was meant to go.
Networks that use packets to send data are called packet switched networks, and packets operate at the
Network Layer (layer 3). Think of packets as chunks of data sent over the network so that congestion
doesn’t take place because
of too much traffic on the media. A packet is created by the sending device and then is sent to the protocol
stack running on that device, where it is then sent out on the network via the networking hardware. Then
on the receiving end the packet is passed to the appropriate protocol stack and processed. There are some
other things that take place within this process as the packet traverses the OSI layers, but for the sake of
keeping it simple, let’s just say a packet goes from the source device to the destination device over the
network and contains the data that is meant to be delivered along with other data that help move the
process along. Figure 1.3 shows the most basic form of a packet and includes the source address,
destination address, type (tells the OS what kind of data the packet carries), the data, and then CRC
information. CRC stands for cyclic redundancy check, and it performs error checking functions. There are
other types of packets but we will mostly be concerned with IP packets for our discussion.
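To make the packet fields from figure 1.3 concrete, here is a minimal Python sketch that packs a source address, destination address, type field, and payload into bytes and appends a CRC for error checking. The field sizes and function names are illustrative assumptions, not an actual protocol layout:

```python
import struct
import zlib

def build_packet(src: bytes, dst: bytes, ptype: int, payload: bytes) -> bytes:
    # Illustrative layout: 6-byte source, 6-byte destination, 2-byte type field
    header = struct.pack("!6s6sH", src, dst, ptype)
    body = header + payload
    crc = zlib.crc32(body)                # cyclic redundancy check over the packet
    return body + struct.pack("!I", crc)

def packet_is_intact(packet: bytes) -> bool:
    body, (crc,) = packet[:-4], struct.unpack("!I", packet[-4:])
    return zlib.crc32(body) == crc        # recompute the CRC and compare

pkt = build_packet(b"\x00\x15\xb7\x46\x92\x01", b"\x00\x15\xb7\x46\x92\x02",
                   0x0800, b"hello network")
print(packet_is_intact(pkt))              # True unless the packet was corrupted
```

If any bit of the packet is flipped in transit, the recomputed CRC no longer matches and the receiver knows the packet is corrupted.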
Frames
Frames are also considered to be a unit of data themselves, and operate at the Data Link layer (layer 2).
They are very similar to packets in the way they work, but their structure is different. Frames
are used to transport data on the same network and use source and destination MAC addresses (discussed
in Chapter 5) rather than IP addresses. A frame is sent over the network and an Ethernet switch checks the
destination address of the frame against a MAC lookup table in its memory. Then, it takes that
information to determine which port to send the data out so it reaches its intended destination. Figure 1.4
shows the components of a typical network frame.
Preamble – An alternating 1,0 pattern that tells the receiving system that a frame is starting and enables
synchronization.
SFD (Start Frame Delimiter) – Denotes that the destination MAC Address field begins with the next
byte.
Destination and Source MAC Addresses – The hardware addresses of the receiving and sending devices.
Data and Padding – Contains the payload data handed down from the Network Layer to the Data Link
Layer. Padding data is added to meet the minimum length requirement for this field, which can be 46 to
1500 bytes.
FCS (Frame Check Sequence) – The field at the end of the frame that contains a Cyclic Redundancy
Check (CRC) which allows for the detection of corrupted data.
Headers
Headers are used as part of the process of packaging of data for transfer over network connections. There
are two types of headers that we will focus on here, and they are TCP and UDP (TCP and UDP are discussed in Chapter 6). TCP
headers contain 20 bytes while UDP headers contain 8 bytes. Think of a header as information that helps
prepare an end device for additional, more specific information. A header contains addressing and other
data that is required for a packet to reach its intended destination. So, without a header, your data won’t
make it to where you want it to go.
Collision Domain
A collision domain is a term used to describe a group of networked devices on a network segment where
one of those devices sends out a packet on the segment and all of the other devices on that segment are
forced to pay attention to it. But if another device sends out a packet at the same time, then you will have
a collision, which will require the packet to be resent. Back in the old days of hubs, if one computer sent
out a packet, then every other computer connected to that hub or even to other hubs on that segment
would be forced to listen to the transmission. Collisions would occur if more than one device transmitted
at the same time. Today’s switches help alleviate collisions by making each port its own collision domain,
greatly reducing the amount of collisions. These days collisions are also greatly reduced by using full
duplex connections, where the network devices are able to send and receive at the same time.
Broadcast Domain
A broadcast domain is a term used to describe a group of networked devices on a network that hear all
broadcasts that have been sent on all segments. A broadcast is where a device sends out a message to
every device on the network. This is another reason we use switches instead of hubs, to reduce the size of
our broadcast domains and prevent broadcast storms. As you can see in figure 1.5, we have two networks
connected to our router by switches. Each port on the switch connects to a computer and is considered a
collision domain. Then all of the computers connected to that switch are part of a broadcast domain.
Routers won’t pass broadcast traffic, so we don’t need to worry about the computers on Network 2 getting
broadcast traffic from Network 1.
LANs and WANs
A LAN is a network that spans one location such as your home, office, or even an entire building. Some
people also consider networks that span a location like a school campus a LAN. The network devices can
be connected via cabling or by using a wireless connection. The network may consist of one network
segment or multiple network segments with different IP address ranges assigned to them. To use different
IP address ranges on a network requires the use of a layer 3 device like a router or a layer 3 switch to
allow the segments to communicate with each other. (IP addresses will be discussed in Chapter 5 and
routers and switches will be discussed in Chapter 2.) A WAN, on the other hand, is a network that
extends over a large geographical distance such as between cities, states, or even countries. Since it’s not
possible to use standard networking media such as Ethernet cables or Wi-Fi connections, WANs rely
more on leased lines from entities like phone companies and cable providers to cover the larger distances.
These connections usually consist of long fiber optic cable runs designed for long distance
communications.
Network Adapter
Let’s start this discussion with one of the most basic (and cheapest) components, which would be the
network adapter (figure 2.1). Network adapters are used as the interface between the network and the
computer, server, printer, or whatever device that the adapter is installed in. A computer (etc.) can have
several network adapters, and network adapters can come with several network ports on one adapter.
Common ports for these adapters include RJ45, which is used for Ethernet cables, and SFP, SC, LC, and
GBIC connections for fiber optic cables. The cable connects to the port on the back of the card and then
the card itself plugs into a slot on the motherboard in the computer.
Many desktop computers have built-in network adapters that are integrated with the motherboard similar
to the way laptops are designed. This doesn’t mean that you can’t upgrade your network adapter with a
faster model or one with more ports.
Hub
Next I want to briefly mention a network device that is pretty much obsolete, but you might run into one
of them at some point in your networking career. Hubs (figure 2.2) are devices with four or more ports
(usually not more than sixteen ports) that simply transfer the packets from the network cable out all the
ports. Hubs are layer 1 devices and also act as repeaters, meaning they regenerate the signal, allowing it to
travel further down the cable than it would if it was just a straight cable run.
The problem with hubs is that they are “dumb”, meaning they don’t know anything about any of the
devices on the network and that’s why they just forward all the traffic out all the ports, which is also
known as broadcasting. This is why they are not practical on larger networks and usually only seen on
small home networks.
Switch
Network switches (figure 2.3) are what we use instead of hubs when we want to build a network the right
way. Switches have built-in intelligence, some more so than others. Switches operate at layer 2, and
can also operate at layer 3 assuming you have the right type. Layer 3 switches can also perform routing
like a router does (discussed next).
The way a switch functions is by keeping track of the MAC addresses of devices that pass traffic through
it. MAC addresses are the burned in hardware addresses that every network device has, and will be
discussed in more detail in Chapter 5. This MAC address information is stored in MAC filter tables (also
known as Content Addressable Memory tables). By referencing these tables they avoid broadcasting all
network traffic out all ports whenever some device on the network decides it wants to start
communicating. Once a MAC address is stored on the switch with its associated/connected port number,
the switch will only pass traffic through that port when traffic comes into the switch for a device with a
known MAC address. One downside is that when the switch is powered off, the information in the MAC
filter table is lost and has to be rebuilt as connections are made after the switch is turned back on.
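Here is a toy Python model of the MAC-learning behavior just described; it is a sketch for intuition, not how real switch firmware is written. The switch learns which port each source MAC address lives on, forwards to a single port when the destination is known, and floods to all other ports when it is not:

```python
class LearningSwitch:
    """Toy model of a switch's MAC filter (CAM) table."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}   # MAC address -> port number (held only in memory)

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list:
        self.mac_table[src_mac] = in_port        # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # forward out the known port only
        # Unknown destination: flood out every port except the one it came in on
        return [p for p in range(1, self.num_ports + 1) if p != in_port]

switch = LearningSwitch(num_ports=8)
print(switch.receive(1, "AA", "BB"))   # BB not learned yet: floods ports 2-8
print(switch.receive(2, "BB", "AA"))   # AA was learned on port 1: returns [1]
```

Because the table lives only in memory, power-cycling the switch empties it, which matches the rebuild behavior described above.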
Router
At layer 3 of the OSI Model we have routers, which perform a variety of functions to keep data flowing
on the network. The purpose of a router (figure 2.4) is to “route” traffic from one network to the other. If
you want to communicate with a computer or other device on a network that is different from your own,
then the traffic will need to be routed between them. Routers use IP addresses to forward traffic while
switches rely on MAC addresses to take care of their traffic. Layer 3 switches can also route traffic
between different networks while also performing switching functions. They are not a replacement for
routers, but are instead used internally for different network segments within your organization. When
some data (network packet) comes in on one of the ports on the router, the router reads the network
address information in the packet to determine its destination. Then, using information in its routing table
or policy, it sends the packet to the next network to ultimately get to its final destination. It might have to
go through several routers (with each step called a hop) to get to its destination. Routing tables are
databases stored in RAM that contain information about directly connected networks. These tables can be
updated and maintained dynamically (automatically) by routing protocols (discussed in Chapter 6) or
statically (manually). Routing tables will keep track of information such as the network address assigned
to interfaces, subnet masks, route sources, destination networks, and outgoing interfaces.
So, the bottom line is that if you have different networks (such as 192.168.1.0 and 192.168.2.0) and want
to send data between the two, then you will need a router to do so, or at least a layer 3 switch that can
perform routing functions.
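The routing table lookup described above can be sketched in a few lines of Python using the standard ipaddress module. The networks, next hops, and interface names here are made-up examples, and real routers also weigh route sources and metrics:

```python
import ipaddress

# Hypothetical routing table: destination network -> (next hop, outgoing interface)
routes = {
    ipaddress.ip_network("192.168.1.0/24"): ("directly connected", "eth0"),
    ipaddress.ip_network("192.168.2.0/24"): ("10.0.0.2", "eth1"),
    ipaddress.ip_network("0.0.0.0/0"):      ("10.0.0.1", "eth1"),  # default route
}

def route(dst: str):
    addr = ipaddress.ip_address(dst)
    # Pick the matching route with the longest prefix (the most specific network);
    # the default route matches everything, so there is always at least one match.
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(route("192.168.2.15"))   # ('10.0.0.2', 'eth1')
print(route("8.8.8.8"))        # ('10.0.0.1', 'eth1') via the default route
```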
Firewall
When people think of a firewall they think of it as something to keep the bad guys out of their computer.
While this is certainly true, firewalls can do much more than just offer protection from the outside world.
Firewalls can be a hardware based device (like in figure 2.5), or they can also run as software on a
computer on your network. If you are a Microsoft Windows user, you might have noticed that you have
the Windows Firewall installed on your computer, and it is usually preconfigured to where you don’t have
to do much with it. These locally installed firewalls are called host firewalls, and differ from a network
firewall because they just protect the specific host that they are installed on.
As for hardware (or network) firewalls, they come in different varieties from the basic home unit to the
expensive types (like shown in figure 2.5). These enterprise level firewalls are designed to perform
various advanced functions beyond simple traffic filtering.
You would place your hardware firewall between your internal network and the Internet to protect your
network from outside threats and to filter traffic and services so that you can control what comes in and
out of your network and what your end users can see and do. Figure 2.6 shows you the basic concept of
how a firewall sits between the Internet and your internal network and controls access to data and
services.
Now that you have an overview of the main hardware used with networking, let’s put it all together in one
diagram (figure 2.7). The configuration can vary depending on your needs. For example, you can have a
router between the Internet and the firewall, and we can assume that the switches are layer 3 switches
since they are “routing” traffic between two different networks.
What is an IP Address?
There is an old yet still widely used analogy comparing IP addresses to street addresses that comes in
handy when you’re first getting into the concept of network addressing, where you compare computer IP
addresses to house street addresses and packets to packages or mail. If the postal carrier has a package
that needs to be delivered to your house, they will need to know your address in order to get it there. And,
of course, your address is unique to your street, so you can think of your house as your computer and your
street as the network segment, with your computer having a unique address of its own on the network.
Right now we are mostly using IPv4 IP addresses for public (external) and private (internal) addressing.
IPv4 addresses are 32 bit binary numbers that have four octets which can contain values from 0-255. An
example of a common, privately used address is 192.168.1.20. You might see something like this
assigned to your home computer from your broadband modem or wireless router. These types of IP
addresses can be reasonably easy to remember depending on how many you have. If your network
devices are on the same network or subnet, then each device will have a similar address, such as
192.168.1.30 and 192.168.1.32, so you only need to memorize the last number (octet) for each device.
Then again, there is subnetting and address classes (discussed later) where it's not that simple, and you
may have a more complicated address scheme going on, especially in larger networks with multiple
subnets in multiple locations.
Subnet Masks
IP addresses have another component to them that determines what part of the IP address belongs to the
network and what part identifies the host. This address (called a subnet mask) is also 32 bits like an IP
address and is represented in four octets separated by periods. An example of a common subnet mask
would be 255.255.255.0. There are default subnet masks for the A, B, and C classes of networks, and I
will be discussing classes later in this chapter. But, for now, here are the defaults for each class.
• Class A = 255.0.0.0
• Class B = 255.255.0.0
• Class C = 255.255.255.0
The way you determine what part of the subnet mask is for the network and what part is for the hosts is by
converting the address to its binary number and using the 1s for network addresses and 0s for host
addresses. So in our 255.255.255.0 example it would translate into
11111111.11111111.11111111.00000000, with all of the addresses in the first three octets belonging to
the network and all of the last octet belonging to the hosts.
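Python's standard ipaddress module can do this network/host split for you, which is a handy way to check your work. A quick sketch using the example mask above:

```python
import ipaddress

# Pair the IP address with its subnet mask
iface = ipaddress.ip_interface("192.168.1.20/255.255.255.0")
print(iface.network)             # 192.168.1.0/24 - the network portion
print(iface.netmask)             # 255.255.255.0
print(bin(int(iface.netmask)))   # 0b11111111111111111111111100000000
```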
Default Gateway
One very important part of your computer’s IP configuration is the default gateway. This is the IP address
that is used when traffic from your network needs to leave the local network and get to a different
network. The default gateway is usually the interface of your router or other layer 3 device that is used to
pass traffic in and out of your local network. And by local network I mean the network that your
computer is a part of. You may have just one network, or you can have many networks that are all
connected together by routers or layer 3 switches.
As you can see in figure 4.1, when Computer 1 wants to communicate with Computer 4, it has to use its
default gateway of 192.168.1.1 to get out of its local network (Network 1) and get to Computer 4, which
is located in Network 2. Then the router will use its routing table or a routing protocol (discussed in
Chapter 6) to determine where Computer 4 is located and pass the traffic along to its destination.
You can assign a default gateway manually, or your DHCP server can do it for you. DHCP is usually a
better option in large networks to avoid a lot of manual labor and potential mistakes. (DHCP is discussed
later in this chapter.)
To find your IP configuration on a Windows computer, open a command prompt and run the ipconfig
command. As you can see in my example, my IP address is 192.168.0.2 and my subnet mask is
255.255.255.0. If
you want to get more detailed information, including your default gateway address, computer’s host
name, DHCP server address, etc., then use the same command but add the /all switch to the end so it
looks like ipconfig /all. As you can see from figure 4.3, you get much more detailed information with this
command.
You can also find your IP address using the GUI (graphical user interface), but the command prompt
method is much faster. For other operating systems you can use their associated commands. For example,
Linux and Apple computers use the ifconfig command rather than ipconfig command.
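If you want to look up your machine's IP address from a script rather than the command line, one common trick is shown below. This is a hedged sketch: the 8.8.8.8 target is arbitrary, and connecting a UDP socket does not actually send any packets.

```python
import socket

# Connecting a UDP socket toward any outside address makes the OS pick the
# local interface (and therefore the local IP) it would use to get there.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print(s.getsockname()[0])   # e.g. 192.168.0.2
s.close()
```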
IP addresses can be assigned statically (manually) or dynamically (automatically) by a DHCP server, but
if you are the type that likes to do things yourself, then it’s easy to assign an IP address to your computer
manually. Before doing so you just need to make sure you have all the appropriate information handy,
otherwise you could be looking at communication problems. Once again I will stick to the Microsoft
Windows method of manually assigning an IP address, and this time we will do it from the GUI. Keep in
mind that this method may vary a little depending on what version of Windows you are
running. What you need to do is go into the Windows Control Panel and find Network and Sharing
Center. Then click on the link on the left that says Change adapter settings. Then find the network adapter
that you are currently using in the list. You might only have one, and if that’s the case it makes things
easier. Next you should right click on the appropriate adapter and choose Properties, and from the
Networking tab click on Internet Protocol Version 4 and click Properties again. If your computer is set to
get its IP address automatically from a DHCP server, then your properties box will look similar to figure
4.4.
To manually assign an IP address to your computer click on the radio button that says Use the following
IP address and enter in the appropriate information for the IP address, subnet mask, and default gateway
(figure 4.5). Just make sure the IP address you use is not in use on your network by any other device. For
the DNS settings you will need to find out what DNS servers are being used on your network (DNS is
covered in Chapter 6).
If your network is using DHCP for IP address distribution, then you can leave the DNS settings set to
automatic while having the IP settings set to manual, and you will get the DNS settings configured
automatically for you.
DHCP
Since I have mentioned DHCP a few times in this book I figure now is a good time to go into a little more
detail about what it is and how it works because your network will most likely be using DHCP whether
it’s a large network or a small one.
DHCP stands for Dynamic Host Configuration Protocol, and it was designed to simplify the management
of IP address configuration by automating this configuration for network clients. All computers that
participate on TCP/IP networks or the Internet need to have IP addresses assigned to them and have other
IP information configured. Some of the additional information needed by network clients may include a
subnet mask, default gateway, and DNS server information. This information is needed in order for the
computer to do things such as send data outside the network and resolve host names to IP addresses.
Rather than manually inputting all of this information on each client, DHCP can do this for you
automatically once it’s set up on the DHCP server. In order for DHCP to work, you need to have a device
acting as a DHCP server. This device can be a computer, router, or other type of network device. The
DHCP server is configured with a range or ranges of IP addresses that can be used to give to clients that
request one. It can also be configured with other network parameters, as stated earlier. For a client to be
able to obtain information from a DHCP server, it must be DHCP enabled. When it is configured this
way, then it will look for a DHCP server when it starts up. This process will vary depending on what
implementation of DHCP is in use. For example, the Microsoft implementation of DHCP works as
follows:
➢ The client sends out a DHCPDiscover packet the first time the client attempts to log on to the
network.
➢ Then the DHCP server that receives the DHCPDiscover packet responds with a DHCPOffer
packet which contains an un-leased IP address and any additional TCP/IP configuration
information.
➢ When a DHCP client receives a DHCPOffer packet, it then responds by broadcasting a
DHCPRequest packet that contains the offered IP address, and shows acceptance of the offered IP
address.
➢ The selected DHCP server acknowledges the client DHCPRequest for the IP address by sending a
DHCPAck packet and then the client can access the network.
➢ DHCP clients try to renew their lease when fifty percent of the lease time has expired by sending
a DHCPRequest message to the DHCP server.
➢ They also send this message when they restart to try and get the same IP configuration back.
The amount of time a client keeps its lease on its IP address varies depending on how it is setup. The
default Microsoft duration is eight days, and most computers end up with the same IP address they had
before when it comes time to renew. If the client computer is setup to use DHCP to obtain its IP address
and cannot find a DHCP server, then it will most likely use an APIPA (Automatic Private IP Addressing)
address instead. When using APIPA, DHCP clients can automatically self-configure an IP address and
subnet mask for themselves when a DHCP server is not available. The IP address range used by APIPA is
169.254.0.1 through 169.254.255.254 with a class B subnet mask of 255.255.0.0. The client will use this
self-configured IP address until a DHCP server becomes available. So, if you are trying to configure your
new router at home and notice your IP address is 169.254.x.x when running the ipconfig command, then
it’s most likely because your computer can’t get an IP address from the router. With DHCP, you can also
do things like
reserve an IP address for a specific computer or exclude a range of IP addresses so they will not be given
out to DHCP clients. Plus there are special settings called options where you configure things such as
your DNS and gateway (router) configurations so they are given to clients along with the IP address
settings.
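Here is a toy Python sketch of the DHCP scope behavior just described (leases, reservations, and exclusions). The class and names are invented for illustration; real DHCP servers also track lease lifetimes and run the full DORA handshake:

```python
import ipaddress

class DhcpPool:
    """Toy sketch of a DHCP scope with exclusions and per-MAC reservations."""

    def __init__(self, network, exclusions=(), reservations=None):
        self.reservations = dict(reservations or {})   # MAC -> fixed IP
        taken = set(exclusions) | set(self.reservations.values())
        self.free = [str(ip) for ip in ipaddress.ip_network(network).hosts()
                     if str(ip) not in taken]
        self.leases = {}                               # MAC -> currently leased IP

    def offer(self, mac):
        if mac in self.reservations:        # reserved clients always get their IP
            return self.reservations[mac]
        if mac in self.leases:              # renewing clients get the same IP back
            return self.leases[mac]
        self.leases[mac] = self.free.pop(0) # hand out the next unleased address
        return self.leases[mac]

pool = DhcpPool("192.168.1.0/24", exclusions={"192.168.1.1"},
                reservations={"00-15-B7-46-92-AA": "192.168.1.50"})
print(pool.offer("00-15-B7-46-92-AA"))   # 192.168.1.50 (reservation)
print(pool.offer("00-15-B7-46-92-BB"))   # 192.168.1.2  (first free address)
```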
As I mentioned earlier, there are two parts to an IP address you need to know about: the network portion
and the host portion. This will determine how many addresses are reserved for different networks and
how many are reserved for the hosts on those networks. There are designated classes (also called classful
addressing) that help to keep the network and host addresses in order, and these are the main ones.
Remember that octets are the numbers between the dots of the IP address, so it looks like
octet1.octet2.octet3.octet4.
Class A—The first octet is for the network address, and the last three octets are the host addresses. An IP
address that has a number between 1 and 126 in the first octet is a Class A address.
Class B—The first two octets are for the network address, and the last two octets are the host addresses.
An IP address that has a number between 128 and 191 in the first octet is a Class B address.
Class C—The first three octets are for the network address, and the last octet is the host address. The first
octet range of 192 to 223 is a Class C address.
Class D—These are used for multicast addresses and have their first octets in the range of 224 to 239.
Class E—Reserved for future use and has the range of addresses in the first octet from 240 to 255.
You might have noticed that the IP address starting with 127 is missing from the list. This is because it’s
reserved for loopback functionality, meaning that datagrams sent to a network 127 address should loop
back inside the host. In other words, the datagram will make a loop and return back to itself. This is used
for testing purposes and 127.0.0.1 is the most commonly used loopback IP address. When you use these
default classes of IP addresses it’s pretty straightforward, but when you get into subnetting where you are
dividing one network into two or more networks to adjust the available number of networks or hosts you
can use, then you start to get into what they call Classless Inter-Domain Routing, or CIDR. (Subnetting is
discussed later in this chapter.) With IPv4 there are about 4.3 billion total addresses, and the ones that are
publicly owned cannot be used in more than one place. Private IP addresses can be used in multiple
locations as long as they are not duplicated on the same internal network. We are officially out of public
IPv4 IP addresses, so now we will need to start implementing IPv6 in order to continue. IPv6 is already
being implemented, and though it has not been widely accepted yet, it will be soon enough.
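The classful rules above are easy to express in code. A small Python helper (illustrative only) that classifies an address by its first octet:

```python
def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first == 127:
        return "Loopback"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (reserved)"

print(address_class("192.168.1.20"))   # C
print(address_class("10.0.0.1"))       # A
```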
You may or may not have heard the term ones and zeros when people talk about computers and
networking. What they are referring to is the binary numbering system which computers use to function
with a 1 digit representing on and a 0 digit representing off. Each of these 1s or 0s represents a bit,
and an IP address contains 32 bits. Since the highest value of an octet for an IP address is 255, we only
need to worry about knowing the binary values from 0-255. And the way we do this is by having a range
of standard numbers that we are used to, and then convert them to binary based on whether we need to
use that number or not. This range of numbers is as follows:

128 64 32 16 8 4 2 1

Now, in order to determine which numbers are being used and which are not, we would put a 1 for the
value of a number we need to use and then a 0 for a number we don’t need to use. For example, if we
needed to figure out what the number 16 is in binary, we give the number 16 from the list above a value
of 1 for on and the rest of the numbers a 0, so it would look something like this:

128 64 32 16 8 4 2 1
00010000
So 16 in binary = 00010000. That is a pretty simple example, so let’s try something a little more
complicated. What would 207 be in binary? Just because it’s a larger number doesn’t mean it has to be
any harder. Simply calculate it the same way, but this time use the numbers that are required to come up
with 207 and start from the left.
128 64 32 16 8 4 2 1
11001111
Now if we apply the same math to an entire IP address, we will get its binary equivalent. Let’s use one of
the IP addresses that I have been using so far, which was 192.168.1.20, and convert it to binary.
192 = 11000000
168 = 10101000
1 = 00000001
20 = 00010100
11000000.10101000.00000001.00010100
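To check a conversion like this quickly, one line of Python reproduces it:

```python
ip = "192.168.1.20"
# Format each octet as an 8-bit binary string and rejoin with dots
print(".".join(f"{int(octet):08b}" for octet in ip.split(".")))
# 11000000.10101000.00000001.00010100
```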
As you can see, the concept of binary numbers is not really difficult, but things start to get more
interesting (and complicated) when you get into Classless Inter-Domain Routing (CIDR) and subnetting.
CIDR is an IP addressing scheme that allows us to customize the allocation of IP addresses. Think of it as
a replacement for the original A, B, and C classful scheme that was mentioned earlier in this chapter.
CIDR was designed to extend the life of IPv4 by allowing us a method to conserve IP addresses until we
are ready for IPv6. Going back to our original class A, B, and C scheme, as you can see we are stuck with
set numbers of network and host IDs per network class.
Now if you count the number of 1’s in the subnet mask, you will get your slash format number.
Class A - 11111111.00000000.00000000.00000000 = /8
Class B - 11111111.11111111.00000000.00000000 = /16
Class C - 11111111.11111111.11111111.00000000 = /24
CIDR borrows bits from the host portion of the subnet mask to be used for the network mask which
allows you to make adjustments as to how many possible networks and hosts you can have on a network.
So if you take a class C subnet mask and borrow two of the host bits to increase your amount of network
bits, it would look like this:

11111111.11111111.11111111.11000000 = /26

As you can see, now it’s a /26 because we have 26 1s in the subnet mask. This is where subnetting comes
into play, so let’s start that discussion now.
Basic Subnetting
In order to solve the problem mentioned above about having a limited set of networks and hosts when
using the classful IP addressing scheme, we now use subnetting to get the results we need to make our
network design work for us. I am going to give a brief overview of subnetting because there is so much to
it that you can write a book on it. It’s beyond the scope of this book, otherwise it would be called
Networking Made Difficult! Before we begin, let’s look at the class A, B, and C network (N) and host (H)
defaults for subnet masks.
Class A network
N.H.H.H 255.0.0.0
Class B network
N.N.H.H 255.255.0.0
Class C network
N.N.N.H 255.255.255.0
To determine the number of networks and hosts that can be used with a particular subnet mask, you need
to do a little math and use your binary skills. It’s also a good idea to know your power of 2s when doing
the calculations. (Or you can just use a subnet calculator and not worry about it!) One thing you have to
be aware of is what class of IP address you are subnetting because that will determine where you start
counting the network bits to be used. Let’s use a class C address, which uses the 4th octet for subnetting.
For our example we have a class C IP address with a subnet mask of 255.255.255.224. You might have
noticed that the subnet mask is different than the standard class C 255.255.255.0 subnet mask. You might
remember how 255.255.255.0 converts to binary.
255.255.255.0 = 11111111.11111111.11111111.00000000, or /24 because of the 24 1s. In that case, our
subnet mask of 255.255.255.224 converts as follows:

128 64 32 16 8 4 2 1
11100000

128 + 64 + 32 = 224

We have taken three bits from the host portion of the address, since we originally had 24 and now have 27
1s for our network portion of the subnet mask. To determine how many networks and how many hosts we
can get from this subnet mask, we will start our calculations. To get the number of available networks you
take the number of borrowed network bits (the added 1s in 11100000), which in our case is 3 because we
are using the last octet for subnetting purposes, and raise 2 to the power of that number. So we have
2^3 = 8 networks. To get the raw number of hosts per network, take 2 to the power of the remaining host
bits (the five 0s), so 2^5 = 32.
But we can’t use the first and last address in each network because the all-0s host address represents the
network address itself and the all-1s host address is the broadcast address, so, in reality, to get the number
of usable hosts per network we use 2^5 - 2 = 30.
With the /27 subnet mask we end up with 8 different networks with 30 hosts allowed per network, rather
than the default class C subnet mask, which under classful addressing gives us 2,097,152 possible
networks with 254 hosts on each one, which is more than any one person or company would ever use.
This comes in handy for places like ISPs
who have to give out public IP addresses, but don’t want to waste any at the same time. You can use
subnetting on your internal network to change the number of networks or hosts that you have available to
fine tune your IP address scheme. Here is a shortcut chart you can use to match the number of bits turned
on for each subnet mask number. It’s a good idea to know the numbers from 128 to 255.
.0 = 00000000
.128 = 10000000
.192 = 11000000
.224 = 11100000
.240 = 11110000
.248 = 11111000
.252 = 11111100
.254 = 11111110
.255 = 11111111
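You can verify the /27 math from this section with Python's ipaddress module, which works as a built-in subnet calculator. A quick sketch:

```python
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
for subnet in network.subnets(new_prefix=27):        # borrow 3 host bits
    print(subnet, "- usable hosts:", subnet.num_addresses - 2)
# Prints 8 subnets (192.168.1.0/27 through 192.168.1.224/27), 30 usable hosts each
```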
VLANs
As networks started getting larger, with more hosts being added to these networks, it became necessary to
break these networks up to cut down on the amount of broadcast domains in a switched network.
Segregating networks with routers internally is not practical, so this is where VLANs come into play. A
virtual LAN is a logical grouping of network devices and resources connected to ports on a switch that
have been designated for that virtual LAN. This allows you to have smaller broadcast domains when
using layer 2 switches. By using VLANs, you can create specific network segments or subnets for groups
of users such as finance and keep their traffic together while separating them from other groups of users
such as sales or marketing. To do this, let’s say you have a 48 port switch and you assign the finance
department a VLAN number of 200. On ports 1-12 you make them members of VLAN 200 and connect
finance users to any of the ports from 1-12 so they will be able to communicate with each other. Then you
can assign other ports to different VLANs for different purposes like in figure 4.6.
But what if you want some computers on one VLAN to be able to talk to computers on a different
VLAN? This is where you will need to implement Inter-VLAN Routing or IVR. IVR is a way to route
traffic from one VLAN to another, but to do this you will need to use switches that have layer 3 (routing)
capabilities (unless you plan on having to use routers on your network, which will just add complexity
and added cost). Another hurdle to VLANs is the fact that you may need to get traffic from these
VLANs to other parts of your network that are separated by switches. Sure you can route traffic from
VLAN to VLAN on the same switch using IVR, but what about from one switch to another? To
accomplish this we need to implement VLAN Trunking Ports. These ports will pass VLAN traffic from
one switch to another through one port. The trunk port in figure 4.7 can carry traffic from VLAN 10, 20,
30, and 40.
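The port-membership idea behind VLANs can be sketched with a simple lookup table. This Python toy (the port numbers and VLAN IDs are the made-up values from the finance example) shows why two hosts on different VLANs can't talk without inter-VLAN routing:

```python
# Hypothetical port-to-VLAN assignments on a 48 port switch
port_vlan = {}
port_vlan.update({port: 200 for port in range(1, 13)})    # ports 1-12: finance, VLAN 200
port_vlan.update({port: 300 for port in range(13, 25)})   # ports 13-24: sales, VLAN 300

def can_talk(port_a: int, port_b: int) -> bool:
    # Without inter-VLAN routing (IVR), only ports in the same VLAN can communicate
    return port_vlan.get(port_a) == port_vlan.get(port_b)

print(can_talk(3, 10))    # True  - both ports are in the finance VLAN
print(can_talk(3, 15))    # False - crossing VLANs requires a layer 3 device
```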
I have been talking about IP addresses quite a bit and have kept the discussion to version 4, but now it’s
time to look to the future and talk about version 6 (or IPv6, as it’s known). As covered earlier in this
chapter, IPv4 addresses are 32 bit binary numbers with about 4.3 billion total addresses, and we are
officially out of public IPv4 addresses, so now we will need to start implementing IPv6 in order to
continue. IPv6 is already being implemented, and though it has not been widely accepted yet, it will be
soon enough. IPv6 addresses are 128 bit addresses written as eight groups of four hexadecimal digits
separated by colons, which makes for a very large address space.
There are ways to shorten how an IPv6 address is written, making it easier to wrap your head around the
number and also to avoid getting something wrong when making a note of an address. One thing you can
do is remove any leading 0s in each group of the address to shorten it a bit. So, our example address of
2605:e000:7ec8:0800:006f:10f6:1394:0370 can then be shortened to
2605:e000:7ec8:800:6f:10f6:1394:370. There are three main types of IPv6 addresses, and they are Link-Local
Address, Global Unicast Address, and Unique-Local Address. Here is the difference between them:
➢ Link-Local Address – This is an auto-configured IPv6 address that always starts with FE80. These
addresses are used only on the local segment and are never routed. They refer to a specific physical link
and are used for addressing on a single link for things like automatic address configuration and
neighbor discovery protocol.
➢ Global Unicast Address – These addresses are globally identifiable and uniquely addressable.
They consist of a 64 bit subnet ID and a 64 bit interface ID.
➢ Unique-Local Address – These addresses are globally unique, but they should be used in local
communication only and always start with FD. Use these for devices that will never communicate
on the Internet.
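Python's ipaddress module understands IPv6 as well, so you can check the shortening rule and the address types from this section directly. A quick sketch using the example address from above:

```python
import ipaddress

addr = ipaddress.ip_address("2605:e000:7ec8:0800:006f:10f6:1394:0370")
print(addr.compressed)    # 2605:e000:7ec8:800:6f:10f6:1394:370 (leading 0s dropped)

print(ipaddress.ip_address("fe80::1").is_link_local)   # True  - link-local
print(ipaddress.ip_address("fd12::1").is_private)      # True  - unique-local range
print(addr.is_global)                                  # True  - global unicast
```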
IPv6 also has some other advantages/features that IPv4 doesn't, making it a worthy replacement after all
the hard work is done.
MAC Addresses
Even though a MAC address and an IP address are not the same thing, I want to mention MAC addresses
here since they still apply to networking, and they are used in communications between computers when
they are on the same subnet. A MAC (Media Access Control) address is a unique identifier that is
assigned to a network device by the manufacturer. It’s a 48 bit hexadecimal number and looks like 12-00-
15-B7-46-92. They are also known as hardware addresses, burned in addresses (BIA), or physical
addresses. Every network adapter, switch port, wireless card, etc. has its own unique MAC address. MAC
addresses are used at the Data Link Layer (layer 2) of the OSI Model, and are used for network
communications between devices on the same subnet where no routing needs to be performed. As I
mentioned before, switches use MAC filter tables (also known as Content Addressable Memory tables) to
store information about known devices and their MAC addresses. Then when some type of network
communication needs to happen, the switch can send the data directly to the device because it knows
exactly which port on the switch that device is connected to. These table entries will expire eventually,
and if the switch loses power then they will have to be repopulated over time once it’s powered back on.
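Because the first half of a MAC address identifies the manufacturer (the OUI) and the second half is device-specific, splitting one apart is trivial. A small Python sketch using the example address from above:

```python
mac = "12-00-15-B7-46-92"
octets = mac.split("-")
oui = "-".join(octets[:3])   # first 24 bits: manufacturer (the OUI)
nic = "-".join(octets[3:])   # last 24 bits: unique per device
print(oui, "/", nic)         # 12-00-15 / B7-46-92
```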
Throughput
Types of Throughput
Historically, throughput has been a measure of the comparative effectiveness of large commercial
computers that run many programs concurrently. Throughput metrics have adapted with the evolution of
computing, using various benchmarks to measure throughput in different use cases.
An early throughput measure was the number of batch jobs completed in a day. More recent measures
assume either a more complicated mixture of work or focus on a particular aspect of computer operation.
Units like trillion floating-point operations per second (teraflops) provide a metric to compare the cost of
raw computing over time or by manufacturer.
In data transmission, network throughput is the amount of data moved successfully from one place to
another in a given time period. Network throughput is typically measured in bits per second (bps), as in
megabits per second (Mbps) or gigabits per second (Gbps).
In data storage, throughput refers to either:
• the amount of data that can be received and written to the storage medium; or
• the amount of data read from media and returned to the requesting system.
Storage throughput is typically measured in bytes per second (Bps). It can also refer to the number of
discrete input or output (I/O) operations responded to in a second, or IOPS.
Throughput applies at higher levels of IT infrastructure as well. IT teams can discuss databases or other
middleware with a term like transactions per second (TPS). Web servers can be discussed in terms of
pageviews per minute.
Throughput also applies to the people and organizations using these systems. Independent of the TPS
rating of its help desk software, for example, a help desk has its own throughput rate that includes the
time staff spend on developing responses to requests.
Several related terms -- throughput, bandwidth and latency -- are sometimes mistakenly interchanged.
Network bandwidth refers to the capacity of the network for data to be moved at one time. Throughput
expresses the amount of data actually moved. Latency refers to the time it takes for data to travel from
one point to another.
Bandwidth is the capacity of a wired or wireless network communications link to transmit the maximum
amount of data from one point to another over a computer network or internet connection in a given
amount of time -- usually one second. Synonymous with capacity, bandwidth describes the data transfer
rate. Bandwidth is not a measure of network speed -- a common misconception.
While bandwidth is traditionally expressed in bits per second, modern network links have greater
capacity, which is typically measured in millions of bits per second (Mbps) or billions of bits per second
(Gbps).
Throughput is necessarily lower than bandwidth because bandwidth represents the maximum capabilities
of a network rather than the actual transfer rate.
Network latency is an expression of how much time it takes for a data packet to get from one designated
point to another. In some environments, latency is measured by sending a packet that is returned to the
sender -- the round-trip time is considered the latency. Ideally, latency is as close to zero as possible.
Contributors to network latency include the following:
• Hardware issues. Routers and other devices are antiquated or experience faults.
• Traffic. If network traffic is heavy, it can result in packet loss.
• Propagation. This is the time it takes for a packet to travel between one place and another at the
speed of light.
• Transmission. The medium itself (whether optical fiber, wireless or something else) introduces
some delay, which varies from one medium to another. The size of the packet introduces delay in
a round trip because a larger packet takes longer to receive and return than a short one. Also,
when a repeater is used to boost signals, this too introduces additional latency.
• Router processing. Each gateway node takes time to examine and possibly change the header in
a packet -- for example, changing the hop count in the time-to-live field.
• Computer and storage delays. Within networks at each end of the journey, a packet might be
subject to storage and hard disk access delays at intermediate devices, such as switches and
bridges. Backbone statistics, however, probably don't consider this kind of latency.
SNMP is an application-layer protocol used to manage and monitor network devices and their functions.
SNMP provides a common language for network devices to relay management information within single-
and multivendor environments in a local area network (LAN) or wide area network (WAN). The most
recent iteration of SNMP, version 3, includes security enhancements that authenticate and encrypt SNMP
messages, as well as protect packets during transit.
WMI is a set of specifications from Microsoft for consolidating the management of devices and
applications in a network from Windows computing systems. WMI provides users with information about
the status of local or remote computer systems. It also supports the following actions:
• setting and changing permissions for authorized users and user groups;
Tcpdump is an open source command-line tool for monitoring, or sniffing, network traffic. Tcpdump
captures and displays packet headers and matches them against a set of criteria. It
understands boolean search operators and can use host names, IP addresses, network names and protocols
as arguments.
Wireshark is another open source tool that analyzes network traffic. It can look at traffic details, such as
transmit time, protocol type, header data, source and destination. Network and security teams often use
Wireshark to assess security incidents and troubleshoot network issues.
Delay
A network delay is the amount of time required for one packet to go from its source to a destination. It is
also called the end-to-end delay, and it comprises the following four types of delays:
• Transmission delay
• Propagation delay
• Queuing delay
• Processing delay
Transmission Delay
The transmission delay is the time from when the first bit of a file reaches a link to when the last bit
reaches the link. The transmission delay is calculated as the size of the file divided by the data rate of the
link.
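A quick worked example of that formula in Python (the file size and link rate are arbitrary example values):

```python
file_bits = 5 * 8_000_000       # a 5 megabyte file is 40,000,000 bits
link_rate = 100_000_000         # a 100 Mbps link moves 100,000,000 bits per second

transmission_delay = file_bits / link_rate
print(transmission_delay)       # 0.4 seconds
```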
Propagation Delay
The propagation delay is the amount of time a bit on the link needs to travel from the source to the
destination, where the speed is dependent on the medium of communication.
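And a worked example for propagation delay, again with illustrative numbers; signals in optical fiber travel at roughly two-thirds the speed of light:

```python
distance = 3_000_000       # link length in meters (about 3,000 km)
speed = 2e8                # approximate signal speed in fiber, in meters per second

propagation_delay = distance / speed
print(propagation_delay)   # 0.015 seconds, i.e. 15 ms one way
```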
Queuing Delay
If a packet arrives at a switch or router and the device is busy, it will not handle that packet immediately.
Instead, the packet has to wait in the buffer of the device, which is called the queuing delay. This delay
depends on how much other traffic is waiting ahead of the packet in the queue.
Processing Delay
The processing delay is the time taken by a processor to process the data packet. This delay depends on
the speed of the processor.
Queuing and processing delays do not have simple formulas to calculate because they depend on
run-time conditions such as congestion and the speed of the processor.
UNIT-II
Wireless Network
MIMO (multiple input, multiple output) is an antenna technology for wireless communications in which
multiple antennas are used at both the source (transmitter) and the destination (receiver). The antennas at
each end of the communications circuit are combined to minimize errors, optimize data speed and
improve the capacity of radio transmissions by enabling data to travel over many signal paths at the same
time.
Creating multiple versions of the same signal provides more opportunities for the data to reach the
receiving antenna without being affected by fading, which increases the signal-to-noise ratio and lowers
the error rate. By boosting the capacity of radio frequency (RF) systems, MIMO creates a more stable connection
and less congestion.
The 3rd Generation Partnership Project (3GPP) added MIMO with Release 8 of the Mobile Broadband
Standard. MIMO technology is used for Wi-Fi networks and cellular fourth-generation (4G) Long-Term
Evolution (LTE) and fifth-generation (5G) technology in a wide range of markets, including law
enforcement, broadcast TV production and government. It also can be used in wireless local area
networks (WLANs) and is supported by all wireless products with 802.11n.
MIMO is often used for high-bandwidth communications where it's important to not have interference
from microwave or RF systems. For example, it's frequently used by first responders who can't always
rely on cell networks during a disaster or power outage or when a cell network is overloaded.
Wi-Fi 6 -- also known as 802.11ax -- raised the bar for wireless connectivity by introducing several new
technologies to help eliminate the limitations associated with adding more Wi-Fi devices to a
network. Wi-Fi 7 is currently in development with an expected release in 2024.
Before MIMO, there were other types of advanced antenna technology with different configurations --
most commonly, multiple input, single output (MISO) and single input, multiple output (SIMO). MIMO
builds on these technologies.
MIMO is one of the most common forms of wireless, and it played a key role in the deployment of LTE
and the wireless broadband technology standard Worldwide Interoperability for Microwave Access
(WiMAX). LTE uses MIMO and orthogonal frequency-division multiplexing (OFDM) to increase speeds up to 100 megabits per second (Mbps) and beyond, roughly double the peak rate of earlier 802.11a Wi-Fi. LTE uses MIMO for transmit diversity, spatial multiplexing (to transmit spatially separated independent channels), and single-user and multiuser systems.
MIMO in LTE enables more reliable transmission of data, while also increasing data rates. It separates the
data into individual streams before transmission. During transmission, the data and reference signals
travel through the air to a receiver that will already be familiar with these signals, which helps the
receiver with channel estimation.
MIMO continues to upgrade and grow through its use in massive new applications, as the wireless
industry works to accommodate more antennas, networks and devices. One of the most prominent
examples of this is the rollout of 5G technology.
These massive 5G MIMO systems use numerous small antennas to boost bandwidth to users -- not just
transmission rates as with third-generation (3G) and 4G cellular technology -- and support more users per
antenna. Unlike 4G MIMO, which uses a frequency division duplex (FDD) system for supporting multiple devices, 5G massive MIMO uses a different setup called time division duplex (TDD), which offers numerous advantages over FDD.
Beam forming is an RF management technique that maximizes the signal power at the receiver by
focusing broadcast data to specific users instead of a large area. With 5G, three-dimensional (3D) beam
forming forms and directs vertical and horizontal beams at the user. These can reach devices even if
they're at the top of a high-rise, for example. The beams prevent interference with other wireless signals
and stay with users as they move throughout a given area.
SU-MIMO vs. MU-MIMO
There are two primary types of MIMO: single-user (SU) and multiuser (MU). In SU-MIMO systems, the data streams can serve only one device on the network at a time, whereas MU-MIMO systems can serve several devices simultaneously; MU-MIMO therefore outperforms SU-MIMO under multi-user load.
Issues arise with SU-MIMO when many users attempt to use the network simultaneously. If one person is
uploading video and another is conferencing, the data stream will choke, causing latency, or delays, to
skyrocket. On the other end of the spectrum, MU-MIMO has the advantage of being able to stream
multiple data sets to multiple devices at a time.
There are various possible configurations for these MIMO systems, with 2x2, 4x4, 6x6 and 8x8 being the
most common. 5G massive systems manipulate these configurations to enable extensive network
capacity.
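To see why these configurations matter, the sketch below applies the idealized spatial-multiplexing rule that an NxN link behaves like min(N_tx, N_rx) parallel Shannon channels; the 20 MHz bandwidth and 20 dB SNR are illustrative assumptions, and real-world gains are lower.

import math

def mimo_capacity_bps(n_tx, n_rx, bandwidth_hz, snr_linear):
    # Idealized capacity: min(n_tx, n_rx) independent spatial streams,
    # each with Shannon capacity B * log2(1 + SNR).
    return min(n_tx, n_rx) * bandwidth_hz * math.log2(1 + snr_linear)

for n in (2, 4, 8):  # the common 2x2, 4x4 and 8x8 configurations
    capacity = mimo_capacity_bps(n, n, 20e6, 100)  # 20 MHz channel, SNR = 100 (20 dB)
    print(f"{n}x{n}: about {capacity / 1e6:.0f} Mbps")  # ~266, ~533, ~1065 Mbps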
MIMO's primary advantages
In its various configurations, MIMO has a number of advantages over MISO and SIMO advanced antenna
technologies:
• MIMO enables stronger signals. It bounces and reflects signals so a user device doesn't need to be
in a clear line of sight.
• Video and other large-scale content can travel over a network in large quantities. This content
travels more quickly because MIMO supports greater throughput.
• Many data streams improve visual and auditory quality. They also decrease the chance of lost
data packets.
• High network capacities. Data travels to more users through the deployment of 5G New
Radio (5G NR). MU-MIMO and 5G NR enable more users to access data at the same
frequency and time rates.
• More coverage. Users can soon expect high-speed data wherever they are, even at the
edge of service areas. Using 3D beam forming, the coverage adapts to the user's
movement and location.
• Better user experience (UX). Watching videos and uploading content is easier and faster.
Massive MIMO and 5G technology transform UX.
5G New Radio (NR) has been designed to fully support Massive MIMO as a native technology from the start. The vastly increased coverage, capacity, and user throughput that Massive MIMO provides have quickly made it a natural and essential component of cellular network deployments.
Introduction 6G Network
A 6G network is defined as a cellular network that operates in untapped radio frequencies and uses
cognitive technologies like AI to enable high-speed, low-latency communication at a pace multiple times
faster than fifth-generation networks. 6G networks are currently under research and development, yet to
be released.
6G is the sixth-generation mobile system standard currently being developed for wireless communications
over cellular data networks in telecommunications. It is the successor, or the next bend in the road, after
5G and will likely be much faster.
The International Telecommunication Union (ITU) standardizes wireless generations roughly every decade. Typically, each generation is marked by a break in the "air interface," which signifies a shift in transmission or coding schemes. As a result, older devices cannot simply be updated to the newer generation, since doing so would generate an excessive amount of "noise" and "spectrum pollution."
Typically, subsequent generations (i.e., the next G) use much more sophisticated digital encoding that
outdated computers cannot achieve. They depend on broader airwave bands that governments did not
previously make accessible. Additionally, they have immensely complex antenna arrays that were
previously impossible to construct. Today, we are in the fifth generation. The first standard for 5G New
Radio (NR) was developed in 2017 and is presently being implemented globally.
According to a report titled “6G The Next Hyper-Connected Experience for All,” the ITU will start work
in 2021 to create a 6G mission statement. The standard will likely finish by 2028 when the first 6G
devices are available. Around 2030, deployment will be close to ubiquitous.
The exact working of 6G is not yet known, as the specification is yet to be fully developed, finalized, and
released by the ITU. However, depending on previous generations of cellular networks, one can expect
several core functionalities. Primarily, 6G will operate by:
• Making use of free spectrum: A significant portion of 6G research focuses on transmitting data
at ultra-high frequencies. Theoretically, 5G can support frequencies up to 100GHz, even though
no frequency over 39GHz is currently utilized. For 6G, engineers are attempting to transfer data
across waves in the hundreds of gigahertz (GHz) or terahertz (THz) ranges. These waves are
minuscule and fragile, yet there remains a massive quantity of unused spectrum that could allow
for astonishing data transfer speeds.
• Improving the efficiency of the free spectrum: Current wireless technologies permit either transmission or reception on a specific frequency at any given time. For two-way communication, users may divide their streams by frequency (Frequency Division Duplex, or FDD) or by defining time periods (Time Division Duplex, or TDD). 6G might boost the efficiency of current spectrum delivery by using sophisticated mathematics to transmit and receive on the same frequency simultaneously.
• Taking advantage of mesh networking: Mesh networking has been a popular subject for
decades, but 5G networks are still primarily based on a hub-and-spoke architecture. Therefore,
end-user devices (phones) link to anchor nodes (cell towers), which connect to a backbone. 6G
might use machines as amplifiers for one another’s data, allowing each device to expand
coverage in addition to using it.
• Integrating with the “new IP:” A research paper from the Finnish 6G Flagship initiative at the
University of Oulu suggests that 6G may use a new variant of the Internet Protocol (IP). It
compares a current IP packet in IPv4 or IPv6 to regular snail mail, complete with a labeled
envelope and text pages. The “new IP” packet would be comparable to a fast-tracked courier
package with navigation and priority information conveyed by a courier service.
6G will rely on the selective use of different frequencies to evaluate absorption and adjust wavelengths
appropriately. This technique will leverage the fact that atoms and molecules produce and absorb
electromagnetic radiation at certain wavelengths, and the emissions and absorption frequencies of any
particular material are identical.
As mentioned earlier, the commercial debut of 6G internet is anticipated to go live around 2030-2035. In addition to the ITU, the Institute of Electrical and Electronics Engineers (IEEE), a non-profit society for technology standardization, supports this timeline in its peer-reviewed paper titled "6G Architecture to Connect the Worlds."
The paper states, “2030 and beyond will offer a unique set of challenges and opportunities of global
relevance and scale: We need an ambitious 6G vision for the communications architecture of the post-
pandemic future to simultaneously enable growth, sustainability as well as full digital inclusion.”
While there have been some preliminary conversations to characterize the technology, 6G research and
development (R&D) efforts began in earnest in 2020.
The 6G Flagship initiative combines studies on 6G technologies across Europe. Japan is committing $482
million to the expansion of 6G in the next few years. The country’s overarching objective is to showcase
innovative wireless and mobile technologies by 2025. In Russia, the R&D institution NIIR and the
Skolkovo Institute of Science and Technology produced a 2021 estimate predicting the availability of 6G
networks by 2035.
American mobile providers are advancing their individual 6G innovation roadmaps. Importantly, AT&T,
Verizon, and T-Mobile are spearheading the Next G Alliance, an industry initiative. In May 2021, the
Next G Alliance initiated a technical work program to develop 6G technology.
Why is 6G necessary?
Given that the ink is yet to fully dry on 5G deployments (and even 4G penetration remains low in remote regions), one may ask why 6G efforts are necessary. Its primary focus is to support the Fourth Industrial Revolution by building a bridge between human, machine, and environmental nodes.
In addition to surpassing 5G, 6G will have a range of unique features to establish next-generation wireless
communication networks for linked devices by using machine learning (ML) and artificial intelligence
(AI). This will also benefit emerging technologies like smart cities, driverless cars, virtual reality, and
augmented reality, in addition to smartphone and mobile network users.
It will combine and correlate different technologies, like deep learning with big data analytics. A
substantial correlation between 6G and high-performance computing (HPC) has been observed. While
some IoT and mobile data may be processed by edge computing resources, the bulk of it will require
much more centralized HPC capacity — making 6G an essential component.
8 Unique Features of 6G
6G networks may coexist with 5G for a while and will be a significant improvement over previous
generations in several ways. This is because 6G will offer the following differentiated features:
Spectrum is an essential component of radio connections. Every new generation of mobile devices
requires a pioneer spectrum to fully leverage the advantages of any further technological advancement.
Reframing the current digital cellular spectrum from legacy technologies to the next generation will also
be a part of this transformation.
For urban outdoor cells, the newest pioneer spectrum slabs for 6G are anticipated to be the mid bands of 7-20 GHz, which would offer larger capacity via extreme Multiple Input Multiple Output (MIMO); the low bands of 460-694 MHz for extensive coverage; and the sub-THz spectrum (between 90 GHz and 300 GHz) for peak data speeds surpassing 100 Gbps.
5G-Advanced will extend 5G beyond data transfer and significantly enhance localization accuracy to
centimeter-level precision. Localization will be pushed to the next level by 6G’s use of a broad spectrum,
including new spectral ranges of up to terahertz.
5G is scheduled to offer a peak data throughput of 20 Gbps and a user-experienced data rate of 100 Mbps.
However, 6G will deliver a maximum data rate of 1 Tbps. Similarly, it will raise the data rate experienced
by the user to 1 Gbps. Therefore, the spectral efficiency of 6G will be more than double that of 5G.
Higher spectral efficiency will offer many users instantaneous access to modern multimedia services.
Network operators must redesign their current infrastructure frameworks to enable higher spectral
efficiency.
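For scale, a 1 Tbps peak is 50 times 5G's 20 Gbps peak, and a 1 Gbps user-experienced rate is 10 times 5G's 100 Mbps.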
The latency of 5G will be lowered to just one millisecond. Many real-time applications’ performance will
be enhanced by this ultra-low latency. However, wireless communication technology of the sixth
generation will decrease user-experienced latency to less than 0.1 milliseconds. Numerous delay-sensitive
real-time applications will have better performance and functionality due to this drastic reduction in
latency.
Additionally, decreased latency will allow emergency response, remote surgical procedures, and
industrial automation. Furthermore, 6G will facilitate the seamless execution of delay-sensitive real-time
applications by making the network 100 times more dependable than 5G networks.
While 5G addresses both human users and Internet of Things (IoT) use cases, 6G will focus more on
M2M connectivity. Today’s 4G networks support around 100,000 devices per square kilometer. 5G is
significantly more advanced, enabling the connectivity of one million devices per square kilometer. With
the advent of 6G networks, the target of 10 million linked devices per square kilometer is within reach.
All 6G networks will include mobile edge computing, although it must be added to current 5G networks.
By the time 6G networks are implemented, edge and core computing will be increasingly assimilated as
elements of a unified communication and compute infrastructure framework.
As previously discussed, 6G networks will require stronger radio frequencies to meet the requirement for
greater bandwidth. However, one of the challenges is that the foundational (chip) technology cannot (yet)
function energy-efficiently in these frequency ranges. Therefore, optimizing power consumption will be a
key focus area for 6G developers. Currently, researchers intend to reduce the energy consumption per bit to below one nanojoule (10⁻⁹ joules), as per the peer-reviewed paper titled "From 5G to 6G Technology: Meets Energy, Internet-of-Things and Machine Learning: A Survey."
The 5G-led Ultra-Reliable Low-Latency Communication (URLLC) service will be further developed and
enhanced in 6G. Reliability might be enhanced through simultaneous transmission, numerous wireless
hops, device-to-device connectivity, and AI/ML. Consequently, 6G will be better than 5G in terms of
network penetration and stability. In addition, 6G will optimize M2M interactions by increasing network
dependability by greater than a hundredfold and decreasing error rates by tenfold compared to previous
generations.
5G represents the first solution designed to replace wired connections in corporate and industrial settings.
It is deploying services-led architecture in the core foundation and cloud-native deployments, which will
be expanded to portions of the radio access network (RAN). It is also anticipated that 6G networks will be
implemented in heterogeneous cloud settings, including a combination of private, public, and hybrid
clouds with a suitable architecture to support this.
6G will allow artificial intelligence (AI) and machine learning (ML) technologies to achieve their full potential. Eventually, AI/ML will be implemented in various network components, network levels, and
network services. From refining beam forming in the radio tier to planning at the cell site with self-
optimizing networks, AI/ML will assist in achieving superior efficiency at reduced computational
complexity.
6G developers, such as Nokia Bell Labs, want to adopt a blank slate approach to AI/ML, allowing AI/ML
to determine the optimal method of communication between two endpoints.
Advantages of 6G Networks
1. Enforces security
Cyber attacks are increasingly focusing on networks of various types. The sheer unpredictability of these
attacks necessitates the implementation of robust security solutions. 6G networks will have safeguards
against threats like jamming. Privacy concerns must be addressed when creating new mixed-reality
environments that include digital representations of actual and virtual objects.
2. Supports personalization
Open RAN is a fresh and evolving technology that 5G utilizes. However, Open RAN will be a mature technology for 6G. The AI-powered RAN will allow operators of mobile networks to provide users with a
bespoke network experience based on real-time user data gathered from multiple sources. The operators
may further exploit real-time user data to provide superior services by personalizing quality of experience
(QoE) and quality of service (QoS). The operators may customize several services using AI.
This degree of bandwidth and responsiveness will enhance 5G application performance. It will also
broaden the spectrum of capabilities to enable new and innovative wireless networking, cognition,
monitoring, and imaging applications. Using orthogonal frequency-division multiple access (OFDMA),
6G access points will be able to serve several customers at the same time.
The network will become a repository of situational data by collecting signals reflected from objects and
detecting their type, shape, relative position, velocity, and possibly material qualities. Such a sensing
method may facilitate the creation of a “mirror” or digital counterpart of the actual environment. When
combined with AI/ML, this information will provide fresh insights into the physical world, thereby
rendering the network more intelligent.
6G will benefit society as a whole since new technological innovations will emerge to support it. This
includes:
• More advanced data centers: 6G networks will generate significantly more data when compared
to 5G networks, and computation will evolve to ultimately encompass edge and core platform
coordination. As a result of these changes, data centers will need to develop.
• Nano-cores that replace traditional processor cores: Nano-cores are anticipated to develop as a
single computing core that combines HPC and AI. It is not necessary for the nano-core to be a
tangible network node. Instead, it might consist of a conceptual aggregation of computing
resources shared by several networks and systems.
Among the many advantages of 6G networks is their vast coverage area, meaning fewer towers are necessary to cover a given amount of space. This is useful when towers must be built in areas with frequent rainfall or abundant trees and vegetation. Additionally, 6G is intended to support more mobile connections than 5G, which implies reduced interference between devices and, as a result, improved service.
The majority of cellular traffic today is produced indoors, yet cellular networks were never built to
properly target indoor coverage. 6G overcomes these obstacles using femtocells (small cell sites) and
Distributed Antenna Systems (DASs).
Abstract
This paper proposes a vehicle-to-vehicle (V2V) communication protocol which makes it possible to
discover and share traffic status information in a novel, efficient and comprehensive way. The protocol is
specifically designed to work in an environment without infrastructure where all the vehicles (nodes) can
talk to each other (ad-hoc network) and collaboratively generate new knowledge relevant to the traffic
conditions existing at that moment in an urban environment. The nature of such a network demands self-
configuration and autonomous behavior. The protocol adheres to these principles and makes it possible
for the nodes to initiate discovery and determine the location of areas where specific traffic conditions
apply. The proposed “Single Ripple” algorithm determines these areas by only involving vehicles with
the desired conditions and their neighbors. The algorithm imposes only a minimal load onto the wireless
network.
Keywords
• Traffic information
• ad-hoc networks
• area discovery
• ubiquitous networking
• vehicle-to-vehicle communication
• V2V
LIN is a vehicle communication protocol that uses a single master to achieve a superior cost-performance ratio; it is used for switch inputs, sensor inputs, and actuator control. FlexRay is a high-speed communication protocol that provides a high degree of flexibility and reliability.
The various types of vehicle connectivity are commonly referred to under the umbrella term of 'Vehicle to Everything' (V2X).
Network Slicing
5G network slicing: How to simplify and monetize through machine learning automation
Network slicing is a network architecture that enables the creation of multiple specialized and virtualized
networks on a common shared infrastructure. Each of these network ‘slices’ can be monetized based on
the specific needs and requirements of different sets of users and enterprise customers. As such, network
slicing is critical to enabling the new revenue opportunities presented by 5G.
The challenges of 5G network slicing
While operators will indeed find many benefits and opportunities from network slicing, it’s important to
note that it’s not without its challenges:
• Slices that operate in a multi-vendor and multi-technology environment present great complexity.
• The slicing ecosystem is dynamic with slice requests often coming in on-demand.
How the Qualcomm Edgewise Suite can help manage network slicing
The Qualcomm Edgewise Suite presents the Radio Access Network (RAN) Slice Controller. This is an
independent and vendor agnostic RAN Network Slice Subnet Management Function (NSSMF) solution,
which interfaces with any Network Slice Management Function (NSMF) and any underlying RAN
vendor through standardized open application programming interfaces (APIs).
Qualcomm Edgewise Suite covers the full slice management lifecycle: from the requested order, to creation and activation (where Qualcomm Edgewise Suite rApps execute the orchestration), to service level agreement (SLA) monitoring, modification for performance optimization, and reporting. Furthermore, when the slice is no longer needed, Qualcomm Edgewise Suite also performs slice deactivation and termination.
When a new slice allocation is requested by the NSMF, the Qualcomm Edgewise Suite RAN Slice Controller receives the request (including the new slice characteristics), updates the slice inventory, and launches the slice creation flow.
This flow incorporates a series of predefined rApps that are activated by the Recipe Orchestrator:
• First, the Feasibility Check rApp evaluates whether there are enough RAN resources to accommodate the new slice based on the slice request profile.
• Upon a successful feasibility check, the Recipe Orchestrator triggers the second rApp.
• This is the Slice Configuration rApp, which then utilizes the slice profile to configure the network with the required slice parameters, based on the operator's business logic.
• Once the slice is successfully configured, the Resource Allocator rApp is then executed (a simplified sketch of this flow follows below).
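A minimal sketch of that flow is shown below. The class and field names (required_prbs, free_prbs, and so on) are hypothetical stand-ins, since the actual rApp interfaces are not documented here.

class RecipeOrchestrator:
    """Toy model of the slice-creation flow: feasibility check,
    configuration, resource allocation, then activation."""

    def create_slice(self, slice_profile, ran_resources):
        # Feasibility Check rApp: enough free RAN resources for this profile?
        if slice_profile["required_prbs"] > ran_resources["free_prbs"]:
            return {"status": "rejected"}
        # Slice Configuration rApp: derive slice parameters from the profile.
        config = {"slice_id": slice_profile["slice_id"],
                  "prbs": slice_profile["required_prbs"]}
        # Resource Allocator rApp: reserve physical resource blocks.
        ran_resources["free_prbs"] -= slice_profile["required_prbs"]
        # Activate the slice and report back to the NSMF.
        return {"status": "active", "config": config}

orchestrator = RecipeOrchestrator()
print(orchestrator.create_slice({"slice_id": 1, "required_prbs": 50},
                                {"free_prbs": 200}))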
With advanced machine learning, the Qualcomm Edgewise Suite RAN Slice Controller accurately
translates slice throughput requirements into physical resource blocks and provisions them into the
network. Once the slice configuration flow is complete, the Qualcomm Edgewise Suite Recipe
Orchestrator activates the slice, and reports back to the NSMF.
At this stage, the Recipe Orchestrator allocates a monitoring instance that continuously tracks per-slice performance while regulating the SLA commitments specific to each customer. This
way, operators can be sure that no network element is overburdened with traffic, and subscribers are
enjoying the services they want at the performance levels they expect.
To achieve seamless, efficient and effective RAN slicing, enterprises can rely on Qualcomm Edgewise
Suite to deliver these key capabilities:
• Flexible Slice Designer that meets band, layer, macro/micro/indoor configuration requirements
and more.
• Predictive slice admission that is driven by artificial intelligence and machine learning.
• Slice reconfiguration for existing slices in cases where new slices cannot be added.
• Root Cause Analyzer that enables the RAN NSSMF to identify the types of resolution apps that help regulate the customer's SLA requirements.
• Slice optimization for a multi-slice network with many different slice profile attributes, improving the performance of slices in violation while maintaining existing slices in the same cluster.
In what instances can the real world benefit from Qualcomm Edgewise Suite?
Residential fixed wireless service: In suburban and rural areas that can’t be served by fiber, fixed
wireless access (FWA) is great for delivering broadband internet access. However, for residential areas,
FWA can place a great burden on the cellular network. With the Qualcomm Edgewise Suite RAN Slice Controller, operators can alleviate that burden with simplified network slicing that distributes the load and puts high-quality service within reach of the customers who need it.
The mobile edge: The proliferation of autonomous vehicles and Internet of Things-powered devices,
such as drones, has accelerated the need for delivering broadband services with continuous connectivity
and minimized latency. But enabling the mobile edge with low latency can be very challenging. As such,
edge computing is another prime case for network slicing with Qualcomm Edgewise Suite, which helps
operators to bring the future of RAN technology today.
With a powerful combination of capabilities, the Qualcomm Edgewise Suite RAN Slice Controller simplifies 5G network slicing, while enabling operators to create slices for any RAN vendor and to
connect to any slicing orchestrator, so they can accelerate time-to-market and capture the 5G
monetization opportunity.
C-RAN
A C-RAN is an evolution of the current wireless communication system. It uses the latest common public radio interface standard, coarse or dense wavelength-division multiplexing technology, and millimeter wave (mmWave) transmission for long-distance signals. The C in C-RAN can stand for centralized or collaborative.
A RAN establishes a connection or communication between base stations and end users. In the C-RAN
architecture, baseband units (BBUs) relocate from individual base stations to a centralized control and
processing station often referred to as a BBU hotel.
The BBU hotel connects to the network with high-speed optical fiber and maximizes the distance between
cells. This type of cloud computing environment operates on open hardware and network interface
cards that dynamically handle fiber links and interconnections within the station.
C-RANs are significant in the future progression of wireless technology, such as 5G and the internet of
things. With easier deployment and scaling capability, the transition from Long-Term Evolution to 5G
networks will rely heavily on C-RAN development. It also provides a cost-effective, manageable approach to supporting more users.
Components of C-RAN
1. A baseband unit (BBU) pool. The centralized control and processing station (the BBU hotel described above) where baseband processing for many cells is consolidated.
2. A remote radio unit (RRU) network. Also known as a remote radio head, an RRU is a traditional network element that connects wireless devices to access points.
3. A fronthaul or transport network. Also known as a mobile switching center, the fronthaul or transport network is the connection layer between a BBU and a set of RRUs, using optical fiber, cellular or mmWave communication.
Advantages of C-RAN
• uses cloud computing open platforms and real-time virtualization to dynamically allocate shared
resources between BBUs.
Spectrum Management
The invisible radio frequencies that wireless signals travel over are referred to as spectrum.
While radio spectrum or wireless spectrum refers to the entire range of wireless communication frequency bands, from 1 Hz to 3,000 GHz (3 THz), spectrum management is the process of managing the use of radio frequencies in order to encourage efficient utilization and obtain a net social benefit.
Importance of spectrum management for wireless network
The spectrum is unique because it is both non-exhaustible and non-storable as an economic resource.
Technological advancements such as wireless communication in IoT, high spectrum 5G, etc., are
becoming the primary way of connecting businesses and families to phone, data, and media services.
Therefore, effective RF spectrum management can greatly impact a country’s prosperity.
Innovation potential
The examples in the preceding section demonstrate how new technologies make use of frequency bands.
It’s worth noting that these technologies also provide technical breakthroughs that make the existing
spectrum more efficient.
Spectrum management is crucial for governments to make the most use of a limited public resource. With
an increasing demand for spectrum, competition for certain frequency bands will intensify, making
efficient use of that spectrum even more vital.
• Enable the development and deployment of new technologies within flexible frameworks
Emerging countries are facing bigger communication and information policy and regulatory challenges. It is becoming clear that wireless spectrum services now surpass landline connectivity, which puts the spotlight on present spectrum management methods.
Effective spectrum management policy should promote the rollout of services and the removal of
obstacles to entry and innovation in a globalizing society with rapid technological innovation and
increasing demand for radio spectrum frequencies.
The wireless spectrum is allocated for specific uses, and spectrum managers develop specific technical
and service rules to govern those allocations. Spectrum management consists of four primary areas of
work:
Spectrum planning: Spectrum planning entails allocating specific portions of the frequency spectrum to
certain users. This is done according to international agreements, technical characteristics and possible
uses of various parts of the spectrum. National goals and policies are also taken into consideration.
Spectrum authorization: This entails authorizing access to the spectrum resource for various types of radio communication equipment under certain conditions, as well as certifying radio operators.
Spectrum engineering: The establishment of electromagnetic compatibility standards for equipment that
emits or is vulnerable to radio waves is known as spectrum engineering.
Spectrum monitoring: Spectrum monitoring and compliance include keeping track of how the radio
spectrum is being used and putting in place controls to prevent unlawful use.
As a starting point, economically efficient use of wireless network spectrum involves maximizing the value of the outputs created from the available spectrum, including the value of public outputs provided by the government or other public authorities.
At its most fundamental level, technically efficient spectrum use means making the most of all available spectrum frequencies. Occupancy and data rate are two indicators of technical efficiency. Occupancy gauges efficiency over time, by measuring how consistent or heavy spectrum utilization is; data rate measures how much information can be transferred for a given amount of spectrum capacity.
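As a toy illustration of the occupancy metric, the snippet below computes the fraction of sensing intervals in which a channel was observed busy; the sample data is invented.

# 1 = channel sensed busy, 0 = idle, over ten equal observation intervals
samples = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
occupancy = sum(samples) / len(samples)
print(f"channel occupancy: {occupancy:.0%}")  # prints 50%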
The following are the key trends driving the future usage of spectrum in wireless communication,
outlining some of the primary themes:
In several countries, there are now more bands that can be used without a spectrum licence beyond the usual Wi-Fi license-exempt channels: the 24 GHz band, 60 GHz (V-band), and 71 GHz onwards (E-band). Small operators and community networks could use these frequencies to provide "fiber-like" access. The fast adoption of license-free wireless spectrum in the form of Wi-Fi demonstrates the power of frictionless innovation and the unmet demand for affordable internet access.
• Specific regulators have expanded the usage of frequency spectrum in wireless communication, such as the 11 GHz band for fixed point-to-point (PtP) backhaul links. This is due to reduced detrimental interference from antennas that focus wireless communication along relatively narrow beams/paths.
• An analysis of the economic cost of the underutilized spectrum and measures to promote its usage
might aid in making a case for this. This could be a particularly effective technique for
preventing the digital gap from widening due to the future 5G spectrum and wireless
communication in IoT assignments.
FAQs:
The majority of countries regard the radio frequency spectrum as a state-owned asset. The United Nations
International Telecommunication Union (ITU) constitution fully recognizes every country’s sovereign
authority to govern its telecommunication.
In the United States, the Federal Communications Commission (FCC) is in charge of non-governmental applications of the spectrum, while the National Telecommunications and Information Administration (NTIA) manages governmental applications.
The spectrum frequencies are broken down into numerous frequency bands, each with its own set of uses.
The wireless frequency bands 300 kHz to 535 kHz, for example, are designated for aeronautical and
maritime communications. This is referred to as “allocation.”
The spectrum frequencies for telecommunications start at 800 MHz and run all the way up to 2300 MHz.
After that, there are the unlicensed bands, which are used for technology like Wi-Fi and Bluetooth. Wi-Fi
used to operate at 2.4GHz (2400MHz), but it has begun to migrate to higher frequency ranges.
Telecom spectrum management is the process of regulating radio frequency use in order to promote
efficient use of spectrum wireless service, allowing us to conduct phone calls, use social media etc., on
our phones.
In India, the National Radio Regulatory Authority, a branch of the Ministry of Communications, is responsible for frequency spectrum management.
As new technologies enable a wider range of applications to utilize a wider range of frequency bands,
spectrum management maximizes the use of finite spectrum resources. It fulfils the potential consumer
benefits of these new technologies and broader societal and economic purposes. Effective management of conflicting spectrum demands is required, with the overall goal of expanding access to connectivity.
Spectrum licenses are issued by the government and are owned by corporations in the telecommunications industry.
In the US, according to a Wall Street Journal article, AT&T, Dish, and T-Mobile purchased 5G spectrum licenses in the midrange 3.45 GHz to 3.55 GHz band in a Federal Communications Commission (FCC) auction. Verizon Wireless paid $3.6 billion for spectrum licensing.
In India, Airtel and Vodafone Idea have worked with Ericsson and Nokia to test 5G spectrum
technologies and use cases, while Jio is testing its own 5G RAN and core.
Cognitive Radio
What is cognitive radio in mobile communication system?
Cognitive radio (CR) is a form of wireless communication in which a transceiver can intelligently detect
which communication channels are in use and which are not.
The cognitive radio network (CRN), an instrumental part of next-generation wireless communication systems, depends mainly on spectrum sensing to function properly: sensing is what enables more efficient and accurate utilization of the radio spectrum.
Functions
The main functions of cognitive radios are:
Power control: Power control is usually used in spectrum-sharing CR systems to maximize the capacity of secondary users, subject to interference power constraints that protect the primary users.
Spectrum sensing: Spectrum sensing (SS) lets a cognitive radio at a base transceiver station find white space, i.e., licensed spectrum owned by primary users (PUs) that is momentarily idle, for transmission over a wireless network without causing channel interference.
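One common spectrum sensing technique is energy detection; the sketch below is a minimal version with an assumed noise model and threshold, not a production detector.

import math
import random

def channel_busy(samples, threshold):
    # Energy detector: compare average squared magnitude to a threshold.
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

noise_only = [random.gauss(0.0, 1.0) for _ in range(1000)]  # idle channel
with_signal = [x + 2.0 * math.sin(0.3 * i)                  # primary user transmitting
               for i, x in enumerate(noise_only)]

THRESHOLD = 1.5  # assumed; in practice set from the measured noise floor
print("idle channel busy?", channel_busy(noise_only, THRESHOLD))       # expect False
print("occupied channel busy?", channel_busy(with_signal, THRESHOLD))  # expect True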
Air Interface
The air interface, or access mode, is the communication link between the two stations in mobile or wireless communication. The air interface involves both the physical and data link layers (layers 1 and 2) of the OSI model for a connection.
Also called a "radio interface," the air interface defines the frequency, channel bandwidth, and modulation scheme. For example, TDMA and CDMA are the access schemes used in GSM and CDMA cellular networks respectively, while OFDMA is used for LTE and was also used for WiMAX.
One of the main air interfaces for 3G systems is wideband CDMA (WCDMA). It is the air interface used with the UMTS mobile communication standard, and it allows communication between the UE and the Node B.
The 5G air interface is a key part of the 5G system which will facilitate Enhanced Mobile Broadband and
Ultra Reliable Low Latency Communication, as well as the support of Massive MTC (Machine Type
Communications).
Channel Access
In telecommunications and computer networks, a channel access method or multiple access method
allows more than two terminals connected to the same transmission medium to transmit over it and to
share its capacity.
Channel access allows multiple users in the same system to share a given bandwidth allocation. However,
in some cases multiple systems will share the same bandwidth without any coordination, and this requires
interoperability of their different access techniques and communication designs.
What are the methods of channel modeling?
Channel modeling can be achieved through different techniques such as extensive measurements,
machine learning and ray tracing. Of particular significance to THz communication is Ray tracing [40].
A new 5G channel model should satisfy several requirements: 1) Compatibility: the model at higher frequency bands, e.g., above 6 GHz, should maintain compatibility with the model at lower frequency bands, e.g., below 6 GHz. 2) Broad bandwidths: the model should have the ability to support large channel bandwidths, e.g., 500 MHz to 4 GHz.
There are two main types of channels, viz. the time-invariant channel and the time-varying channel, depending on the motion between the transmitter and receiver. If the transmitter and receiver are not moving and are fixed at one location, the channel is referred to as a time-invariant channel.
UNIT –IV
1. Data Plane
2. Control Plane
Data plane: All the activities involving as well as resulting from data packets sent by the end-user
belong to this plane. This includes:
• Forwarding of packets.
• Segmentation and reassembly of data.
• Replication of packets for multicasting.
Control plane: All activities necessary to perform data plane activities but do not involve end-user
data packets belong to this plane. In other words, this is the brain of the network. The activities of the
control plane include:
• Making routing tables.
• Setting packet handling policies.
• Better Network Connectivity: SDN provides much better network connectivity for sales, services, and internal communications. SDN also helps in faster data sharing.
• Better Security: Software-defined network provides better visibility throughout the network.
Operators can create separate zones for devices that require different levels of security. SDN
networks give more freedom to operators.
• Better Control with High Speed: Software-defined networking provides better speed than other
networking types by applying an open standard software-based controller.
In short, it can be said that SDN acts as a "bigger umbrella," or hub, under which the other networking technologies sit and merge with a common platform, bringing out the best outcome by decreasing the traffic rate and increasing the efficiency of data flow.
• Enterprises use SDN, the most widely used method for application deployment, to deploy
applications faster while lowering overall deployment and operating costs. SDN allows IT
administrators to manage and provision network services from a single location.
• Software-defined cloud networking uses white-box systems. Cloud providers often use generic hardware so that the cloud data center can be changed and CAPEX and OPEX costs saved.
1. SDN Applications: SDN applications relay requests to the network through the SDN Controller using APIs.
2. SDN controller: SDN Controller collects network information from hardware and sends this
information to applications.
3. SDN networking devices: SDN Network devices help in forwarding and data processing tasks.
SDN Architecture
In a traditional network, each switch has its own data plane as well as its own control plane. The control planes of the various switches exchange topology information and hence construct a forwarding table that decides where an incoming data packet has to be forwarded via the data plane. Software-defined
networking (SDN) is an approach via which we take the control plane away from the switch and
assign it to a centralized unit called the SDN controller. Hence, a network administrator can shape
traffic via a centralized console without having to touch the individual switches. The data plane still
resides in the switch and when a packet enters a switch, its forwarding activity is decided based on the
entries of flow tables, which are pre-assigned by the controller. A flow table consists of match fields
(like input port number and packet header) and instructions. The packet is first matched against the
match fields of the flow table entries. Then the instructions of the corresponding flow entry are
executed. The instructions can be forwarding the packet via one or multiple ports, dropping the
packet, or adding headers to the packet. If a packet doesn’t find a corresponding match in the flow
table, the switch queries the controller which sends a new flow entry to the switch. The switch
forwards or drops the packet based on this flow entry.
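The match-and-instruct behavior described above can be sketched in a few lines of Python; the field names and actions here are simplified placeholders rather than the actual OpenFlow message format.

# A toy flow table: each entry has match fields and an instruction.
flow_table = [
    {"match": {"in_port": 1, "dst_ip": "10.0.0.2"}, "action": ("forward", 2)},
    {"match": {"in_port": 2, "dst_ip": "10.0.0.1"}, "action": ("forward", 1)},
]

def handle_packet(packet):
    for entry in flow_table:
        # A packet matches if every match field agrees with the packet header.
        if all(packet.get(field) == value
               for field, value in entry["match"].items()):
            return entry["action"]
    # Table miss: query the controller, which installs a new flow entry.
    return ("send_to_controller", None)

print(handle_packet({"in_port": 1, "dst_ip": "10.0.0.2"}))  # ('forward', 2)
print(handle_packet({"in_port": 1, "dst_ip": "10.9.9.9"}))  # table miss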
• Application layer: It contains the typical network applications like intrusion detection, firewall,
and load balancing
• Control layer: It consists of the SDN controller which acts as the brain of the network. It also
allows hardware abstraction to the applications written on top of it.
• Infrastructure layer: This consists of physical switches which form the data plane and carries
out the actual movement of data packets.
The layers communicate via a set of interfaces called the northbound APIs (between the application and control layers) and the southbound APIs (between the control and infrastructure layers).
1. Open SDN
2. SDN via APIs
3. SDN via Hypervisor-based Overlay Network
4. Hybrid SDN
1. Open SDN: Open SDN is implemented using the Open Flow switch. It is a straightforward
implementation of SDN. In Open SDN, the controller communicates with the switches using
south-bound API with the help of Open Flow protocol.
2. SDN via APIs: In SDN via API, the functions in remote devices like switches are invoked using
conventional methods like SNMP or CLI or through newer methods like Rest API. Here, the devices are
provided with control points enabling the controller to manipulate the remote devices using APIs.
3. SDN via Hypervisor-based Overlay Network: In SDN via the hypervisor, the configuration of
physical devices is unchanged. Instead, Hypervisor based overlay networks are created over the physical
network. Only the devices at the edge of the physical network are connected to the virtualized networks,
thereby concealing the information of other devices in the physical network.
4. Hybrid SDN: Hybrid Networking is a combination of Traditional Networking with software-defined
networking in one network to support different types of functions on a network.
Difference between SDN and Traditional Networking
Software-Defined Networking | Traditional Networking
A software-defined network is a virtual networking approach. | A traditional network is the old conventional networking approach.
A software-defined network is an open interface. | A traditional network is a closed interface.
In a software-defined network, the data plane and control plane are decoupled by software. | In a traditional network, the data plane and control plane are mounted on the same plane.
Advantages of SDN
• The network is programmable and hence can easily be modified via the controller rather than
individual switches.
• Switch hardware becomes cheaper since each switch only needs a data plane.
• Hardware is abstracted, hence applications can be written on top of the controller independent of
the switch vendor.
• Provides better security since the controller can monitor traffic and deploy security policies. For
example, if the controller detects suspicious activity in network traffic, it can reroute or drop the
packets.
Disadvantages of SDN
• The central dependency of the network means a single point of failure, i.e. if the controller gets
corrupted, the entire network will be affected.
• The use of SDN on a large scale is not yet properly defined and explored.
Because the control plane is software-based, SDN is much more flexible than traditional networking. It allows administrators to control the network, change configuration settings, provision resources, and increase network capacity, all from a centralized user interface, without adding more hardware.
SDN is a highly flexible, agile way to adapt to growing networking requirements and enable
automation and agility. By separating the network control and forwarding planes, SDN makes network
control a programmable entity and abstracts the infrastructure underneath.
A complete SDN solution should provide an end-to-end model incorporating data centers and all existing devices, flexibility for open standards, no need to replace existing infrastructure, and a network partner with an ecosystem that supports a broad range of applications.
Organization | Mission | SDN- and NFV-Related Effort
Internet Engineering Task Force (IETF) | The Internet's technical standards body. Produces RFCs and Internet standards. | Interface to routing systems (I2RS); service function chaining
Software Defined Networking Research Group (SDNRG) | Research group within the IRTF. | SDN architecture
Metro Ethernet Forum (MEF) | Industry consortium that promotes the use of Ethernet for metropolitan and wide-area applications. | Defining APIs for service orchestration over SDN and NFV
Open Platform for NFV (OPNFV) | An open source project focused on accelerating the evolution of NFV. | NFV infrastructure
Standards
Open standard
A standard that is: developed on the basis of an open decision-making procedure available to all
interested parties, is available for implementation to all on a royalty-free basis, and is intended to promote
interoperability among products from multiple vendors.
Standards-Developing Organizations
The Internet Society, ITU-T, and ETSI are all making key contributions to the standardization of
SDN and NFV.
Internet Society
A standards-developing organization (SDO) is an official national, regional, or international standards body that develops standards and coordinates the standards activities of a specific country, region, or the world. Some SDOs facilitate the development of standards through support of technical committee activities, and some may be directly involved in standards development.
The Internet Engineering Task Force (IETF) has working groups developing SDN-related specifications
in the following areas:
• Interface to routing systems (I2RS): Develop capabilities to interact with routers and routing
protocols to apply routing policies.
• Service function chaining: Develop architecture and capabilities for controllers to direct subsets
of traffic across the network in such a way that each virtual service platform sees only the traffic
it must work with.
The Internet Research Task Force (IRTF) has published Software-Defined Networking (SDN): Layers and
Architecture Terminology (RFC 7426, January 2015). The document provides a concise reference that
reflects current approaches regarding the SDN layer architecture. The Request for Comments (RFC) also provides a useful discussion of the southbound API and describes some specific APIs, such as for I2RS.
Request for Comments (RFC): A document in the archival series that is the official channel for publications of the Internet Society, including IETF and IRTF publications. An RFC may be informational, best practice, draft standard, or an official Internet standard.
IRTF also sponsors the Software Defined Networking Research Group (SDNRG). This group investigates
SDN from various perspectives with the goal of identifying the approaches that can be defined, deployed,
and used in the near term and identifying future research challenges.
ITU-T
ITU-T has established a Joint Coordination Activity on Software-Defined Networking (JCA-SDN) and
begun work on developing SDN-related standards.
• SG 11 (Signaling requirements, protocols, and test specifications): This group is studying the
framework for SDN signaling and how to apply SDN technologies for IPv6.
• SG 15 (Transport, access, and home): This group looks at optical transport networks, access
networks, and home networks. The group is investigating transport aspects of SDN, aligned with
the Open Network Foundation’s SDN architecture.
ETSI is recognized by the European Union as a European Standards Organization. However, this not-for-
profit SDO has member organizations worldwide and its standards have international impact.
ETSI has taken the lead role in defining standards for NFV. ETSI’s Network Functions Virtualisation
(NFV) Industry Specification Group (ISG) began work in January 2013 and produced a first set of
specifications in January 2015. The 11 specifications cover NFV architecture, infrastructure, service quality metrics, management and orchestration, resiliency requirements, and security guidance.
Industry Consortia
Consortia for open standards began to appear in the late 1980s. There was a growing feeling within
private-sector multinational companies that the SDOs acted too slowly to provide useful standards in the
fast-paced world of technology. Recently, a number of consortia have become involved in the
development of SDN and NFV standards. We mention here three of the most significant efforts.
By far the most important consortium involved in SDN standardization is the Open Networking
Foundation (ONF). ONF is an industry consortium dedicated to the promotion and adoption of SDN
through open standards development. Its most important contribution to date is the OpenFlow protocol and API. The OpenFlow protocol is the first standard interface specifically designed for SDN and is already being deployed in a variety of networks and networking products, both hardware based and software based. The standard enables networks to evolve by giving logically centralized control software the power to modify the behavior of network devices through a well-defined "forwarding instruction set."
Consortium
A group of independent organizations joined by common interests. In the area of standards development,
a consortium typically consists of individual corporations and trade groups concerned with a specific area
of technology.
The Open Data Center Alliance (ODCA) is a consortium of leading global IT organizations dedicated to
accelerating adoption of interoperable solutions and services for cloud computing. Through the
development of usage models for SDN and NFV, ODCA is defining requirements for SDN and NFV
cloud deployment.
The Alliance for Telecommunications Industry Solutions (ATIS) is a membership organization that
provides the tools necessary for the industry to identify standards, guidelines, and operating procedures
that make the interoperability of existing and emerging telecommunications products and services
possible. Although ATIS is accredited by ANSI, it is best viewed as a consortium rather than an SDO. So
far, ATIS has issued a document that identifies operational issues and opportunities associated with
increasing programmability of the infrastructure using SDN and NFV.
There are a number of other organizations that are not specifically created by industry members and are
not official bodies such as SDOs. Generally, these organizations are user created and driven and have a
particular focus, always with the goal of developing open standards or open source software. A number of
such groups have become active in SDN and NFV standardization. This section lists three of the most
significant efforts.
OpenDaylight
OpenDaylight is an open source software activity under the auspices of the Linux Foundation. Its member companies provide resources to develop an SDN controller for a wide range of applications. Although the core membership consists of companies, individual developers and users can also participate, so OpenDaylight is more in the nature of an open development initiative than a consortium. ODL also supports network programmability via southbound protocols, a set of programmable network services, a collection of northbound APIs, and a set of applications.
OpenDaylight is composed of about 30 projects and releases their outputs simultaneously. After its first release, Hydrogen, in February 2014, it successfully delivered the second, Helium, at the end of September 2014.
OpenStack
OpenStack is an open source software project that aims to produce an open source cloud operating system. It provides multitenant Infrastructure as a Service (IaaS) and aims to meet the needs of public and private clouds, regardless of size, by being simple to implement and massively scalable. SDN technology is expected to contribute to its networking part and to make the cloud operating system more efficient, flexible, and reliable.
OpenStack is composed of a number of projects. One of them, Neutron, is dedicated to networking. It provides Network as a Service (NaaS) to other OpenStack services. Almost all SDN controllers provide plug-ins for Neutron, through which OpenStack services can build rich networking topologies and configure advanced network policies in the cloud.
The Data Plane is the network architecture layer that physically handles the traffic based on the
configurations supplied from the Control Plane. The Management Plane takes care of the wider network
configuration, monitoring and management processes across all layers of the network stack.
The data plane enables data transfer to and from clients, handling multiple conversations through multiple
protocols, and manages conversations with remote peers. Data plane traffic travels through routers, rather
than to or from them.
What Is a Control Plane?
The Control Plane is a crucial component of a network, tasked with making decisions on how data should
be managed, routed, and processed. It acts as a supervisor of data, coordinating communication between
different components and collecting data from the Data Plane.
While the Control Plane supervises and directs, the Data Plane is responsible for the actual movement of
data from one system to another. It is the workhorse that delivers data to end users from systems and vice
versa.
Data planes can be built on many kinds of networks, including:
• Ethernet networks
• Wi-Fi networks
• Cellular networks
• Satellite communications
Data planes can also include virtualized networks, like those created using virtual private networks
(VPNs) or software-defined networks (SDNs). Additionally, data planes can include dedicated networks,
like the Internet of Things (IoT) or industrial control systems.
Data planes allow organizations to quickly and securely transfer data between systems. For example, a
data plane can enable the transfer of data between a cloud-based application and a local system. This
functionality can be beneficial for organizations that need to access data from multiple systems or that
need to quickly transfer large amounts of data.
Organizations can keep data secure through encryption, dedicated networks, and access monitoring,
preventing unauthorized access to data.
Data Plane vs. Control Plane: What Are the Key Differences?
The main differences between control and data planes are their purpose and how they communicate
between different systems. As described above, the control plane decides how data is managed, routed,
and processed, while the data plane carries out those decisions by actually moving the packets.
Along with doing different jobs, control planes and data planes run in different places. The control
plane typically runs centrally, for example in a controller or in the cloud, while the data plane runs on
the devices that actually process and forward traffic.
They also use different mechanisms to do their jobs. A control plane uses protocols to communicate between
different systems, most commonly routing protocols such as BGP, OSPF, and IS-IS, or network management
protocols such as SNMP. These protocols enable the control plane to make decisions on how data should be
managed, routed, and processed.
Data planes use dedicated networks to communicate between different systems. Examples of dedicated
networks used in data planes include Ethernet and Wi-Fi networks, cellular networks, satellite
communications, virtualized networks, and dedicated networks used in industrial control systems or IoT.
These networks enable the data plane to deliver data to end users from systems and vice versa.
While both the Control Plane and Data Plane are integral to network management, they perform distinct
roles. The table below outlines some of the key differences between the two:
| Control Plane | Data Plane |
| --- | --- |
| Determines how data should be managed, routed, and processed | Responsible for moving packets from source to destination |
| Builds and maintains the IP routing table | Forwards actual IP packets based on the Control Plane's logic |
| Processes packets addressed to the router to update the routing table | Forwards transit packets based on the logic built by the Control Plane |
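To make the division of labor concrete, here is a toy Python sketch (not any vendor's implementation): the control plane fills a routing table, and the data plane only consults it when forwarding.

```python
import ipaddress

# Control plane: decide where each destination prefix should go.
# In a real router these entries come from protocols such as OSPF or BGP.
routing_table = {
    ipaddress.ip_network("10.0.0.0/24"): "eth0",
    ipaddress.ip_network("10.0.1.0/24"): "eth1",
}

def forward(dst_ip: str) -> str:
    """Data plane: look the destination up and forward; no decisions are made here."""
    addr = ipaddress.ip_address(dst_ip)
    for prefix, port in routing_table.items():
        if addr in prefix:
            return port
    return "drop"  # no route was installed by the control plane

print(forward("10.0.1.7"))    # -> eth1
print(forward("172.16.0.1"))  # -> drop
```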
The SDN architecture consists of different components. Here, we will look at each of these SDN architecture
components and their duties one by one. What are these SDN components? The basic SDN
components are:
SDN Architecture: Network Devices (Data Plane)
The Data Plane consists of various network devices, both physical and virtual. The main duty of the data plane
is forwarding. In traditional networks, both the control and data planes resided in the same device,
but with SDN, network devices contain only the data plane, so their main role is simply
forwarding data. This provides a very efficient forwarding mechanism.
SDN Architecture: SDN Controller (Control Plane)
The SDN Controller is the center of the SDN architecture and the most important of the SDN
architecture components; in other words, it is the brain of the system. All
data plane devices are controlled via the SDN Controller, which also controls the applications at the Application
Layer. The SDN Controller communicates with and controls these upper and lower layers through APIs
exposed at its interfaces.
A northbound interface is an application programming interface (API) or protocol that allows a lower-
level network component to communicate with a higher-level or more central component, while --
conversely -- a southbound interface allows a higher-level component to send commands to lower-level
network components. Northbound and southbound interfaces are most associated with software-defined
networking (SDN), but can also be used in any system that uses a hub-and-spoke or controller-and-nodes
architecture.
North and south in this context can be thought of as directions on a map: north is at the top of the diagram
and south at the bottom, and the higher-level elements control the lower-level ones. Some designs also
have east-west interfaces for communication among peers.
It is important to note that in SDN, the northbound and southbound interfaces are for networking control
commands and APIs. The data or traffic carried by the network stays on the data layer and does not
traverse the northbound and southbound interfaces.
The northbound interface in SDN is the communication between the highest application layer and the
SDN controller at the middle control layer. The application layer consists of network
orchestration services, network design software, operator software, or third-party applications that
make decisions about the overall structure of the network.
In SDN, the operator or orchestration software does not directly issue commands or configurations to the
network nodes. Instead, the operator uses the application layer to issue commands to the control layer
over the northbound interface.
The northbound interface is often a Representational State Transfer API, or REST API, exposed by the
SDN controller.
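As a hedged example, the sketch below queries a controller's northbound REST interface with Python's requests library. The URL path, port 8181, and admin/admin credentials follow OpenDaylight's classic RESTCONF defaults; your deployment may differ.

```python
import requests

BASE = "http://controller.example.com:8181/restconf"  # assumed controller address

# Ask the controller for its operational view of the network topology.
resp = requests.get(
    f"{BASE}/operational/network-topology:network-topology",
    auth=("admin", "admin"),                 # ODL's default credentials
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the controller's current view of the network
```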
The southbound interface in SDN is the communication between the SDN controller at the middle control
layer and the lower networking elements at the data layer. The data layer consists of the physical or
virtual network switches and ports.
The SDN controller takes the desired state of the network and translates it into specific commands and
configurations that are then pushed to the network devices over the southbound interface.
Popular southbound interface standards include the Simple Network Management Protocol (SNMP),
OpenFlow, and Open Shortest Path First (OSPF).
Examples of northbound and southbound interfaces
An example use of the northbound and southbound interfaces involves a network engineer using network
orchestration software to define a specific data route. The orchestration software sends the instructions to
the SDN controller over the northbound interface. The SDN controller then sends the specific
configurations to the physical switches over the southbound interface.
A more detailed example is Microsoft Azure software load balancing. The network controller sits at the
center layer and runs the software load balancer (SLB). The network operator sits at the application layer
and uses Windows Admin Center to set the desired state. Windows Admin Center uses PowerShell as the
northbound interface to send commands to the SLB. The SLB then sends Border Gateway Protocol
(BGP) updates over the southbound interface to the virtual routers on the data layer. If the SLB finds an
error in a router, it can automatically send new configurations to the other routers through the
southbound BGP interface, and then send a notification through the northbound interface to alert the
operator to the issue.
The concept of Software Defined Networking (SDN) is about taking routing control away from the
individual network elements and putting it in the hands of a centralized control layer, for example
an orchestration and SDN control system such as Nevion's VideoIPath.
ITU-T METHOD
ITU's Telecommunication Standardization Sector (ITU-T) plays a crucial role in defining the core
transport and access technologies that underpin communications networks around the world. Today's
advanced wireless, broadband, and multimedia technologies are all powered by ITU standards.
Primary function: The ITU-T mission is to ensure the efficient and timely production of standards
covering all fields of telecommunications and information and communication technologies (ICTs) on a
worldwide basis, as well as to define tariff and accounting principles for international telecommunication
services.
The main products of ITU-T are Recommendations (ITU-T Recs): standards defining how
telecommunication networks operate and interwork. ITU-T Recommendations have non-mandatory status
until they are adopted into national laws.
The ITU Guidelines are a critical tool to assist policy makers and national regulatory authorities to
develop a clear, flexible and user-friendly national emergency telecommunications plan with a multi-
stakeholder approach.
INTRODUCTION
OpenDaylight (ODL) [1], hosted by the Linux Foundation, is an open-source platform for network
programmability aimed at advancing software-defined networking (SDN). The OpenDaylight controller is
JVM software and can be run on any operating system and hardware that supports Java. The
controller is an implementation of SDN and continues to grow. Created to accelerate the adoption of SDN
and Network Functions Virtualization (NFV), OpenDaylight provides an open platform for network
programmability and cloud computing, designed to enable SDN and create a solid NFV foundation for
networks of all sizes. In this style of application development, a set of loosely coupled modules can be
integrated into one large application. "Loosely coupled" means the modules are independent but can still
communicate with one another. The ODL architecture is built on the Open Services Gateway initiative
(OSGi), a dynamic module system for Java that supports modular application development. The
architecture is layered: applications form the top layer, the controller platform sits in the middle, and the
network elements make up the lower layer. The heart of ODL is the middle layer, which contains the
basic network functions, such as topology, statistics, and forwarding services; the platform network
functions, which include modules for specific networking tasks; and the service abstraction layer, which
provides an abstraction between the lower and upper layers and routes service requests between modules.
II. SOFTWARE DEFINED NETWORKING (SDN)
Open software-defined networking (Open SDN) is the use of standards-based protocols and open
interfaces to abstract the network control plane from the data plane. Open SDN enables unified control
and programmability across a network of heterogeneous physical and virtual networking devices [2].
Open SDN is based on a three-tier architecture of virtual devices:
• Northbound open APIs for application developers
• An open-core controller
• Southbound standards-based data plane communication protocols
A secured southbound (SB) interface is established between the controller and each element of the
underlying network, through which the rules are forwarded. Networking elements store the rules in a
chain of flow tables. When a received packet matches a rule, the defined action (drop, forward to a certain
port, and so on) is applied; when no entry in the flow tables matches the packet's header fields, the switch
or router either drops the packet or sends it to the controller [2].
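The table-miss behavior described above can be sketched in a few lines of Python; this is a simplification for illustration, not the OpenFlow pipeline itself.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: dict   # header fields the rule matches, e.g. {"dst": "10.0.0.5"}
    action: str   # defined action, e.g. "forward:port2" or "drop"

flow_table = [
    FlowRule({"dst": "10.0.0.5"}, "forward:port2"),
    FlowRule({"dst": "10.0.0.9"}, "drop"),
]

def handle_packet(headers: dict) -> str:
    for rule in flow_table:
        if all(headers.get(k) == v for k, v in rule.match.items()):
            return rule.action         # matching rule found: apply its action
    return "send-to-controller"        # table miss: punt to the controller

print(handle_packet({"src": "10.0.0.1", "dst": "10.0.0.5"}))   # forward:port2
print(handle_packet({"src": "10.0.0.1", "dst": "10.0.0.77"}))  # send-to-controller
```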
OpenDaylight Helium components include a fully pluggable controller, interfaces, protocol plug-ins, and
applications. The Helium controller consists of three key blocks [4]:
• The OpenDaylight controller platform
• Northbound applications and services
• Southbound plug-ins and protocols
UNIT-V
Benefits of Network Functions Virtualization
Network functions virtualization (NFV) is an emerging theme within the telecoms industry, and over the
past few years it has become a catalyst for major transformational changes in the network. Applying NFV
brings many benefits to network operators, contributing to significant changes in the telecommunications
industry. Benefits include:
• Reduced equipment costs and reduced power consumption through consolidating equipment. Cost
efficiency is a main driver of NFV: NFV abstracts the underlying hardware and enables elasticity,
scalability, and automation.
• Improved flexibility of network service provisioning and reduced time to deploy new services.
Deployment is faster because the typical network operator cycle of innovation is shortened. The
economies of scale required to cover investments in hardware-based functionality no longer apply to
software-based development, making other modes of feature evolution feasible. NFV enables network
operators to significantly reduce the maturation cycle.
• More efficient test and integration. The possibility of running production, test, and reference
facilities on the same infrastructure reduces development costs and time to market.
• Rapid scaling. Services can be scaled up or down as required. In addition, service deployment is
faster because services can be provisioned remotely in software, without any site visits to install new
hardware.
• A wide eco-system and greater openness. NFV opens the virtual appliance market to pure software
entrants, small players, and academia, encouraging more innovation to bring new services and new
revenue streams quickly and at much lower risk.
• Reduced energy consumption by exploiting power management features in standard servers and
storage, as well as workload consolidation and location optimization. For example, using virtualization
techniques it is possible to concentrate the workload on a smaller number of servers during off-peak
hours (e.g., overnight) so that the remaining servers can be switched off or put into an energy-saving
mode.
• Improved operational efficiency by taking advantage of the greater uniformity of the physical
network platform and its homogeneity with other support platforms.
SDN involves the separation of the control and data planes, centralization of control, and
programmability of the network, whereas NFV transfers network functions from dedicated
appliances to generic servers. Network Functions Virtualization is able to support SDN by providing the
infrastructure upon which the SDN software can be run. Furthermore, Network Functions Virtualization
aligns closely with the SDN objective of using commodity servers and switches.
Virtual Network Functions (VNFs) are virtualized network services, running on open computing platforms,
that were formerly carried out by proprietary, dedicated hardware. Common VNFs include virtualized
routers, firewalls, WAN optimization, and network address translation (NAT) services. Most VNFs
run in virtual machines (VMs) on common virtualization infrastructure software such as VMware or
KVM.
VNFs can be linked together like building blocks in a process known as service chaining. Although the
concept is not new, service chaining, like the application provisioning process, is simplified and
shortened by VNF technology.
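A toy Python sketch of the idea: each VNF is modeled as a plain function and the chain is simply the ordered list a packet passes through. The function names are illustrative, not a real NFV API.

```python
def firewall(packet):
    packet["fw_checked"] = True        # pretend we inspected the packet
    return packet

def nat(packet):
    packet["src"] = "203.0.113.10"     # rewrite to a public address
    return packet

def wan_optimizer(packet):
    packet["compressed"] = True        # pretend we compressed the payload
    return packet

# The "service chain": order matters, and links can be re-arranged in software.
service_chain = [firewall, nat, wan_optimizer]

def apply_chain(packet, chain):
    for vnf in chain:
        packet = vnf(packet)
    return packet

print(apply_chain({"src": "10.0.0.8", "dst": "198.51.100.2"}, service_chain))
```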
Broadcast Domain
• A broadcast domain is the area in which all of the devices receive the same information or data at the
same time.
• The amount of broadcast traffic is directly proportional to the size of the broadcast domain: the
larger the broadcast domain, the greater the broadcast traffic.
• This traffic is always an issue for switches at Layer 2 because it wastes bandwidth and is
unmanaged.
So, to reduce this broadcast traffic, or the size of the broadcast domain, we use the Virtual Local
Area Network (VLAN).
What is VLAN?
• A VLAN is a logical grouping of devices or departments of the same type across one or more local
areas, designed to interact with each other through data links as if they shared the same physical
location, because they share the same broadcast domain.
• Each VLAN is identified by a unique number called the VLAN ID.
• It is also possible to divide one large physical LAN into two smaller logical LANs; sometimes
the layout of the network equipment does not match the organization’s structure.
For example,
the engineering and finance departments of a company might have computers on the same physical LAN
because they are in the same wing of the building, but it might be easier to manage the systems if
engineering and finance each logically had its own network, or virtual LAN (VLAN).
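For a concrete view of how VLAN membership travels with a frame, the sketch below uses the scapy library (assumed to be installed) to build an 802.1Q-tagged Ethernet frame; the addresses and VLAN IDs are illustrative.

```python
from scapy.all import Ether, Dot1Q, IP

# Tag the frame for VLAN 10 (say, engineering); finance might be VLAN 20.
frame = Ether(src="00:11:22:33:44:55") / Dot1Q(vlan=10) / IP(dst="192.168.10.5")

frame.show()  # the Dot1Q header carries the VLAN ID between switches
```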
• A VPN is a service that helps you stay private when you are online; it protects your network
connection when you use public networks.
• VPNs use encryption techniques to scramble your internet traffic and hide identifying data such as
your IP address, concealing your online identity. A VPN creates a secure tunnel for your device's
connection to the internet.
• To use a VPN, you need to install software known as a VPN client
on your device, which lets you establish a secure connection.
• The VPN client connects through the Wi-Fi to the ISP (Internet Service Provider); the VPN
client encrypts your information using VPN protocols so that the data stays
secure.
• Next, the VPN client establishes a VPN tunnel across the public network to the
VPN server. The tunnel protects your information from being intercepted, and
your IP address and actual location are replaced at the VPN server to enable a private and
secure connection.
• Finally, the VPN server connects to the destination web server, where the encrypted
message is decrypted. In this way your original IP address is hidden by the VPN, and the VPN
tunnel protects your data from being intercepted.
• In this manner, your data stays anonymous and secure as it passes through the public network,
and that is the difference between a normal connection and a VPN connection.
Whether you work remotely or are connected to public Wi-Fi, using a VPN is always the safest option. In
addition to providing secure, encrypted data transfer, VPNs are also used to disguise your whereabouts
and give you access to regional web content. VPN servers act as proxies on the internet, so your
actual location cannot be established; a VPN enables you to spoof your location by switching to a server in
another country, thereby changing your apparent location. Encryption is a pillar of VPNs.
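To illustrate the encrypt-then-tunnel idea, here is a minimal sketch using symmetric encryption from the Python cryptography package. A real VPN negotiates keys with protocols such as IKE or TLS rather than sharing them like this; the sketch only shows why an eavesdropper on the public network sees nothing useful.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret between VPN client and server
tunnel = Fernet(key)

plaintext = b"GET /private-report HTTP/1.1"
ciphertext = tunnel.encrypt(plaintext)   # what travels through the tunnel
print(ciphertext)                        # unreadable to an eavesdropper

print(tunnel.decrypt(ciphertext))        # recovered at the VPN server
```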
| | VLAN | VPN |
| --- | --- | --- |
| Full form | Virtual Local Area Network | Virtual Private Network |
| Types | 1. Port-based VLAN, 2. Protocol-based VLAN, 3. MAC-based VLAN | 1. Remote Access VPN, 2. Site-to-Site VPN |
| Hierarchical structure | VLAN is a subset of VPN | VPN is a superset of VLAN |
Despite the many differences between VLANs and VPNs, there are several similarities between
them:
• In terms of network scalability, both VPNs and VLANs allow institutions and corporations to
maintain their networks more effectively.
• Both VPNs and VLANs can be used to enhance privacy and security: VPNs by encrypting network
traffic, VLANs by segregating it.
• As far as routing traffic is concerned, both VPNs and VLANs work with IP addresses.
• On top of the physical network layer, both VPNs and VLANs are used to create independent virtual
networks.
• Both save costs for institutions and corporations by reducing the need for
physical network components.
Conclusion:
Both VLANs and VPNs are chosen to meet users' demands for security and privacy. A VLAN is basically a
means of logically segregating networks, even across geographically distant locations, without physically
segregating them with multiple switches. A VPN, on the other hand, is used to connect two points through
a secure, encrypted tunnel. A VLAN can be seen as a subcategory of VPN, and a VPN as a means of creating
a secure network for secure data transmission.
Q: Which is easier and cheaper to deploy, a VLAN or a VPN?
Ans: A VLAN is very easy to implement and low cost compared to a VPN, and it is usually deployed at
the edge of the ISP network (ISP means Internet Service Provider).
Q: What is a MAC-based VLAN used for?
Ans: A Media Access Control (MAC)-based VLAN is used to help map traffic to the ingress interface.
Q: How does a VPN keep the connection secure?
Ans: Encryption and decryption are used in a VPN to make the connection more secure and strong.
Encryption is a way of converting a readable message into an unreadable one so that an unauthorized
person or body cannot read it; decryption is the way of converting the encrypted message back to
its original form.
Q: What methods are used to encrypt messages or data?
Ans: There are multiple methods used to encrypt messages or data: symmetric, asymmetric,
and hashing.
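A short sketch of the hashing method named above, using Python's standard library; symmetric encryption was illustrated earlier, and asymmetric encryption (e.g., RSA key pairs) follows the same encrypt/decrypt pattern with separate public and private keys.

```python
import hashlib

message = b"confidential message"

# Hashing is one-way: the digest cannot be converted back into the message,
# which makes it suitable for integrity checks rather than confidentiality.
digest = hashlib.sha256(message).hexdigest()
print("SHA-256:", digest)
```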
In this article, we’ve put together six examples of network functions virtualization use cases that
demonstrate how NFV is being used today to address a range of networking challenges, enhance services,
and reduce costs. So, let’s jump straight in.
Network Virtualization
The main purpose for which NFV technologies are being used, chiefly by telecom companies around the
world, is of course network virtualization. As discussed earlier, NFV separates the hardware
from the software, creating a virtual network on top of the physical network. This decoupling
of hardware and software allows service providers to expand and accelerate the development and
innovation of services. It also helps to improve critical network requirements such as provisioning. In
order to optimize their network services, consumers look to network virtualization to separate network
functions such as DNS, caching, IDS, and firewalling from the proprietary hardware that was,
until recently, the dominant solution, allowing these functions to run in software
instead. Network virtualization gives service providers the agility and flexibility they need when
rolling out new network services. It helps them reduce their spending on bulky physical hardware and
the costs associated with running, maintaining, and occasionally repairing it. Two of the best examples of
network virtualization are MEC and SD-WAN. But, while network virtualization is one of the
most popular applications of network functions virtualization, it is by no means the only one.
Mobile Edge Computing
Mobile edge computing is another technological innovation that has gone from strength to strength over
the past couple of years, and it only seems to be gaining momentum as we carry on into the
year 2020. Many people may ask how network functions virtualization and mobile edge
computing relate to one another. In fact, they are intrinsically
linked and actively influence each other, both in development and in the expansion of
applications. Network functions virtualization allows edge devices to perform computational
services and provide network functions by generating and utilizing one or more virtual
machines (VMs). Multi-Access Edge Computing (MEC) is a clear example of these technologies: MEC
uses mobile edge computing to provide ultra-low latencies, and the technology was born
from the ongoing rollouts of 5G networks. The MEC architecture uses individual components which
are similar to those of NFV.
In the mobile context, edge computing refers to components such as radio towers, mini data centers, and
local data centers. NFV takes some of these mobile network service functions and moves them from
hardware to software. Network functions virtualization, alongside other technological and network
advances such as software-defined networking and artificial intelligence, will likely
become a prime solution for the network challenges of tomorrow thanks to their early integration and
combination with each other.
Orchestration Engines
One of the most beneficial NFV use cases is orchestration engines. With traditional legacy
networks, issues such as low agility, human error, and a lack of automated processes and alerts made those
networks extremely limited in their capabilities. One of the biggest causes of network downtime
is human error, which is why automated systems are in such high demand. These systems also have the
benefit of reducing maintenance and upkeep costs, since they require markedly
less human intervention. NFV orchestration uses programming technology to manage the
connections between network functions and services; the orchestration layer handles both the NFVi
and the VNFs. Centralized orchestration engines can prove an extremely worthwhile investment for
those willing to get started, though when considering a centralized automation engine, a number of
features are widely regarded as critical.
Video Analytics
Another technology whose potential has grown hugely since the inception of the Internet of
Things is video analytics systems and software. Companies can now capture massive amounts of
data using IoT video and smart devices installed in their factories, stores, offices, and even farms. But
most of the time, high-performance AI video analysis is performed only by cloud-native applications or
powerful servers located in the cloud, so transferring these large amounts of data for analysis from
on-premises to the cloud becomes a real challenge. Modern networks usually suffer from end-to-end
network latency, which poses a real problem for the apps and network services that are extremely
sensitive to network delays, such as video analytics. To solve this challenge, enterprises have
been turning to NFV and SDN architectures to reduce network resource utilization and improve
latency. Combined with video analytics at the network edge, these technologies could reduce
bandwidth use by up to 90%, according to some proposals. An example of this video analytics technology
is a device like the NVA-3000 from Lanner, an enterprise-grade NVR for video
surveillance and machine vision. Together with a low-latency network such as LTE or 5G, an NVR can gather
video from multiple input channels such as video surveillance; it buffers the video, pre-processes it, and sends it
over the network. The VNF provider sends network functions on demand to the network edge.
With IoT, smart, and edge devices enabling ever more data to be generated, collected, and
analyzed, video analytics systems and software have become, and will continue to be, an increasingly
important part of utilizing the Big Data now available. Network functions virtualization is the architecture
on which to build these systems.
Security
Just like the tools we use to farm our crops or manufacture our cars, the tools we use to protect our
physical and virtual assets have evolved thanks to the various leaps in technological progress that have
occurred over the last decade. Many security vendors already offer virtual firewalls to protect
VMs; the F5 Gi Firewall VNF Service, for example, is one of the most popular NFV solutions with
firewall capabilities. But in reality, firewalls are just one example: nearly every security device or
component will eventually be virtualized using network functions virtualization as well as software-
defined networking. One of the main attractions of virtualized security is the idea of
centralized control mechanisms with equally distributed enforcement, and these two benefits alone have
led companies looking to bolster their security to investigate these kinds of security solutions.
Network Slicing
Network slicing has gained a lot of popularity since the beginning of 5G design and rollouts. This
technology aims to slice one physical network into multiple logical networks. NFV and network slicing are
closely related concepts, and NFV is likely to play a crucial role in slicing, especially
for 5G. Slicing the network is like creating sophisticated Virtual Private Networks (VPNs), and a slice can
consist of a mix of physical and virtual instances. The technology creates multiple logical network
instances on top of the same underlying physical network. Each instance or “slice” can be customized
and optimized for different functions and allocated to specific departments. A slice is often provided as
a VNF. The generic 5G network slicing framework proposed in “Network Slicing in 5G:
Survey and Challenges,” a paper published by IEEE, is based on three layers: service, network
function, and infrastructure. The service layer (the operator) pushes VNFs to the function layer, which runs on
generic hardware on-premises, and the orchestrator handles the slicing across all three layers.
Since each slice of the network can be treated as a network function, NFV will automatically allocate
the necessary resources to each network slice at the right QoS and performance level (a toy sketch of
slice definitions follows the list below). Some benefits of network slicing:
• Network slicing provides configurable and optimized support for specific services.
• It adds flexibility, efficiency, and agility for the end user.
• It reduces CapEx and OpEx.
• It improves deployment times for network services.
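Here is the toy sketch referred to above: slices modeled as logical network instances with their own QoS targets over one physical network. This is an illustration, not a 3GPP or ETSI data model, and all names and numbers are invented.

```python
slices = {
    "emergency-services": {"latency_ms": 5,   "min_bandwidth_mbps": 50,  "isolation": "high"},
    "video-streaming":    {"latency_ms": 50,  "min_bandwidth_mbps": 200, "isolation": "medium"},
    "iot-sensors":        {"latency_ms": 500, "min_bandwidth_mbps": 1,   "isolation": "low"},
}

def admit(slice_name: str, available_mbps: int) -> bool:
    """Toy admission check: can the infrastructure meet the slice's bandwidth target?"""
    return slices[slice_name]["min_bandwidth_mbps"] <= available_mbps

print(admit("video-streaming", available_mbps=150))  # False: the slice needs 200
```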
Final Words
Network functions virtualization may not have quite conquered the world just yet, but with the world
becoming increasingly virtual and NFV (as well as SDN) gaining more and more momentum, we could
soon see a day when virtual security, along with the other use cases featured in this list, becomes the rule
rather than the exception.
NFV MANO (network functions virtualization management and orchestration)
NFV MANO (network functions virtualization management and orchestration), also called MANO, is an
architectural framework for managing and orchestrating virtualized network functions (VNFs) and other
software components.
The European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG NFV)
defined the MANO architecture to facilitate the deployment and connection of services as they are
decoupled from dedicated physical devices and moved to virtual machines (VMs).
Because network components can be deployed in hours rather than months in virtual environments,
MANO reduces complexity by managing and orchestrating resources that include compute, storage, and
networking, along with virtual network functions such as routing, firewalls, and load balancing.
NFV MANO comprises three main functional blocks:
1. NFV orchestrators
2. VNF managers
3. Virtualized infrastructure managers (VIMs)
NFV orchestrators consist of two layers -- service orchestration and resource orchestration -- which
control the integration of new network services and VNFs into a virtual framework. NFV orchestrators
also validate and authorize NFV infrastructure (NFVi) resource requests. VNF managers oversee the
lifecycle management of VNF instances.
VIMs control and manage the NFV infrastructure, which encompasses compute, storage, and network resources.
MANO works with templates for standard VNFs, so users can pick from existing NFVi resources to
deploy their NFV platform. For NFV MANO to be effective, it must be integrated with application
programming interfaces (APIs) in existing systems in order to work with multivendor technologies across
multiple network domains. Telecommunications providers' operations and business support systems (OSS/BSS)
also need to interoperate with MANO.
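To give a feel for the templates mentioned above, here is a hypothetical, much-simplified VNF descriptor written as a Python dictionary. Real MANO stacks use standardized templates (for example TOSCA YAML), but the shape of the information is similar; every field and value here is invented for illustration.

```python
# A hypothetical, simplified VNF descriptor; field names are illustrative.
vnf_descriptor = {
    "name": "virtual-firewall",
    "vendor": "example-vendor",
    "vm_flavor": {"vcpus": 2, "ram_mb": 4096, "disk_gb": 20},
    "connection_points": ["mgmt", "inside", "outside"],
    "scaling": {"min_instances": 1, "max_instances": 4},
}

# An orchestrator would validate such a request against available NFVi
# resources before asking the VIM to create the virtual machine.
print(vnf_descriptor["vm_flavor"])
```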
One of the next steps needed in NFV MANO's evolution is to include SDN controllers in the MANO
architecture. Another key requirement is the ability to integrate with legacy network architectures and to
link to existing operational and billing systems.
NFV Architecture:
In a typical network architecture, an individual proprietary hardware component, such as a router, switch,
gateway, firewall, load balancer, or intrusion detection system, performs a specific networking function. A
virtualized network substitutes software programs that run on virtual machines for these pieces of
hardware to carry out networking operations.
• Applications: Software delivers many forms of network functionality by substituting for the
hardware elements of a conventional network design (virtualized network functions).