Computer Networks Module-1


MODULE 1 & 2: Portion

Chapter 1 (Introduction): Data Communications: Components, Representations, Data Flow; Networks: Physical Structures, Network Types: LAN, WAN, Switching, Internet

Chapter 2 (Network Models): Protocol Layering: Scenarios, Principles, Logical Connections; TCP/IP Protocol Suite: Layered Architecture, Layers in TCP/IP suite, Description of layers; Encapsulation and Decapsulation, Addressing, Multiplexing and Demultiplexing; The OSI Model: OSI Versus TCP/IP

Chapter 11 (MODULE-2, Data-Link Layer): Introduction: Nodes and Links, Services, Categories of links, Sublayers; Link Layer addressing: Types of addresses, ARP; Data Link Control (DLC) services: Framing, Flow and Error Control; Data Link Layer Protocols: Simple Protocol, Stop and Wait protocol, Piggybacking

TEXT BOOK: Data Communications and Networking, B. Forouzan, 5th Ed., McGraw-Hill Education, 2016, ISBN: 1-25-906475-3


MODULE 1

Data communications
When we communicate, we are sharing information. This sharing
can be local or remote. Between individuals,

local communication usually occurs face to face, while remote


communication takes place over distance.

The term telecommunication, which includes telephony, telegraphy,


and television, means communication at a distance (tele is Greek
for “far”). The word data refers to information presented in
whatever form is agreed upon by the parties creating and using the
data.

Data communications are the exchange of data between two


devices via some form of transmission medium such as a wire
cable. For data communications to occur, the communicating
devices must be part of a communication system made up of a
combination of hardware (physical equipment) and software
(programs). The effectiveness of a data communications system
depends on four fundamental characteristics: delivery, accuracy,
timeliness, and jitter.

• Delivery. The system must deliver data to the correct destination. Data must be received by the intended device or user and only by that device or user.

• Accuracy. The system must deliver the data accurately. Data that have been altered in transmission and left uncorrected are unusable.

• Timeliness. The system must deliver data in a timely manner. Data delivered late are useless. In the case of video and audio, timely delivery means delivering data as they are produced, in the same order that they are produced, and without significant delay. This kind of delivery is called real-time transmission.

• Jitter. Jitter refers to the variation in the packet arrival time. It is the uneven delay in the delivery of audio or video packets. For example, let us assume that video packets are sent every 30 ms. If some of the packets arrive with 30-ms delay and others with 40-ms delay, an uneven quality in the video is the result.
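To make the 30-ms/40-ms example concrete, here is a minimal Python sketch (not part of the textbook) that computes the inter-arrival gaps and their deviation from the expected interval; the arrival timestamps are hypothetical.

```python
# Minimal sketch: inter-arrival gaps and jitter for video packets.
# The timestamps below are hypothetical, matching the 30-ms / 40-ms example above.

send_interval_ms = 30                      # packets are produced every 30 ms
arrival_times_ms = [0, 30, 70, 100, 140]   # hypothetical arrival times at the receiver

# Inter-arrival gaps between consecutive packets
gaps = [t2 - t1 for t1, t2 in zip(arrival_times_ms, arrival_times_ms[1:])]

# Jitter here is the deviation of each gap from the expected 30-ms interval
jitter = [abs(g - send_interval_ms) for g in gaps]

print(gaps)    # [30, 40, 30, 40] -> uneven delivery
print(jitter)  # [0, 10, 0, 10]   -> nonzero values mean visible quality variation
```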

Components

A data communications system has five components:

1. Message. The message is the information (data) to be communicated. Popular forms of information include text, numbers, pictures, audio, and video.

2. Sender. The sender is the device that sends the data message. It can be a computer, workstation, telephone handset, video camera, and so on.

3. Receiver. The receiver is the device that receives the message. It can be a computer, workstation, telephone handset, television, and so on.

4. Transmission medium. The transmission medium is the physical path by which a message travels from sender to receiver. Some examples of transmission media include twisted-pair wire, coaxial cable, fiber-optic cable, and radio waves.

5. Protocol. A protocol is a set of rules that govern data communications. It represents an agreement between the communicating devices. Without a protocol, two devices may be connected but not communicating, just as a person speaking French cannot be understood by a person who speaks only Japanese.

Data Representation

Information today comes in different forms such as text, numbers, images, audio, and video.

Text - In data communications, text is represented as a bit pattern, a sequence of bits (0s or 1s). Different sets of bit patterns have been designed to represent text symbols. Each set is called a code, and the process of representing symbols is called coding. Today, the prevalent coding system is called Unicode, which uses 32 bits to represent a symbol or character used in any language in the world. The American Standard Code for Information Interchange (ASCII), developed some decades ago in the United States, now constitutes the first 128 characters in Unicode and is also referred to as Basic Latin.
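As a small illustration of text as bit patterns, the following Python sketch (an illustration for this handout, not from the textbook) prints the Unicode code point and bit pattern of a couple of example characters.

```python
# Minimal sketch: a text symbol as a bit pattern.
# ord() gives the Unicode code point; ASCII characters fall in the
# Basic Latin range, so 'A' has the same value in ASCII and Unicode.

ch = 'A'
code_point = ord(ch)               # 65
bits = format(code_point, '08b')   # 8-bit pattern
print(code_point, bits)            # 65 01000001

# A non-Latin symbol still maps to a single Unicode code point;
# it simply needs a wider bit pattern.
print(ord('€'), format(ord('€'), '032b'))
```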


Numbers- Numbers are also represented by bit patterns. However, a code such as ASCII is not used to represent numbers; the number is directly converted to a binary number to simplify mathematical operations. Appendix B discusses several different numbering systems.

Images- Images are also represented by bit patterns. In its simplest form, an image is composed of a matrix of pixels (picture elements), where each pixel is a small dot. The size of the pixel depends on the resolution.

For example, an image can be divided into 1000 pixels or 10,000 pixels.
In the second case, there is a better representation of the image (better
resolution), but more memory is needed to store the image. After an
image is divided into pixels, each pixel is assigned a bit pattern. The size
and the value of the pattern depend on the image. For an image made of only black-and-white dots (e.g., a chessboard), a 1-bit pattern is enough to represent a pixel. If an image is not made of pure white and pure black pixels, we can increase the size of the bit pattern to include gray scale. For example, to show four levels of gray scale, we can use 2-bit patterns. A black pixel can be represented by 00, a dark gray pixel by 01, a light gray pixel by 10, and a white pixel by 11. There are several methods to represent color images. One method is called RGB, so called because each color is made of a combination of three primary colors: red, green, and blue. The intensity of each color is measured, and a bit pattern is assigned to it. Another method is called YCM, in which a color is made of a combination of three other primary colors: yellow, cyan, and magenta.
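The 2-bit gray-scale coding above can be tried out directly. The following Python sketch is illustrative only; the 2 x 3 image and its pixel labels are hypothetical.

```python
# Minimal sketch of the 2-bit gray-scale coding described above:
# 00 = black, 01 = dark gray, 10 = light gray, 11 = white.

gray_levels = {'black': '00', 'dark gray': '01', 'light gray': '10', 'white': '11'}

# A hypothetical 2 x 3 image, one label per pixel
image = [['black', 'white', 'dark gray'],
         ['light gray', 'white', 'black']]

# Each pixel becomes its 2-bit pattern; the whole image is the concatenation
bitstream = ''.join(gray_levels[p] for row in image for p in row)
print(bitstream)          # 001101101100
print(len(bitstream))     # 6 pixels x 2 bits = 12 bits
```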

Audio- Audio refers to the recording or broadcasting of sound or


music. Audio is by nature different from text, numbers, or images. It is
continuous, not discrete. Even when we use a microphone to change
voice or music to an electric signal, we create a continuous signal.

Video- Video refers to the recording or broadcasting of a picture or


movie. Video can either be produced as a continuous entity (e.g., by a
TV camera), or it can be a combination of images, each a discrete entity,
arranged to convey the idea of motion.

Data Flow

Communication between two devices can be simplex, half-duplex, or full-duplex, as shown in the figure.

Simplex- In simplex mode, the communication is unidirectional, as on a one-way street. Only one of the two devices on a link can transmit; the other can only receive (see Figure a).

example- Keyboards and traditional monitors are examples of simplex devices. The keyboard can only introduce input; the monitor can only accept output.

fig: data flow

Half-Duplex- In half-duplex mode, each station can both transmit and receive, but not at the
same time. When one device is sending, the other can only receive, and vice versa (see Figure
b). The half-duplex mode is like a one-lane road with traffic allowed in both directions. When
cars are traveling in one direction, cars going the other way must wait.

example-Walkie-talkies and CB (citizens band) radios are both half-duplex systems.

The half-duplex mode is used in cases where there is no need for communication in
both directions at the same time; the entire capacity of the channel can be utilized for
each direction.

Full-Duplex- In full-duplex mode (also called duplex), both stations can transmit and receive
simultaneously (see Figure c). The full-duplex mode is like a two-way street with traffic flowing
in both directions at the same time. In full-duplex mode, signals going in one direction share
the capacity of the link with signals going in the other direction.

This sharing can occur in two ways: Either the link must contain two physically
separate transmission paths, one for sending and the other for receiving; or the capacity of the
channel is divided between signals traveling in both directions.

NETWORKS
A network is the interconnection of a set of devices capable of communication. A device can be a host (or an end system, as it is sometimes called) such as a large computer, desktop, laptop, workstation, cellular phone, or security system.


A device can also be a connecting device such as a router, which


connects the network to other networks, a switch, which connects
devices together, a modem (modulator-demodulator), which changes
the form of data, and so on.

These devices in a network are connected using wired or wireless


transmission media such as cable or air. When we connect two
computers at home using a plug-and-play router, we have created a
network, although very small.

Network Criteria

A network must be able to meet a certain number of criteria. The most


important of these are performance, reliability, and security.

Performance- Performance can be measured in many ways, including


transit time and response time. Transit time is the amount of time
required for a message to travel from one device to another. Response
time is the elapsed time between an inquiry and a response.

The performance of a network depends on a number of factors,


including the number of users, the type of transmission medium, the
capabilities of the connected hardware, and the efficiency of the
software

Performance is often evaluated by two networking metrics: throughput


and delay.

Reliability- network reliability is measured by the frequency of failure,


the time it takes a link to recover from a failure, and the network’s
robustness in a catastrophe.

Security- security issues include protecting data from unauthorized


access, protecting data from damage and development, and
implementing policies and procedures for recovery from breaches and
data losses.

Physical Structures

Network attributes: type of connection and physical topology.

Type of Connection

A network is two or more devices connected through links. A link is a communications pathway that transfers data from one device to another.

There are two possible types of connections:

Point-to-Point - A point-to-point connection provides a dedicated link between two devices. The entire capacity of the link is reserved for transmission between those two devices. Most point-to-point connections use an actual length of wire or cable to connect the two ends, but other options, such as microwave or satellite links, are also possible (see Figure a).

example-When we change television channels by infrared remote


control, we are establishing a point-to-point connection between the
remote control and the television’s control system.

Multipoint A multipoint (also called multidrop) connection is one


in which more than two specific devices share a single link (see Figure
b).

fig: Type of connection

In a multipoint environment, the capacity of the channel is shared, either spatially or temporally. If several devices can use the link simultaneously, it is a spatially shared connection. If users must take turns, it is a timeshared connection.

Physical Topology

The term physical topology refers to the way in which a network is laid out physically. Two or
more devices connect to a link; two or more links form a topology. The topology of a network
is the geometric representation of the relationship of all the links and linking devices
(usually called nodes) to one another.

There are four basic topologies possible: mesh, star, bus, and ring.


Mesh Topology- In a mesh topology, every device has a dedicated point-to-point link to every other device. The term dedicated means that the link carries traffic only between the two devices it connects.

We need n(n − 1) physical links in a fully connected mesh network with n nodes. If each physical link allows communication in both directions (duplex mode), we need n(n − 1)/2 duplex-mode links.

To accommodate that many links, every device on the network must have n − 1 input/output (I/O) ports (see Figure) to be connected to the other n − 1 stations.
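A quick worked check of these formulas for the five-device mesh shown in the figure; the helper function below is just for illustration.

```python
# Worked check of the mesh-topology formulas above, for n = 5 devices.

def mesh_links(n):
    simplex = n * (n - 1)        # one-directional links
    duplex = n * (n - 1) // 2    # links when each carries traffic both ways
    ports_per_device = n - 1     # I/O ports needed on every device
    return simplex, duplex, ports_per_device

print(mesh_links(5))   # (20, 10, 4): a 5-node full mesh needs 10 duplex links
```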

Advantages-

•Use of dedicated links guarantees that each connection can carry


its own data load, thus eliminating the traffic problems that can occur
when links must be shared by multiple devices.

•A mesh topology is robust. If one link becomes unusable, it does not


incapacitate the entire system.

•Privacy or security. When every message travels along a dedicated line,


only the intended recipient sees it. Physical boundaries prevent other
users from gaining access to messages.

•Point-to-point links make fault identification and fault isolation easy.


Traffic can be routed to avoid links with suspected problems. This
facility enables the network manager to discover the precise location of
the fault and aids in finding its cause and solution.

fig: A fully connected mesh topology (five devices)

example- A mesh topology is used in the connection of telephone regional offices, in which each regional office needs to be connected to every other regional office.


Star Topology

In a star topology, each device has a dedicated point-to-point link only


to a central controller, usually called a hub.

The devices are not directly linked to one another. A star topology does not allow direct traffic between devices. The controller acts as an exchange: if one device wants to send data to another, it sends the data to the controller, which then relays the data to the other connected device.

fig: A star topology connecting four stations

Advantages


• A star topology is less expensive than a mesh topology.

• Each device needs only one link and one I/O port to connect to the hub. This factor makes it easy to install and reconfigure. Far less cabling needs to be housed, and additions, moves, and deletions involve only one connection: between that device and the hub.

• A star topology is robust. If one link fails, only that link is affected. All other links remain active. This factor also lends itself to easy fault identification and fault isolation. As long as the hub is working, it can be used to monitor link problems and bypass defective links.

Disadvantages

• The whole topology depends on one single point, the hub. If the hub goes down, the whole system is dead.

• More cabling is required in a star than in some other topologies (such as ring or bus).

The star topology is used in local-area networks (LANs); high-speed LANs often use a star topology with a central hub.


Bus Topology

A bus topology is multipoint. One long cable acts as a backbone to link all the devices in a network.

Nodes are connected to the bus cable by drop lines and taps. A drop line is a
connection running between the device and the main cable.

A tap is a connector that either splices into the main cable or punctures the sheathing of a
cable to create a contact with the metallic core.

As a signal travels along the backbone, some of its energy is transformed into heat. Therefore,
it becomes weaker and weaker as it travels farther and farther. For this reason there is a limit
on the number of taps a bus can support and on the distance between those taps.

Advantages

• Easy to install.

• A bus uses less cabling than mesh or star topologies. Only the backbone cable stretches through the entire facility. Each drop line has to reach only as far as the nearest point on the backbone.

Disadvantages

• Difficult reconnection and fault isolation.

• Difficult to add new devices; adding new devices requires modification or replacement of the backbone.

• Signal reflection at the taps can cause degradation in quality. This degradation can be controlled by limiting the number and spacing of devices connected to a given length of the cable.

• A fault or break in the bus cable stops all transmission. The damaged area reflects signals back in the direction of origin, creating noise in both directions.


Bus topology was one of the first topologies used in the design of early local-area networks. Traditional Ethernet LANs can use a bus topology, but they are less popular now.

Ring Topology

In a ring topology, each device has a dedicated point-to-point


connection with only the two devices on either side of it. A signal is
passed along the ring in one direction, from device to device, until it
reaches its destination.

Each device in the ring incorporates a repeater. When a device receives


a signal intended for another device, its repeater regenerates the bits
and passes them along

fig: A ring topology connecting six stations

Advantages

• A ring is relatively easy to install and reconfigure. Each device is linked to only its immediate neighbors (either physically or logically). To add or delete a device requires changing only two connections.

• Fault isolation is simplified. Generally, in a ring a signal is circulating at all times. If one device does not receive a signal within a specified period, it can issue an alarm. The alarm alerts the network operator to the problem and its location.

Disadvantage

In a simple ring, a break in the ring (such as a disabled station) can disable the entire network.
This weakness can be solved by using a dual ring or a switch capable of closing off the break.

Ring topology was prevalent when IBM introduced its local-area network, Token Ring. Today,
the need for higher-speed LANs has made this topology less popular.


NETWORK TYPES

We discuss two common types of networks here: local-area networks (LANs) and wide-area networks (WANs).

Local Area Network (LAN)

fig: An isolated LAN in the past and today

A local area network (LAN) is usually privately owned and connects some hosts in a single office, building, or campus. Depending on the needs of an organization, a LAN can be as simple as two PCs and a printer in someone's home office, or it can extend throughout a company and include audio and video devices.

Each host in a LAN has an identifier, an address, that uniquely defines the host in the LAN. A
packet sent by a host to another host carries both the source host’s and the destination host’s
addresses.


In the past, all hosts in a network were connected through a common


cable, which meant that a packet sent from one host to another was
received by all hosts. The intended recipient kept the packet; the others
dropped the packet.

Today, most LANs use a smart connecting switch, which is able to


recognize the destination address of the packet and guide the packet to
its destination without sending it to all other hosts. The switch
alleviates the traffic in the LAN and allows more than one pair to
communicate with each other at the same time if there is no common
source and destination among them.

Wide Area Network (WAN)

A wide area network (WAN) is also an interconnection of devices capable of communication. However, it differs from a LAN in its size, the devices it connects, and its ownership (see the comparison below). We see two distinct examples of WANs today: point-to-point WANs and switched WANs.

Point-to-Point WAN

A point-to-point WAN is a network that connects two communicating devices through a transmission medium (cable or air).

fig: Point-to-Point WAN

Switched WAN

A switched WAN is a network with more than two ends. A switched WAN is used in the backbone of global communication today. We can say that a switched WAN is a combination of several point-to-point WANs that are connected by switches.


fig: A switched WAN

LAN vs WAN

1. A LAN is normally limited in size, spanning an office, a building, or a campus; a WAN has a wider geographical span, spanning a town, a state, a country, or even the world.

2. A LAN interconnects hosts; a WAN interconnects connecting devices such as switches, routers, or modems.

3. A LAN is normally privately owned by the organization that uses it; a WAN is normally created and run by communication companies and leased by an organization that uses it.

Internetwork

Today, it is very rare to see a LAN or a WAN in isolation; they are connected to one another.
When two or more networks are connected, they make an internetwork, or internet.

example- Assume that an organization has two offices, one on the east coast and the other on
the west coast. Each office has a LAN that allows all employees in the office to communicate
with each other. To make the communication between employees at different offices possible,
the management leases a point-to-point dedicated WAN from a service provider, such
as a telephone company, and connects the two LANs. Now the company has an internetwork,
or a private internet (with lowercase i). Communication between offices is now possible.


fig: An internetwork made of two LANs and a point-to-point dedicated WAN

When a host in the west coast office sends a message to another host in the same office, the
router blocks the message, but the switch directs the message to the destination. On the other
hand, when a host on the west coast sends a message to a host on the east coast, router R1
routes the packet to router R2, and the packet reaches the destination. Figure shows another
internet with several LANs and WANs connected. One of the WANs is a switched WAN with
four switches.

fig: A heterogeneous network made of four WANs and three LANs


Switching
An internet is a switched network in which a switch
connects at least two links together. A switch needs to forward
data from a network to another network when required. The two
most common types of switched networks are circuit-switched
and packet-switched networks.

Circuit-Switched Network

In a circuit-switched network, a dedicated connection, called a circuit, is always available between the two end systems; the switch can only make it active or inactive (as in a continuous communication between two telephones). The figure shows a very simple switched network that connects four telephones at each end. We have used telephone sets instead of computers as end systems because circuit switching was very common in telephone networks in the past.

fig: Circuit-Switched Network

The thick line connecting two switches is a high-capacity communication line that can handle
four voice communications at the same time; the capacity can be shared between all pairs of
telephone sets. The switches used in this example have forwarding tasks but no storing
capability.

Let us look at two cases.

In the first case, all telephone sets are busy; four people at one site are talking with
four people at the other site; the capacity of the thick line is fully used.

In the second case, only one telephone set at one side is connected to a telephone set at the
other side; only one-fourth of the capacity of the thick line is used. This means that a circuit-
switched network is efficient only when it is working at its full capacity; most of the time, it is
inefficient because it is working at partial capacity.
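The two cases can be expressed as simple utilization figures. The Python sketch below is only a worked restatement of the numbers above, assuming the thick line can carry four voice channels at once.

```python
# Worked numbers for the two cases above.

line_capacity = 4          # simultaneous calls the inter-switch line can carry

for active_calls in (4, 1):
    utilization = active_calls / line_capacity
    print(f"{active_calls} call(s): {utilization:.0%} of the line is used")
# 4 call(s): 100% of the line is used
# 1 call(s): 25% of the line is used  -> the idle capacity cannot be reused
```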


The reason to make the capacity of the thick line four times the
capacity of each voice line is that we do not want communication
to fail when all telephone sets at one side want to be connected
with all telephone sets at the other side

Packet-Switched Network

In a computer network, the communication between the two


computers is done in blocks of data called packets

This allows switches to function for both storing and


forwarding because a packet is an independent entity that
can be stored and sent later. Fig shows a small packet-
switched network that connects four computers at one site to
four computers at the other site.

fig: Packet-Switched Network

A router in a packet-switched network has a queue that can store and forward the packet.

Example- Now assume that the capacity of the thick line is only twice the capacity of the data
line connecting the computers to the routers.

If only two computers (one at each site) need to communicate with each other, there is no
waiting for the packets. However, if packets arrive at one router when the thick line is already
working at its full capacity, the packets should be stored and forwarded in the order
they arrived. The two simple examples show that a packet-switched network is more efficient
than a circuit switched network, but the packets may encounter some delays.
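A minimal sketch of the store-and-forward idea follows, assuming hypothetical arrival times and a fixed per-packet transmission time; it only illustrates why packets may have to wait in a router queue when the outgoing line is busy.

```python
# Minimal sketch of store-and-forward behaviour at a router: packets that
# arrive while the outgoing (thick) line is busy wait in a FIFO queue.

from collections import deque

queue = deque()                       # router buffer
line_busy_until = 0                   # time the outgoing line becomes free
transmit_time = 10                    # hypothetical ms to forward one packet

arrivals = [(0, 'P1'), (2, 'P2'), (4, 'P3')]   # (arrival time in ms, packet)

for t, pkt in arrivals:
    queue.append((t, pkt))

while queue:
    t, pkt = queue.popleft()
    start = max(t, line_busy_until)   # wait if the line is still busy
    line_busy_until = start + transmit_time
    print(f"{pkt} arrived at {t} ms, forwarded at {start} ms")
# P2 and P3 experience queuing delay because the line was already in use.
```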

The Internet
An internet (note the lowercase i) is two or more networks that can communicate with each
other. The most notable internet is called the Internet (uppercase I ), and is composed
of thousands of interconnected networks.


Figure 1.15 shows a conceptual (not geographical) view of the


Internet. The figure shows the Internet as several backbones,
provider networks, and customer networks. At the top level, the
backbones are large networks owned by some
communication companies such as Sprint, Verizon (MCI),
AT&T, and NTT. The backbone networks are connected through
some complex switching systems, called peering points. At the
second level, there are smaller networks, called provider
networks, that use the services of the backbones for a fee. The
provider networks are connected to backbones and sometimes
to other provider networks.

fig: The internet today

The customer networks are networks at the edge of the Internet that actually use the services
provided by the Internet. They pay fees to provider networks for receiving services. Backbones
and provider networks are also called Internet Service Providers (ISPs). The backbones are
often referred to as international ISPs; the provider networks are often referred to as
national or regional ISP


Network Models
Protocol Layering
A protocol defines the rules that both the sender and receiver and
all intermediate devices need to follow to be able to
communicate effectively.

When communication is simple, we may need only one


simple protocol; when the communication is complex,
we may need to divide the task between different layers, in
which case we need a protocol at each layer, or protocol
layering.

Let us develop two simple scenarios to better understand the


need for protocol layering.

Scenarios

First Scenario

In the first scenario, communication is so simple that it can occur


in only one layer. Assume Maria and Ann are neighbors with
a lot of common ideas. Communication between Maria and
Ann takes place in one layer, face to face, in the same
language, as shown in Figure

fig: single layer protocol

Second Scenario

In the second scenario, we assume that Ann is offered a higher-level position in her company,
but needs to move to another branch located in a city very far from Maria. The two friends still
want to continue their communication and exchange ideas because they have come up with an
innovative project to start a new business when they both retire. They decide to continue their
conversation using regular mail through the post office. However, they do not want their ideas
to be revealed by other people if the letters are intercepted. They agree on
an encryption/decryption technique. The sender of the letter encrypts it to make it unreadable
by an intruder; the receiver of the letter decrypts it to get the original letter.


Now we can say that the communication between Maria and


Ann takes place in three layers, as shown in Figure . We assume
that Ann and Maria each have three machines (or robots) that
can perform the task at each layer.

fig: A three layer protocol

Assume that Maria sends the first letter to Ann. Maria talks to the machine at the third layer as
though the machine is Ann and is listening to her. The third layer machine listens to what
Maria says and creates the plaintext (a letter in English), which is passed to the second layer
machine.

The second layer machine takes the plaintext, encrypts it, and creates the ciphertext, which is
passed to the first layer machine. The first layer machine, presumably a robot, takes
the ciphertext, puts it in an envelope, adds the sender and receiver addresses, and mails it

At Ann’s side, the first layer machine picks up the letter from Ann’s mail box, recognizing the
letter from Maria by the sender address. The machine takes out the ciphertext from
the envelope and delivers it to the second layer machine. The second layer machine decrypts
the message, creates the plaintext, and passes the plaintext to the third-layer machine. The
third layer machine takes the plaintext and reads it as though Maria is speaking.
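The three-layer exchange can be mimicked in a few lines of code. In the Python sketch below the layer functions and the toy letter-shift "encryption" are illustrative assumptions, not the technique used in the book; the point is only that each layer performs one task in each direction.

```python
# Minimal sketch of the three-layer scenario above.

def layer3_talk(idea):            # third layer: turn the spoken idea into plaintext
    return idea

def layer2_encrypt(plaintext):    # second layer: plaintext -> ciphertext (toy shift)
    return ''.join(chr(ord(c) + 1) for c in plaintext)

def layer2_decrypt(ciphertext):   # second layer, opposite direction
    return ''.join(chr(ord(c) - 1) for c in ciphertext)

def layer1_mail(ciphertext):      # first layer: put it in an addressed envelope
    return {'from': 'Maria', 'to': 'Ann', 'body': ciphertext}

# Maria's side (top to bottom), then Ann's side (bottom to top)
letter = layer1_mail(layer2_encrypt(layer3_talk("hello ann")))
received = layer2_decrypt(letter['body'])
print(letter['body'], '->', received)   # ciphertext -> "hello ann"
```

Note that replacing only the second-layer functions with a different cipher would not affect the other layers, which is exactly the modularity argument above.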

Need for protocol layering

1) Protocol layering enables us to divide a complex task into several smaller and simpler tasks.

For example, from fig, we could have used only one


machine to do the job of all three machines. However, if
the encryption/ decryption done by the machine is not
enough to protect their secrecy, they would have to change the
whole machine. In the present situation, they need to change
only the second layer machine; the other two can remain the
same. This is referred to as modularity. Modularity in this case
means independent layers.

2) A layer (module) can be defined as a black box with inputs and


outputs, without concern about how inputs are changed to
outputs. If two machines provide the same outputs when given
the same inputs, they can replace each other.

For example, Ann and Maria can buy the second layer
machine from two different manufacturers. As long as the
two machines create the same ciphertext from the same
plaintext and vice versa, they do the job.

Advantages

• Protocol layering allows us to separate the services from the implementation. A lower layer gives its services to the upper layer; we do not care about how the layer is implemented.

For example, Maria may decide not to buy the machine (robot) for the first layer; she can do the job herself. As long as Maria can do the tasks provided by the first layer, in both directions, the communication system works.

• Another advantage of protocol layering in the Internet is that communication does not always use only two end systems; there are intermediate systems that need only some layers, but not all layers. If we did not use protocol layering, we would have to make each intermediate system as complex as the end systems, which makes the whole system more expensive.

Principles of Protocol Layering

First Principle The first principle dictates that if we want


bidirectional communication, we need to make each layer so that
it is able to perform two opposite tasks, one in each direction.
For example, the third layer task is to listen (in one direction) and
talk (in the other direction). The second layer needs to be able to
encrypt and decrypt. The first layer needs to send and receive
mail.

Second Principle The second principle that we need to follow in protocol layering is that the two objects under each layer at both sites should be identical.

For example, the object under layer 3 at both sites should be a plaintext letter. The object under layer 2 at both sites should be a ciphertext letter. The object under layer 1 at both sites should be a piece of mail.

Logical Connections
After following the above two principles, we can think about
logical connection between each layer as shown in Figure . This
means that we have layer-to-layer communication. Maria and
Ann can think that there is a logical (imaginary) connection at
each layer through which they can send the object created from
that layer. We will see that the concept of logical connection will
help us better understand the task of layering we encounter in
data communication and networking.

fig: Logical connection between peer layer

TCP/IP PROTOCOL SUITE


TCP/IP is a protocol suite (a set of protocols organized in different layers) used in the Internet
today. It is a hierarchical protocol made up of interactive modules, each of which provides a
specific functionality. The term hierarchical means that each upper level protocol is supported
by the services provided by one or more lower level protocols. The original TCP/IP protocol
suite was defined as four software layers built upon the hardware. Today, however, TCP/IP is
thought of as a five-layer model. Figure shows both configurations.

Layered Architecture
To show how the layers in the TCP/IP protocol suite are involved in communication between
two hosts, we assume that we want to use the suite in a small internet made up of three LANs (links), each with a link-layer switch. We also assume that the links are connected by one router, as shown in the figure.

fig: Layers in the TCP/IP protocol suite

fig: Communication through an internet


Assume that computer A communicates with computer B. As the figure shows, there are five communicating devices in this communication: the source host (computer A), the link-layer switch in link 1, the router, the link-layer switch in link 2, and the destination host (computer B).

The source host needs to create a message in the application


layer and send it down the layers so that it is physically sent to
the destination host. The destination host needs to receive the
communication at the physical layer and then deliver it
through the other layers to the application layer

The router is involved in only three layers; there is no transport


or application layer in a router. Although a router is always
involved in one network layer, it is involved in n combinations of
link and physical layers in which n is the number of links the
router is connected to. The reason is that each link may use its
own data-link or physical protocol.

For example, in the above figure, the router is involved in three


links, but the message sent from source A to destination B is
involved in two links. Each link may be using different link- layer
and physical-layer protocols; the router needs to receive a
packet from link 1 based on one pair of protocols and deliver it
to link 2 based on another pair of protocols.

A link-layer switch in a link, however, is involved only in


two layers, data-link and physical. Although each switch in the
above figure has two different connections, the connections are
in the same link, which uses only one set of protocols. This
means that, unlike a router, a link- layer switch is involved only in
one data-link and one physical layer.

Layers in the TCP/IP Protocol Suite

To better understand the duties of each layer, we need to think


about the logical connections between layers.


fig: Logical connections in our simple internet

Using logical connections makes it easier to think about the duty


of each layer. As the figure shows, the duty of the application,
transport, and network layers is end-to-end. However, the duty
of the data-link and physical layers is hop-to-hop, in which a hop
is a host or router.

In other words, the domain of duty of the top three layers is the
internet, and the domain of duty of the two lower layers is the
link.

Another way of thinking of the logical connections is to think


about the data unit created from each layer. In the top three
layers, the data unit (packets) should not be changed by any
router or link-layer switch. In the bottom two layers, the packet
created by the host is changed only by the routers, not by the
link-layer switches.

Fig shows the second principle discussed previously for protocol


layering. We show the identical objects below each layer related
to each device.

fig: identical objects in the TCP/IP protocol suite

Note that, although the logical connection at the network layer is between the two hosts, we
can only say that identical objects exist between two hops in this case because a router may
fragment the packet at the network layer and send more packets than received .Note that the
link between two hops does not change the object.


Description of Each Layer


Physical Layer

Physical layer is responsible for carrying individual bits in a frame


across the link. Although the physical layer is the lowest level in
the TCP/IP protocol suite, the communication between two
devices at the physical layer is still a logical communication
because there is another, hidden layer, the transmission media,
under the physical layer.

Two devices are connected by a transmission medium (cable or


air). Transmission medium does not carry bits, it carries electrical
or optical signals. So the bits received in a frame from the data-
link layer are transformed and sent through the transmission
media, but we can think that the logical unit between two
physical layers in two devices is a bit. There are several protocols
that transform a bit to a signal.

The physical layer of TCP/IP describes hardware standards such


as IEEE 802.3, the specification for Ethernet network media, and
RS-232, the specification for standard pin connectors.

The following are the main responsibilities of the physical layer: definition of hardware specifications, encoding and signaling, data transmission and reception, and topology and physical network design.

Data-link Layer

The Internet is made up of several links (LANs and WANs) connected by routers. The data-link layer is responsible for taking the datagram and moving it across the link (node-to-node communication).

The link can be a wired LAN with a link-layer switch, a wireless LAN, a wired WAN, or a wireless WAN. We can also have different protocols used with any link type.

In each case, the data-link layer is responsible for moving the packet through the link. TCP/IP does not define any specific protocol for the data-link layer. It supports all the standard and proprietary protocols. The data-link layer takes a datagram and encapsulates it in a packet called a frame.

Each link-layer protocol may provide a different service, such as framing, flow control, error control, and congestion control.

Network Layer

The network layer is responsible for creating a connection


between the source computer and the destination computer.
The communication at the network layer is host-to-host.
However, since there can be several routers from the source to
the destination, the routers in the path are responsible for
choosing the best route for each packet.

The network layer is responsible for packetizing and for routing and forwarding the packet through possible routes. Other services are error and flow control and congestion control.

The network layer in the Internet includes the main


protocol, Internet Protocol (IP), that defines the format of the
packet, called a datagram at the network layer. IP also defines
the format and the structure of addresses used in this layer.

IP is also responsible for routing a packet from its source to its


destination, which is achieved by each router forwarding the
datagram to the next router in its path.

IP is a connectionless protocol that provides no flow


control, no error control, and no congestion control services.
This means that if any of these services is required for an
application, the application should rely only on the transport-
layer protocol.

The network layer also includes unicast (one-to-one) and


multicast (one-to-many) routing protocols. A routing protocol
does not take part in routing (it is the responsibility of IP), but it
creates forwarding tables for routers to help them in the routing
process. The network layer also has some auxiliary protocols that
help IP in its delivery and routing tasks.
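As a rough illustration of the relationship between a forwarding table and IP, the Python sketch below looks up a next hop in a table. The prefixes and router names are hypothetical, and real IP forwarding uses longest-prefix matching, which is omitted here.

```python
# Minimal sketch: a routing protocol builds the forwarding table;
# IP merely looks the next hop up in it when forwarding a datagram.

forwarding_table = {
    '10.0.1.0/24': 'R2',       # destination network -> next-hop router
    '10.0.2.0/24': 'R3',
    'default':     'R1',
}

def next_hop(destination_network):
    return forwarding_table.get(destination_network, forwarding_table['default'])

print(next_hop('10.0.2.0/24'))      # R3
print(next_hop('192.168.5.0/24'))   # R1 (default route)
```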

The Internet Control Message Protocol (ICMP) helps IP to report some problems when routing a packet. The Internet Group Management Protocol (IGMP) is another protocol that helps IP in multicasting. The Dynamic Host Configuration Protocol (DHCP) helps IP to get the network-layer address for a host. The Address Resolution Protocol (ARP) is a protocol that helps IP to find the link-layer address of a host or a router when its network-layer address is given.

Transport Layer

The logical connection at the transport layer is also end-to-end. The transport layer at the source host gets the message from the application layer, encapsulates it in a transport-layer packet, and sends it, through the logical connection, to the transport layer at the destination host.

There is more than one protocol in the transport layer, which
means that each application program can use the protocol that
best matches its requirement. There are a few transport- layer
protocols in the Internet, each designed for some specific task.

The main protocol, Transmission Control Protocol (TCP), is a


connection-oriented protocol that first establishes a logical
connection between transport layers at two hosts before
transferring data. It creates a logical pipe between two TCPs for
transferring a stream of bytes. TCP provides flow control
(matching the sending data rate of the source host with the
receiving data rate of the destination host to prevent
overwhelming the destination), error control (to guarantee that
the segments arrive at the destination without error and
resending the corrupted ones), and congestion control to reduce
the loss of segments due to congestion in the network.

User Datagram Protocol (UDP), is a connectionless protocol


that transmits user datagrams without first creating a logical
connection. In UDP, each user datagram is an independent entity
without being related to the previous or the next one (the
meaning of the term connectionless). UDP is a simple protocol
that does not provide flow, error, or congestion control.

Its simplicity, which means small overhead, is attractive to an


application program that needs to send short messages and
cannot afford the retransmission of the packets involved in TCP,
when a packet is corrupted or lost.
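The practical difference shows up directly in the socket API. The Python sketch below is illustrative only: the address and port are hypothetical and no server is listening, so the TCP connect call is left commented out.

```python
# Minimal sketch: TCP needs a connection before data flows,
# while UDP simply sends an independent datagram.

import socket

# UDP: connectionless, each datagram stands on its own
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"short message", ("192.0.2.10", 5000))   # fire-and-forget
udp.close()

# TCP: a logical connection must be established first
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("192.0.2.10", 5000))    # handshake would happen here
# tcp.sendall(b"a reliable byte stream")
tcp.close()
```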

A new protocol, Stream Control Transmission Protocol (SCTP) is


designed to respond to new applications that are emerging in
the multimedia.

Application Layer

As Figure shows, the logical connection between the two


application layers is end to-end. The two application layers
exchange messages between each other as though there were a
bridge between the two layers. However, communication is done
through all the layers.

Communication at the application layer is between two processes (two programs running at this layer). To communicate, a process sends a request to the other process and receives a response. Process-to-process communication is the duty of the application layer.

The application layer in the Internet includes many predefined protocols.

• The File Transfer Protocol (FTP) is used for transferring files from one host to another.

• The Terminal Network (TELNET) and Secure Shell (SSH) are used for accessing a site remotely.

• The Simple Network Management Protocol (SNMP) is used by an administrator to manage the Internet at global and local levels.

• The Domain Name System (DNS) is used by other protocols to find the network-layer address of a computer.

• The Internet Group Management Protocol (IGMP) is used to collect membership in a group.

Encapsulation and Decapsulation

One of the important concepts in protocol layering in the Internet is encapsulation/decapsulation. The figure shows this concept for the small internet.

fig: encapsulation/ decapsulation

We have not shown the layers for the link-layer switches because no
encapsulation/ decapsulation occurs in this device. Figure show the encapsulation in the
source host, decapsulation in the destination host, and encapsulation and decapsulation in the
router.

Encapsulation at the Source Host

At the source, we have only encapsulation.


1. At the application layer, the data to be exchanged is referred to as a message. A message normally does not contain any header or trailer, but if it does, we refer to the whole as the message. The message is passed to the transport layer.

2. The transport layer takes the message as the payload, the load that the transport layer should take care of. It adds the transport-layer header to the payload, which contains the identifiers of the source and destination application programs that want to communicate plus some more information that is needed for the end-to-end delivery of the message, such as information needed for flow, error control, or congestion control. The result is the transport-layer packet, which is called the segment (in TCP) and the user datagram (in UDP). The transport layer then passes the packet to the network layer.

3. The network layer takes the transport-layer packet as data or payload and adds its own header to the payload. The header contains the addresses of the source and destination hosts and some more information used for error checking of the header, fragmentation information, and so on. The result is the network-layer packet, called a datagram. The network layer then passes the packet to the data-link layer.

4. The data-link layer takes the network-layer packet as data or payload and adds its own header, which contains the link-layer addresses of the host or the next hop (the router). The result is the link-layer packet, which is called a frame. The frame is passed to the physical layer for transmission (see the sketch below).
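A compact way to picture these steps is to treat each header as a prefix added to an opaque payload. The field names in the Python sketch below are illustrative assumptions, not the real header formats.

```python
# Minimal sketch of encapsulation at the source host: each layer treats what it
# receives as payload and prepends its own (simplified) header.

message = b"GET /index.html"                                  # application layer

segment  = b"TCP|src_port=5000|dst_port=80|"    + message     # transport layer
datagram = b"IP|src=10.0.0.5|dst=198.51.100.7|" + segment     # network layer
frame    = b"ETH|dst_mac=AA|src_mac=BB|"        + datagram    # data-link layer

print(frame)
# The physical layer then transmits the frame as individual bits/signals.
# Decapsulation at the destination strips the headers in the reverse order.
```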

Decapsulation and Encapsulation at the Router


At the router, we have both decapsulation and encapsulation
because the router is connected to two or more links.

1.After the set of bits are delivered to the data-link layer, this
layer decapsulates the datagram from the frame and passes it to
the network layer.

2.The network layer only inspects the source and destination addresses in the datagram header and consults its forwarding table to find the next hop to which the datagram is to be delivered. The contents of the datagram should not be changed by the network layer in the router unless there is a need to fragment the datagram if it is too big to be passed through the next link. The datagram is then passed to the data-link layer of the next link.

Decapsulation at the Destination Host

At the destination host, each layer only decapsulates the packet


received, removes the payload, and delivers the payload to
the next-higher layer protocol until the message reaches the
application layer. It is necessary to say that decapsulation in the
host involves error checking

Addressing
we have logical communication between pairs of layers in this
model. Any communication that involves two parties needs two
addresses: source address and destination address. Although it
looks as if we need five pairs of addresses, one pair per
layer, we normally have only four because the physical layer
does not need addresses; the unit of data exchange at the
physical layer is a bit, which definitely cannot have an address.

Figure 2.9 shows the addressing at each layer. At the application


layer, we normally use names to define the site that provides
services, such as someorg.com, or the e-mail address, such as
somebody@coldmail.com.

fig: Addressing in the TCP/IP Protocol suite

At the transport layer, addresses are called port numbers, and these define the application-
layer programs at the source and destination. Port numbers are local addresses that
distinguish between several programs running at the same time.

At the network-layer, the addresses are global, with the whole Internet as the scope.
A network-layer address uniquely defines the connection of a device to the Internet.


The link-layer addresses, sometimes called MAC addresses, are


locally defined addresses, each of which defines a specific host or
router in a network (LAN or WAN).

Multiplexing and Demultiplexing


Since the TCP/IP protocol suite uses several protocols at some layers, we have multiplexing at the source and demultiplexing at the destination.

Multiplexing means that a protocol at a layer can


encapsulate a packet from several next- higher layer protocols
(one at a time); demultiplexing means that a protocol can
decapsulate and deliver a packet to several next-higher layer
protocols (one at a time). Figure shows the concept of
multiplexing and demultiplexing at the three upper layers.

fig: multiplexing and demultiplexing

To be able to multiplex and demultiplex, a protocol needs to have a field in its


header to identify to which protocol the encapsulated packets belong.

At the transport layer, either UDP or TCP can accept a message from several application-layer
protocols.

At the network layer, IP can accept a segment from TCP or a user datagram from UDP. IP can
also accept a packet from other protocols such as ICMP, IGMP, and so on.

At the data-link layer, a frame may carry the payload coming from IP or other protocols such as
ARP .
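A minimal sketch of that demultiplexing field at the network layer follows: the protocol numbers 1, 6, and 17 are the standard values for ICMP, TCP, and UDP, while the handler functions are hypothetical.

```python
# Minimal sketch of demultiplexing: a field in the lower-layer header says
# which upper-layer protocol the payload belongs to.

def handle_tcp(payload):  print("deliver to TCP:", payload)
def handle_udp(payload):  print("deliver to UDP:", payload)
def handle_icmp(payload): print("deliver to ICMP:", payload)

ip_demux = {6: handle_tcp, 17: handle_udp, 1: handle_icmp}   # IP "protocol" field

def network_layer_receive(protocol_field, payload):
    ip_demux[protocol_field](payload)                        # demultiplexing

network_layer_receive(17, b"a user datagram")    # deliver to UDP: ...
network_layer_receive(6,  b"a TCP segment")      # deliver to TCP: ...
```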

THE OSI MODEL


An ISO standard that covers all aspects of network communications is the Open Systems Interconnection (OSI) model. It was first introduced in the late 1970s. An open system is a set of protocols that allows any two different systems to communicate regardless of their underlying architecture.

The purpose of the OSI model is to show how to facilitate


communication between different systems without requiring
changes to the logic of the underlying hardware and software.
The OSI model is not a protocol; it is a model for
understanding and designing a network architecture that is
flexible, robust, and interoperable. The OSI model was intended
to be the basis for the creation of the protocols in the OSI stack.
The OSI model is a layered framework for the design of network
systems that allows communication between all types of
computer systems.

It consists of seven separate but related layers, each of which


defines a part of the process of moving information across a
network

fig: OSI model

OSI versus TCP/IP


When we compare the two models, we find that two layers, session and presentation,
are missing from the TCP/IP protocol suite. These two layers were not added to the TCP/IP
protocol suite after the publication of the OSI model. The application layer in the suite
is usually considered to be the combination of three layers in the OSI model, as shown in
Figure .

Two reasons were mentioned for this decision. First, TCP/IP has more than one transport-layer
protocol. Some of the functionalities of the session layer are available in some of the
transport- layer protocols.


Second, the application layer is not only one piece of


software. Many applications can be developed at this layer.
If some of the functionalities mentioned in the session
and presentation layers are needed for a particular
application, they can be included in the development of that
piece of software.

fig: TCP/IP and OSI model


MODULE-2 DATA-LINK LAYER


INTRODUCTION
The Internet is a combination of networks glued together by connecting devices (routers or switches). If a packet is to travel from a host to another host, it needs to pass through these networks. Communication at the data-link layer is made up of five separate logical connections between the data-link layers in the path.

fig: Communication at the data-link layer

The data-link layer at Alice’s computer communicates with the data-link layer at router R2. The
data-link layer at router R2 communicates with the data-link layer at router R4,

and so on. Finally, the data-link layer at router R7 communicates with the data-link layer at
Bob’s computer. Only one data-link layer is involved at the source or the destination, but two data-link layers are involved at each router. The reason is that Alice’s and Bob’s computers are each connected to a single network, but each router takes input from one network and sends output to another network. Note that although switches are also involved in the data-link-layer communication, for simplicity we have not shown them in the figure.

Nodes and Links

Communication at the data-link layer is node-to-node. A


data unit from one point in the Internet needs to pass
through many networks (LANs and WANs) to reach another
point. These LANs and WANs are connected by routers. It is
customary to refer to the two end hosts and the routers as
nodes and the networks in between as links. Figure shows
a simple representation of links and nodes when the path of the
data unit is only six nodes.

fig: links and nodes

The first node is the source host; the last node is the destination host. The other four nodes
are four routers. The first, the third, and the fifth links represent the three LANs; the second
and the fourth links represent the two WANs.

Services

The data-link layer is located between the physical and the network layers. The data link layer
provides services to the network layer; it receives services from the physical layer.

Services provided by the data-link layer.

The duty scope of the data-link layer is node-to-node. When a packet is travelling in
the Internet, the data-link layer of a node (host or router) is responsible for delivering a
datagram to the next node in the path. For this purpose, the data-link layer of the sending
node needs to encapsulate the datagram received from the network in a frame, and the data-link layer of the receiving node needs to decapsulate the datagram from the frame. In other words, the data-link layer of the source host needs only to encapsulate, the data-link layer of the destination host needs to decapsulate, but each intermediate node needs to both encapsulate and decapsulate.

One may ask why we need encapsulation and decapsulation at


each intermediate node. The reason is that each link may be
using a different protocol with a different frame format. Even if
one link and the next are using the same protocol, encapsulation
and decapsulation are needed because the link-layer addresses
are normally different.

An analogy may help in this case. Assume a person needs to travel


from her home to her friend’s home in another city. The traveller
can use three transportation tools. She can take a taxi to go to
the train station in her own city, then travel on the train from
her own city to the city where her friend lives, and finally reach
her friend’s home using another taxi. Here we have a source
node, a destination node, and two intermediate nodes. The
traveller needs to get into the taxi at the source node, get out of
the taxi and get into the train at the first intermediate node
(train station in the city where she lives), get out of the train and
get into another taxi at the second intermediate node (train
station in the city where her friend lives), and finally get out of
the taxi when she arrives at her destination. A kind of
encapsulation occurs at the source node, encapsulation and
decapsulation occur at the intermediate nodes, and
decapsulation occurs at the destination node. Our traveller is the
same, but she uses three transporting tools to reach the
destination. Figure shows the encapsulation and decapsulation at
the data-link layer.

For simplicity, we have assumed that we have only one


router between the source and destination. The datagram
received by the data-link layer of the source host is encapsulated
in a frame. The frame is logically transported from the source
host to the router. The frame is decapsulated at the data-link
layer of the router and encapsulated at another frame. The new
frame is logically transported from the router to the destination
host. Note that, although we have shown only two data-link Page 38
layers at the router, the router actually has three data-link layers
because it is connected to three physical links.
Computer Networks

fig: communication with only three nodes

Framing

The first service provided by the data-link layer is framing. The data-link layer at each node
needs to encapsulate the datagram (packet received from the network layer) in a frame before
sending it to the next node. The node also needs to decapsulate the datagram from the frame
received on the logical channel. Although we have shown only a header for a frame, a frame may have both a header and a trailer. Different data-link layers have different formats for framing.

Flow Control

If the rate of produced frames is higher than the rate of consumed frames, frames at
the receiving end need to be buffered while waiting to be consumed (processed). Definitely,
we cannot have an unlimited buffer size at the receiving side. We have two choices.

The first choice is to let the receiving data-link layer drop the frames if its buffer is full. The
second choice is to let the receiving data-link layer send feedback to the sending data-link
layer to ask it to stop or slow down.

Different data-link-layer protocols use different strategies for flow control. Flow control
also occurs at the transport layer, with a higher degree of importance.

Error Control

At the sending node, a frame in a data-link layer needs to be changed to bits, transformed to
electromagnetic signals, and transmitted through the transmission media. At the
receiving node, electromagnetic signals are received, transformed to bits, and put together to
create a frame. Since electromagnetic signals are susceptible to error, a frame is susceptible to
error. The error needs first to be detected. After detection, it needs to be either corrected at
the receiver node or discarded and retransmitted by the sending node. Error detection and
correction is an issue in every layer (node-to-node or host-to-host).

Congestion Control

Although a link may be congested with frames, which may result in frame loss, most
data-link-layer protocols do not directly use congestion control to alleviate congestion,
although some wide-area networks do. In general, congestion control is considered an issue
in the network layer or the transport layer because of its end-to-end nature.

Two Categories of Links

In a point-to-point link, the link is dedicated to the two devices; in a broadcast link, the link
is shared between several pairs of devices. For example, when two friends use traditional
home phones to chat, they are using a point-to-point link; when the same two friends use
their cellular phones, they are using a broadcast link (the air is shared among many cell
phone users).

Two Sublayers

The data-link layer is divided into two sublayers: data link control (DLC) and media access
control (MAC). LAN protocols actually use the same strategy. The data link control sublayer
deals with all issues common to both point-to-point and broadcast links; the media access
control sublayer deals only with issues specific to broadcast links. In other words, we
separate these two types of links at the data-link layer, as shown in the figure.

fig: dividing the data link layer into two sublayers

LINK-LAYER ADDRESSING
In a connectionless internetwork such as the Internet we cannot make a datagram reach its
destination using only IP addresses. The reason is that each datagram in the Internet, from the
same source host to the same destination host, may take a different path. The source and
destination IP addresses define the two ends but cannot define which links the datagram
should pass through. We need to remember that the IP addresses in a datagram should not be
changed. If the destination IP address in a datagram changes, the packet never reaches its
destination; if the source IP address in a datagram changes, the destination host or a router
can never communicate with the source if a response needs to be sent back or an error needs
to be reported back to the source (ICMP).

The above discussion shows that we need another addressing mechanism in a connectionless
internetwork: the link-layer addresses of the two nodes. A link-layer address is sometimes
called a link address, sometimes a physical address, and sometimes a MAC address. Since a
link is controlled at the data-link layer, the addresses need to belong to the data-link layer.
When a datagram passes from the network layer to the data-link layer, the datagram will be
encapsulated in a frame and two data-link addresses are added to the frame header. These
two addresses are changed every time the frame moves from one link to another. The figure
demonstrates the concept in a small internet.

fig: IP address and link layer addresses in a small internet

The above figure shows three links and two routers, and also only two hosts: Alice (source)
and Bob (destination). For each host, two addresses, the IP address (N) and the link-layer
address (L), are shown. Note that a router has as many pairs of addresses as the number of
links the router is connected to. We have shown three frames, one in each link. Each frame
carries the same datagram with the same source and destination addresses (N1 and N8), but
the link-layer addresses of the frame change from link to link.

In link 1, the link-layer addresses are L1 and L2. In link 2, they are L4 and L5. In link 3, they
are L7 and L8. Note that the IP addresses and the link-layer addresses are not in the same
order. For IP addresses, the source address comes before the destination address; for
link-layer addresses, the destination address comes before the source. The datagrams and
frames are designed in this way, and we follow the design. We may raise several questions:

❑The IP address of a router does not appear in any datagram sent from a source to a
destination, so why do we need to assign IP addresses to routers? The answer is that in some
protocols a router may act as a sender or receiver of a datagram. For example, in routing
protocols a router is a sender or a receiver of a message. The communications in these
protocols are between routers.

❑Why do we need more than one IP address in a router, one for each interface? The answer
is that an interface is a connection of a router to a link. We will see that an IP address defines
a point in the Internet at which a device is connected. A router with n interfaces is connected
to the Internet at n points. This is the situation of a house at the corner of a street with two
gates; each gate has the address related to the corresponding street.

❑How are the source and destination IP addresses in a packet determined? The answer is
that the host should know its own IP address, which becomes the source IP address in the
packet. The application layer uses the services of DNS to find the destination address of the
packet and passes it to the network layer to be inserted in the packet.

❑How are the source and destination link-layer addresses determined for each link? Again,
each hop (router or host) should know its own link-layer address. The destination link-layer
address is determined by using the Address Resolution Protocol (ARP).

❑What is the size of link-layer addresses? The answer is that it depends on the protocol used
by the link. Although we have only one IP protocol for the whole Internet, we may be using
different data-link protocols in different links.

Three Types of Addresses: Some link-layer protocols define three types of addresses:
unicast, multicast, and broadcast.

Unicast Address: Each host or each interface of a router is assigned a unicast address.
Unicasting means one-to-one communication. A frame with a unicast destination address is
destined for only one entity in the link.

Example: The unicast link-layer addresses in the most common LAN, Ethernet, are 48 bits
(six bytes) that are presented as 12 hexadecimal digits separated by colons; for example, the
following is a link-layer address of a computer.

A2:34:45:11:92:F1

Multicast Address: Some link-layer protocols define multicast addresses. Multicasting means
one-to-many communication. However, the jurisdiction is local (inside the link).

Example: The multicast link-layer addresses in the most common LAN, Ethernet, are 48 bits
(six bytes) that are presented as 12 hexadecimal digits separated by colons. The second digit,
however, needs to be an odd number in hexadecimal. The following shows a multicast
address: A3:34:45:11:92:F1

Broadcast Address: Some link-layer protocols define a broadcast address. Broadcasting
means one-to-all communication. A frame with a destination broadcast address is sent to all
entities in the link.

Example: The broadcast link-layer addresses in the most common LAN, Ethernet, are 48 bits,
all 1s, that are presented as 12 hexadecimal digits separated by colons. The following
shows a broadcast address: FF:FF:FF:FF:FF:FF
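
As a small illustration (not from the text), the following hedged Python sketch classifies an
Ethernet address string as unicast, multicast, or broadcast using the least-significant bit of the
first byte; the helper name classify_mac is hypothetical.

def classify_mac(addr: str) -> str:
    """Classify a 48-bit Ethernet address given as 12 hex digits separated by colons."""
    octets = [int(part, 16) for part in addr.split(":")]
    if len(octets) != 6:
        raise ValueError("expected six colon-separated bytes")
    if all(octet == 0xFF for octet in octets):
        return "broadcast"          # FF:FF:FF:FF:FF:FF, all 1s
    if octets[0] & 0x01:            # LSB of the first byte set -> group address
        return "multicast"
    return "unicast"

print(classify_mac("A2:34:45:11:92:F1"))   # unicast
print(classify_mac("A3:34:45:11:92:F1"))   # multicast
print(classify_mac("FF:FF:FF:FF:FF:FF"))   # broadcast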

Address Resolution Protocol (ARP)


Anytime a node has an IP datagram to send to another node in a link, it has the IP address of
the receiving node. The source host knows the IP address of the default router. Each router
except the last one in the path gets the IP address of the next router by using its forwarding
table. The last router knows the IP address of the destination host. However, the IP address
of the next node is not helpful in moving a frame through a link; we need the link-layer
address of the next node. This is the time when the Address Resolution Protocol (ARP)
becomes helpful.

The ARP protocol is one of the auxiliary protocols defined in the network layer, as shown in
the figure. Although it belongs to the network layer, it maps an IP address to a link-layer
address. ARP accepts an IP address from the IP protocol, maps the address to the
corresponding link-layer address, and passes it to the data-link layer.

fig: Position of ARP in TCP/IP protocol suite

Anytime a host or a router needs to find the link-layer address of another host or router in its
network, it sends an ARP request packet. The packet includes the link-layer and IP addresses of
the sender and the IP address of the receiver. Because the sender does not know the link-layer
address of the receiver, the query is broadcast over the link using the link-layer
broadcast address.

fig: ARP operation


Every host or router on the network receives and processes the ARP request packet, but only
the intended recipient recognizes its IP address and sends back an ARP response packet. The
response packet contains the recipient's IP and link-layer addresses. The packet is unicast
directly to the node that sent the request packet.

In Figure (a) the system on the left (A) has a packet that needs to be delivered to another
system (B) with IP address N2. System A needs to pass the packet to its data-link layer for
the actual delivery, but it does not know the physical address of the recipient. It uses the
services of ARP by asking the ARP protocol to send a broadcast ARP request packet to ask
for the physical address of a system with an IP address of N2. This packet is received by
every system on the physical network, but only system B will answer it, as shown in Figure
(b). System B sends an ARP reply packet that includes its physical address. Now system A
can send all the packets it has for this destination using the physical address it received.

Caching

Let us assume that there are 20 systems connected to the network (link): system A, system B,
and 18 other systems. We also assume that system A has 10 datagrams to send to system B
in one second.

•Without using ARP, system A needs to send 10 broadcast frames. Each of the 18 other
systems needs to receive the frames, decapsulate the frames, remove the datagram and pass it
to its network layer to find out that the datagrams do not belong to it. This means processing
and discarding 180 broadcast frames.

•Using ARP, system A needs to send only one broadcast frame. Each of the 18 other systems
needs to receive the frame, decapsulate it, remove the ARP message and pass the message to
its ARP protocol to find that the frame must be discarded. This means processing and
discarding only 18 (instead of 180) broadcast frames. After system B responds with its own
data-link address, system A can store the link-layer address in its cache memory. The rest of
the nine frames are unicast. Since processing broadcast frames is expensive (time
consuming), the second method is preferable.
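
To make the caching idea concrete, here is a hedged Python sketch (not from the text) of an
ARP-style cache: resolve() returns the stored link-layer address if present and falls back to a
(simulated) broadcast request only on a miss. The names ArpCache and broadcast_request
are hypothetical.

class ArpCache:
    def __init__(self):
        self.table = {}                      # IP address -> link-layer address

    def resolve(self, ip, broadcast_request):
        """Return the link-layer address for ip, broadcasting only on a cache miss."""
        if ip in self.table:                 # hit: no broadcast frame is needed
            return self.table[ip]
        mac = broadcast_request(ip)          # miss: one broadcast ARP request
        self.table[ip] = mac                 # remember the reply for later datagrams
        return mac

# Simulated reply from system B (N2 maps to L2 as in the example of the text).
cache = ArpCache()
lookup = lambda ip: {"N2": "L2"}[ip]
print(cache.resolve("N2", lookup))   # first datagram: broadcast, then cached
print(cache.resolve("N2", lookup))   # remaining datagrams: answered from the cache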

Packet Format

Figure shows the format of an ARP packet. The destination hardware address and destination
protocol address fields define the receiver link-layer and network-layer addresses. An ARP
packet is encapsulated directly into a data-link frame. The frame needs to have a field to
show that the payload belongs to ARP and not to the network-layer datagram.

fig: ARP packet

Example : A host with IP address N1 and MAC address L1 has a packet to send to another host
with IP address N2 and physical address L2 (which is unknown to the first host). The two hosts
are on the same network. Figure shows the ARP request and response messages


DATA LINK CONTROL

DLC SERVICES
The data link control (DLC) deals with procedures for
communication between two adjacent nodes—node-to-node
communication—no matter whether the link is dedicated or
broadcast. Data link control functions include framing and flow
and error control.

Framing

Data transmission in the physical layer means moving bits in the form of a signal from the
source to the destination. The physical layer provides bit synchronization to ensure that the
sender and receiver use the same bit durations and timing.

The data-link layer, on the other hand, needs to pack bits into frames, so that each frame is
distinguishable from another.

Framing in the data-link layer separates a message from one source to a destination by
adding a sender address and a destination address. The destination address defines where the
packet is to go; the sender address helps the recipient acknowledge the receipt.

Although the whole message could be packed in one frame, that is not normally done. One
reason is that a frame can be very large, making flow and error control very inefficient. When
a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole frame. When a message is divided into smaller frames, a
single-bit error affects only that small frame.

Frame Size

Frames can be of fixed or variable size.

Fixed-size framing: There is no need for defining the boundaries of the frames; the size itself
can be used as a delimiter. An example of this type of framing is the ATM (Asynchronous
Transfer Mode) WAN, which uses frames of fixed size called cells.

Variable-size framing: Prevalent in local-area networks. In variable-size framing, we need a
way to define the end of one frame and the beginning of the next. Two approaches were used
for this purpose: a character-oriented approach and a bit-oriented approach.

Character-Oriented Framing

In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters from a
coding system such as ASCII. The header, which normally carries the source and destination
addresses and other control information, and the trailer, which carries error detection
redundant bits, are also multiples of 8 bits.

To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the
end of a frame. The flag, composed of protocol-dependent special characters, signals the start
or end of a frame. The figure shows the format of a frame in a character-oriented protocol.

fig: A frame in a character-oriented protocol

Character-oriented framing was popular when only text was exchanged by the data-link layers.
The flag could be selected to be any character not used for text communication. Now,
however, we send other types of information such as graphs, audio, and video; any character
used for the flag could also be part of the information. If this happens, the receiver, when it
encounters this pattern in the middle of the data, thinks it has reached the end of the frame.

To fix this problem, a byte-stuffing strategy was added to character-oriented framing. In byte
stuffing (or character stuffing), a special byte is added to the data section of the frame when
there is a character with the same pattern as the flag. The data section is stuffed with an extra
byte. This byte is usually called the escape character (ESC) and has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not as a delimiting flag. The figure shows the situation.

fig: Byte stuffing and unstuffing

Byte stuffing by the escape character allows the presence of the flag in the data section of the
frame, but it creates another problem. What happens if the text contains one or more escape
characters followed by a byte with the same pattern as the flag?

The receiver removes the escape character, but keeps the next byte, which is
incorrectly interpreted as the end of the frame. To solve this problem, the escape characters
that are part of the text must also be marked by another escape character. In other words, if
the escape character is part of the text, an extra one is added to show that the second one is
part of the text.
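
The following hedged Python sketch (not from the text) illustrates byte stuffing and
unstuffing; the FLAG and ESC byte values are assumptions chosen only for the example.

FLAG = 0x7E   # example flag byte (assumption for illustration)
ESC  = 0x7D   # example escape byte (assumption for illustration)

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC before every FLAG or ESC byte in the data section."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)        # mark the next byte as data, not a delimiter
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove stuffed ESC bytes; the byte after an ESC is always data."""
    out, skip = bytearray(), False
    for b in stuffed:
        if skip:
            out.append(b); skip = False
        elif b == ESC:
            skip = True            # drop the ESC, keep the following byte
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data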

Character-oriented protocols present another problem in data communications. The universal


coding systems in use today, such as Unicode, have 16-bit and 32-bit characters that conflict
with 8-bit characters. We can say that, in general, the tendency is moving toward the
bit- oriented protocols that we discuss next.

Bit-Oriented Framing

In bit-oriented framing, the data section of a frame is a sequence of bits to be interpreted by
the upper layer as text, graphic, audio, video, and so on. However, in addition to headers (and
possible trailers), we still need a delimiter to separate one frame from the other. Most
protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the beginning
and the end of the frame, as shown in the figure.


fig: A frame in bit-oriented protocol

If the flag pattern appears in the data, we need to somehow inform the receiver that this is not
the end of the frame. We do this by stuffing 1 single bit (instead of 1 byte) to prevent the
pattern from looking like a flag. The strategy is called bit stuffing. In bit stuffing, if a 0 and
five consecutive 1 bits are encountered, an extra 0 is added. This extra stuffed bit is
eventually removed from the data by the receiver. Note that the extra bit is added after one 0
followed by five 1s regardless of the value of the next bit. This guarantees that the flag field
sequence does not inadvertently appear in the frame.

Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in
the data, so that the receiver does not mistake the pattern 01111110 for a flag.

The figure shows bit stuffing at the sender and bit removal at the receiver. Note that even if
we have a 0 after five 1s, we still stuff a 0. The 0 will be removed by the receiver. This means
that if the flag-like pattern 01111110 appears in the data, it will change to 011111010
(stuffed) and is not mistaken for a flag by the receiver. The real flag 01111110 is not stuffed
by the sender and is recognized by the receiver.

fig: Bit stuffing and unstuffing
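
As a companion to byte stuffing, a hedged Python sketch (not from the text) of bit stuffing
over a string of '0'/'1' characters: after a 0 followed by five consecutive 1s, a 0 is inserted
regardless of the next bit.

def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:               # five 1s seen: stuff a 0
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False
            run = 0
            continue               # this bit is the stuffed 0; drop it
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "01111110"                  # flag-like pattern inside the data
assert bit_stuff(data) == "011111010"
assert bit_unstuff(bit_stuff(data)) == data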

Flow and Error Control


One of the responsibilities of the data-link control sublayer is flow and error control at the
data-link layer.


Flow Control: Whenever an entity produces items and another entity consumes them, there
should be a balance between production and consumption rates. If the items are produced
faster than they can be consumed, the consumer can be overwhelmed and may need to discard
some items. If the items are produced more slowly than they can be consumed, the consumer
must wait, and the system becomes less efficient. Flow control is related to the first issue.
We need to prevent losing the data items at the consumer site.

fig: Flow control at the data link layer

The figure shows that the data-link layer at the sending node tries to push frames toward the
data-link layer at the receiving node. If the receiving node cannot process and deliver
the packet to its network at the same rate that the frames arrive, it becomes overwhelmed
with frames. Flow control in this case can be feedback from the receiving node to the sending
node to stop or slow down pushing frames.

Buffers

Although flow control can be implemented in several ways, one of the solutions is normally to
use two buffers; one at the sending data-link layer and the other at the receiving
data-link layer. A buffer is a set of memory locations that can hold packets at the sender and
receiver. The flow control communication can occur by sending signals from the
consumer to the producer. When the buffer of the receiving data-link layer is full, it informs
the sending data- link layer to stop pushing frames.

Error Control

Since the underlying technology at the physical layer is not fully reliable, we need to
implement error control at the data-link layer to prevent the receiving node from
delivering corrupted packets to its network layer. Error control at the data-link layer is
normally very simple and implemented using one of the following two methods. In both
methods, a CRC is added to the frame trailer by the sender and checked by the receiver.

❑In the first method, if the frame is corrupted, it is silently discarded; if it is not corrupted, the
packet is delivered to the network layer. This method is used mostly in wired LANs such as
Ethernet.


❑In the second method, if the frame is corrupted, it is silently discarded; if it is not corrupted,
an acknowledgment is sent (for the purpose of both flow and error control) to the sender.

Combination of Flow and Error Control

Flow and error control can be combined. In a simple situation, the acknowledgment that is
sent for flow control can also be used for error control to tell the sender the packet has
arrived uncorrupted. The lack of acknowledgment means that there is a problem in the sent
frame. We show this situation when we discuss some simple protocols in the next section. A
frame that carries an acknowledgment is normally called an ACK to distinguish it from the
data frame.

Connectionless and Connection-Oriented


A DLC protocol can be either connectionless or connection-oriented.

Connectionless Protocol

In a connectionless protocol, frames are sent from one node to the next without any
relationship between the frames; each frame is independent. Note that the term
connectionless here does not mean that there is no physical connection (transmission
medium) between the nodes; it means that there is no connection between frames. The frames
are not numbered and there is no sense of ordering. Most of the data-link protocols for LANs
are connectionless protocols.

Connection-Oriented Protocol

In a connection-oriented protocol, a logical connection should first be established between
the two nodes (setup phase). After all frames that are somehow related to each other are
transmitted (transfer phase), the logical connection is terminated (teardown phase). In this
type of communication, the frames are numbered and sent in order. If they are not received
in order, the receiver needs to wait until all frames belonging to the same set are received and
then deliver them in order to the network layer. Connection-oriented protocols are rare in
wired LANs, but we can see them in some point-to-point protocols, some wireless LANs,
and some WANs.

DATA-LINK LAYER PROTOCOLS

Traditionally four protocols have been defined for the data-link layer to deal with flow and
error control: Simple, Stop-and-Wait, Go-Back-N, and Selective-Repeat. The behaviour of
these protocols can be described with a finite state machine (FSM). An FSM is thought of as
a machine with a finite number of states. The figure shows a machine with three states.
There are only three possible events and three possible actions. The machine starts in state I.
If event 1 occurs, the machine performs actions 1 and 2 and moves to state II. When the
machine is in state II, two events may occur. If event 1 occurs, the machine performs action 3
and remains in the same state, state II. If event 3 occurs, the machine performs no action, but
moves to state I.

fig: Connectionless and connection oriented service represented as FSMs

Simple Protocol
Our first protocol is a simple protocol with neither flow nor error control. We assume that
the receiver can immediately handle any frame it receives. In other words, the receiver can
never be overwhelmed with incoming frames. Figure shows the layout for this protocol.

fig: Simple Protocol

The data-link layer at the sender gets a packet from its network layer, makes a frame out of it,
and sends the frame. The data-link layer at the receiver receives a frame from the link, extracts
the packet from the frame, and delivers the packet to its network layer. The data-link layers of
the sender and receiver provide transmission services for their network layers.

FSMs The sender site should not send a frame until its network layer has a message to send.
The receiver site cannot deliver a message to its network layer until a frame arrives. We can
show these requirements using two FSMs. Each FSM has only one state, the ready state.


The sending machine remains in the ready state until a request comes from the process in the
network layer. When this event occurs, the sending machine encapsulates the message in a
frame and sends it to the receiving machine.

The receiving machine remains in the ready state until a frame arrives from the sending
machine. When this event occurs, the receiving machine decapsulates the message out of the
frame and delivers it to the process at the network layer. The figure shows the FSMs for the
simple protocol.

fig: FSMs for the simple protocol

Example

The flow diagram below shows an example of communication using this protocol. It is very
simple. The sender sends frames one after another without even thinking about the receiver.

fig: Flow diagram for the above example


Stop-and-Wait Protocol
Our second protocol is called the Stop-and-Wait protocol, which uses both flow and error
control.

In this protocol, the sender sends one frame at a time and waits for an acknowledgment
before sending the next one. To detect corrupted frames, we need to add a CRC to each data
frame. When a frame arrives at the receiver site, it is checked. If its CRC is incorrect, the
frame is corrupted and silently discarded. The silence of the receiver is a signal for the sender
that a frame was either corrupted or lost.

Every time the sender sends a frame, it starts a timer. If an acknowledgment arrives before
the timer expires, the timer is stopped and the sender sends the next frame (if it has one to
send). If the timer expires, the sender resends the previous frame, assuming that the frame
was either lost or corrupted. This means that the sender needs to keep a copy of the frame
until its acknowledgment arrives.

When the corresponding acknowledgment arrives, the sender discards the copy and sends the
next frame if it is ready. The figure shows the outline for the Stop-and-Wait protocol. Note
that only one frame and one acknowledgment can be in the channels at any time.

fig: Stop-and-Wait protocol


fig: FSMs for Stop-and-Wait protocol.

FSMs: The figure shows the FSMs for the primitive Stop-and-Wait protocol. We describe the
sender and receiver states below.


Sender States. The sender is initially in the ready state, but it can move between the ready and
blocking state.

❑Ready State. When the sender is in this state, it is only waiting for a packet from the network
layer. If a packet comes from the network layer, the sender creates a frame, saves a copy of
the frame, starts the only timer and sends the frame. The sender then moves to the blocking
state.

❑Blocking State. When the sender is in this state, three events can occur:

•If a time-out occurs, the sender resends the saved copy of the frame and restarts the timer.

•If a corrupted ACK arrives, it is discarded.

•If an error-free ACK arrives, the sender stops the timer and discards the saved copy of the
frame. It then moves to the ready state.


Receiver

The receiver is always in the ready state. Two events may occur:

•If an error-free frame arrives, the message in the frame is delivered to the network layer and
an ACK is sent.

•If a corrupted frame arrives, the frame is discarded.

Example

The flow diagram for this example is shown below. The first frame is sent and
acknowledged. The second frame is sent, but lost. After time-out, it is resent. The third frame
is sent and acknowledged, but the acknowledgment is lost. The frame is resent. However,
there is a problem with this scheme. The network layer at the receiver site receives two
copies of the third packet, which is not right. In the next section, we will see how we can
correct this problem using sequence numbers and acknowledgment numbers.

Sequence and Acknowledgment Numbers

The problem in the above example needs to be addressed and corrected. Duplicate packets,
as much as corrupted packets, need to be avoided. We need to add sequence numbers to the
data frames and acknowledgment numbers to the ACK frames. However, numbering in this
case is very simple. Sequence numbers are 0, 1, 0, 1, 0, 1, . . . ; the acknowledgment numbers
can be 1, 0, 1, 0, 1, 0, . . . In other words, the sequence numbers start with 0, the
acknowledgment numbers start with 1. An acknowledgment number always defines the
sequence number of the next frame to receive.
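
The hedged Python sketch below (not from the text, a simulation only) shows the alternating
0/1 numbering: the receiver delivers a frame only if its sequence number matches the
expected one, and the ACK carries the number of the next frame to receive, so a
retransmitted duplicate is acknowledged again but not delivered twice.

class StopWaitReceiver:
    def __init__(self):
        self.expected = 0                    # next sequence number to receive
        self.delivered = []                  # packets handed to the network layer

    def on_frame(self, seq, packet):
        """Process an arriving (uncorrupted) frame and return the ACK number."""
        if seq == self.expected:
            self.delivered.append(packet)    # in-order frame: deliver it
            self.expected = 1 - self.expected
        # duplicates are acknowledged again but not delivered
        return self.expected                 # ACK = next frame to receive

rx = StopWaitReceiver()
print(rx.on_frame(0, "packet-1"))   # ACK 1
print(rx.on_frame(1, "packet-2"))   # ACK 0
print(rx.on_frame(1, "packet-2"))   # duplicate (lost ACK case): ACK 0 again
print(rx.delivered)                 # ['packet-1', 'packet-2'], no duplicate delivery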


fig: flow diagram for example

example

Figure below shows how adding sequence numbers and acknowledgment numbers can
prevent duplicates. The first frame is sent and acknowledged. The second frame is sent, but
lost. After time-out, it is resent. The third frame is sent and acknowledged, but the
acknowledgment is lost. The frame is resent

FSMs with Sequence and Acknowledgment Numbers

We can change the FSM in Figure(FSM for stop and wait protocol) to include the sequence
and acknowledgment numbers.


Piggybacking

The two protocols discussed in this section are designed for unidirectional communication, in which
data is flowing only in one direction although the acknowledgment may travel in the other direction.
Protocols have been designed in the past to allow data to flow in both directions. However, to make
the communication more efficient, the data in one direction is piggybacked with the acknowledgment
in the other direction. In other words, when node A is sending data to node B, node A also
acknowledges the data received from node B. Because piggybacking makes communication at the
data-link layer more complicated, it is not a common practice.


MODULE 2
Media Access Control (MAC)

When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link.
Many protocols have been devised to handle access to a shared link. All of these
protocols belong to a sub layer in the data-link layer called media access control (MAC).

RANDOM ACCESS
In random-access or contention methods, no station is superior to another station and
none is assigned control over another.
At each instance, a station that has data to send uses a procedure defined by the protocol
to make a decision on whether or not to send.
This decision depends on the state of the medium (idle or busy). In other words,
each station can transmit when it desires on the condition that it follows the
predefined procedure, including testing the state of the medium.
Two features give this method its name. First, there is no scheduled time for a station to
transmit. Transmission is random among the stations. That is why these methods
are called random access. Second, no rules specify which station should send next.
Stations compete with one another to access the medium. That is why these
methods are also called contention methods.


In a random-access method, each station has the right to the medium without being
controlled by any other station. However, if more than one station tries to send, there is an
access conflict “collision” and the frames will be either destroyed or modified.
The random-access methods have evolved from a very interesting protocol known as
ALOHA, which used a very simple procedure called multiple access (MA).
The method was improved with the addition of a procedure that forces the station to sense
the medium before transmitting. This was called carrier sense multiple access (CSMA).
CSMA method later evolved into two parallel methods: carrier sense multiple access
with collision detection (CSMA/CD), which tells the station what to do when a collision is
detected, and carrier sense multiple access with collision avoidance (CSMA/CA),
which tries to avoid the collision.

ALOHA
ALOHA, the earliest random access method, was developed at the University of Hawaii in
early 1970. It was designed for a radio (wireless) LAN, but it can be used on any
shared medium.
The medium is shared between the stations. When a station sends data, another station may
attempt to do so at the same time. The data from the two stations collide and become garbled.

Pure ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple but elegant protocol.
The idea is that each station sends a frame whenever it has a frame to send (multiple access).
However, since there is only one channel to share, there is the possibility of collision between
frames from different stations.

Figure 1 below shows an example of frame collisions in pure ALOHA


Figure 1: Frames in pure ALOHA network


There are four stations (an unrealistic assumption) that contend with one another for access
to the shared channel. The above figure shows that each station sends two frames; there are a
total of eight frames on the shared medium. Some of these frames collide because multiple
frames are in contention for the shared channel. Figure 1 shows that only two frames survive:
one frame from station 1 and one frame from station 3. If one bit of a frame coexists on the
channel with one bit from another frame, there is a collision and both will be destroyed. It is
obvious that the frames that have been destroyed during transmission have to be resent.

The pure ALOHA protocol relies on acknowledgments from the receiver. When a station
sends a frame, it expects the receiver to send an acknowledgment. If the acknowledgment
does not arrive after a time-out period, the station assumes that the frame (or the
acknowledgment) has been destroyed and resends the frame.

A collision involves two or more stations. If all these stations try to resend their frames after
the time-out, the frames will collide again. Pure ALOHA dictates that when the time-out
period passes, each station waits a random amount of time before resending its frame. The
randomness will help avoid more collisions. This time is called the backoff time TB.

Pure ALOHA has a second method to prevent congesting the channel with retransmitted
frames. After a maximum number of retransmission attempts Kmax, a station must give up
and try later. Figure 3 shows the procedure for pure ALOHA based on the above strategy.

The time-out period is equal to the maximum possible round-trip propagation delay,
which is twice the amount of time required to send a frame between the two most widely
separated stations (2 × Tp).
The backoff time TB is a random value that normally depends on K (the number of
attempted unsuccessful transmissions).
In this method, for each retransmission, a multiplier R = 0 to 2^K − 1 is randomly chosen and
multiplied by Tp (maximum propagation time) or Tfr (the average time required to send out a
frame) to find TB.

Note: The range of the random numbers increases after each collision. The value of Kmax is
usually chosen as 15.

Figure 3: Procedure for pure ALOHA protocol

PROBLEM 1
The stations on a wireless ALOHA network are a maximum of 600 km apart. If we assume
that signals propagate at 3 × 10^8 m/s, find Tp. Assuming K = 2, find the range of R.
Solution: Tp = (600 × 10^3) / (3 × 10^8) = 2 ms.
The range of R is R = 0 to 2^K − 1 = {0, 1, 2, 3}.
This means that TB can be 0, 2, 4, or 6 ms, based on the outcome of
the random variable R.
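
A hedged Python sketch (not from the text) that reproduces the numbers of Problem 1; the
function name backoff_choices is hypothetical.

def backoff_choices(K, Tp):
    """Possible backoff times TB = R * Tp for R in {0, 1, ..., 2^K - 1}."""
    return [r * Tp for r in range(2 ** K)]

Tp = (600e3) / (3e8)                                  # propagation time in seconds
print(Tp * 1e3)                                       # 2.0 ms
print([tb * 1e3 for tb in backoff_choices(2, Tp)])    # [0.0, 2.0, 4.0, 6.0] ms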

Vulnerable time
The vulnerable time is the length of time in which there is a possibility of collision. The
stations send fixed-length frames, with each frame taking Tfr seconds to send. Figure 4 shows
the vulnerable time for station B.

Figure 4: Vulnerable time for pure ALOHA protocol

Station B starts to send a frame at time t. Imagine station A has started to send its frame after
t − Tfr. This leads to a collision between the frames from station B and station A. On the
other hand, suppose that station C starts to send a frame before time t + Tfr. There is also a
collision between the frames from station B and station C.
From Figure 4, it can be seen that the vulnerable time during which a collision may occur in
pure ALOHA is 2 times the frame transmission time.
Pure ALOHA vulnerable time = 2 × Tfr

PROBLEM 2

A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is
the requirement to make this frame collision-free?
Solution: Average frame transmission time Tfr is 200 bits / 200 kbps or 1 ms.
The vulnerable time is 2 × 1 ms = 2 ms.
This means no station should start sending later than 1 ms before this station starts
transmission and no station should start sending during the period (1 ms) that this station is
sending.


Throughput
G = the average number of frames generated by the system during one frame transmission
time (Tfr).
S = the average number of successfully transmitted frames. For pure ALOHA it is given by
S = G × e^(−2G). -------------------------(1)
Differentiating equation (1) with respect to G and equating it to 0 gives G = 1/2. Substituting
G = 1/2 in equation (1) gives Smax.
The maximum throughput Smax = 0.184.
If one-half a frame is generated during one frame transmission time (one frame during two
frame transmission times), then 18.4 percent of these frames reach their destination
successfully.
G is set to G = 1/2 to produce the maximum throughput because the vulnerable time is 2
times the frame transmission time. Therefore, if a station generates only one frame in this
vulnerable time (and no other stations generate a frame during this time), the frame will reach
its destination successfully.

NOTE: The throughput for pure ALOHA is S = G × e^(−2G).
The maximum throughput Smax = 1/(2e) = 0.184 when G = 1/2.

PROBLEM 3
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is
the throughput if the system (all stations together) produces
a. 1000 frames per second?
b. 500 frames per second?
c. 250 frames per second?
Solution: The frame transmission time Tfr is 200 bits / 200 kbps or 1 ms.
(a) If the system creates 1000 frames per second, or 1 frame per millisecond (1 s = 1000 ms),
then G = 1 (because G = number of frames generated during one Tfr).
S = G × e^(−2G) = 0.135 (13.5 percent). This means that the throughput is
1000 × 0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
(b) If the system creates 500 frames per second, or 1/2 frame per millisecond, then G = 1/2.
S = G × e^(−2G) = 0.184 (18.4 percent). This means that the throughput is
500 × 0.184 = 92 frames. Only 92 frames out of 500 will probably survive.
(c) If the system creates 250 frames per second, or 1/4 frame per millisecond, then G = 1/4.
S = G × e^(−2G) = 0.152 (15.2 percent). This means that the throughput is
250 × 0.152 = 38 frames. Only 38 frames out of 250 will probably survive.
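
A hedged Python sketch (not from the text) to verify the numbers in Problem 3;
aloha_throughput is a hypothetical helper implementing S = G × e^(−2G) for pure ALOHA
(the slotted case discussed later uses e^(−G) instead).

import math

def aloha_throughput(G, slotted=False):
    """Fraction of frames expected to survive: G*e^(-2G) pure, G*e^(-G) slotted."""
    return G * math.exp(-G if slotted else -2 * G)

for frames_per_sec, G in [(1000, 1.0), (500, 0.5), (250, 0.25)]:
    S = aloha_throughput(G)                  # pure ALOHA
    print(frames_per_sec, round(S, 3), round(frames_per_sec * S))
# 1000 0.135 135
# 500 0.184 92
# 250 0.152 38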

Slotted ALOHA
Pure ALOHA has a vulnerable time of 2 × Tfr. This is so because there is no rule that defines
when the station can send. A station may send soon after another station has started or just
before another station has finished. Slotted ALOHA was invented to improve the efficiency
of pure ALOHA.
In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to send
only at the beginning of the time slot. Figure 5 shows an example of frame collisions in
slotted ALOHA.

Figure 5: Frames in Slotted ALOHA network

A station is allowed to send only at the beginning of the synchronized time slot; if a station
misses this moment, it must wait until the beginning of the next time slot. This means that
the station which started at the beginning of this slot has already finished sending its frame.
There is still the possibility of collision if two stations try to send at the beginning of the
same time slot. However, the vulnerable time is now reduced to one-half, equal to Tfr.
Figure 6 shows the situation.


Figure 6: Vulnerable time for slotted ALOHA protocol


Slotted ALOHA vulnerable time = Tfr

Throughput
G = the average number of frames generated by the system during one frame transmission
time (Tfr).
S = the average number of successfully transmitted frames. For slotted ALOHA it is given
by S = G × e^(−G). -------------------------(1)
Differentiating equation (1) with respect to G and equating it to 0 gives G = 1. Substituting
G = 1 in equation (1) gives Smax.
The maximum throughput Smax = 0.368.
If one frame is generated during one frame transmission time, then 36.8 percent of these
frames reach their destination successfully.
G is set to G = 1 to produce the maximum throughput because the vulnerable time is equal to
the frame transmission time. Therefore, if a station generates only one frame in this
vulnerable time (and no other stations generate a frame during this time), the frame will reach
its destination successfully.

NOTE: The throughput for slotted ALOHA is S = G × e^(−G).
The maximum throughput Smax = 1/e = 0.368 when G = 1.

PROBLEM 4
A slotted ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What
is the throughput if the system (all stations together) produces
a. 1000 frames per second?
b. 500 frames per second?
c. 250 frames per second?
Solution: The frame transmission time Tfr is 200 bits / 200 kbps or 1 ms.
(a) If the system creates 1000 frames per second, or 1 frame per millisecond (1 s = 1000 ms),
then G = 1 (because G = number of frames generated during one Tfr).
S = G × e^(−G) = 0.368 (36.8 percent). This means that the throughput is
1000 × 0.368 = 368 frames. Only 368 frames out of 1000 will probably survive.
(b) If the system creates 500 frames per second, or 1/2 frame per millisecond, then G = 1/2.
S = G × e^(−G) = 0.303 (30.3 percent). This means that the throughput is
500 × 0.303 = 151 frames. Only 151 frames out of 500 will probably survive.
(c) If the system creates 250 frames per second, or 1/4 frame per millisecond, then G = 1/4.
S = G × e^(−G) = 0.195 (19.5 percent). This means that the throughput is
250 × 0.195 = 49 frames. Only 49 frames out of 250 will probably survive.

CSMA
To minimize the chance of collision and, therefore, increase the performance, the CSMA
method was developed. The chance of collision can be reduced if a station senses the
medium before trying to use it.
Carrier sense multiple access (CSMA) requires that each station first listen to the
medium (or check the state of the medium) before sending.
CSMA is based on the principle “sense before transmit” or “listen before talk.”
CSMA can reduce the possibility of collision, but it cannot eliminate it. The reason for this is
shown in Figure 7, a space and time model of a CSMA network. Stations are connected to a
shared channel.
The possibility of collision still exists because of propagation delay; when a station sends a
frame, it still takes time (although very short) for the first bit to reach every station and for
every station to sense it.
A station may sense the medium and find it idle, only because the first bit sent by another
station has not yet been received.


Figure 7: Space/time model of a collision in CSMA

At time t1, station B senses the medium and finds it idle, so it sends a
frame. At time t2 (t2 > t1), station C senses the medium and finds it idle
because, at this time, the first bits from station B have not reached
station C. Station C also sends a frame. The two signals collide and both
frames are destroyed.
Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the
time needed for a signal to propagate from one end of the medium to the
other.
When a station sends a frame and any other station tries to send a frame
during this time, a collision will result.
But if the first bit of the frame reaches the end of the medium, every
station will already have heard the bit and will refrain from sending.
Figure 8 below shows the worst case. The leftmost station, A, sends a
frame at time t1, which reaches the rightmost station, D, at time t1
+ Tp. The gray area shows the vulnerable area in time and space.

Figure 8: Vulnerable time in CSMA


Persistence Methods
A persistence method determines what a station has to do when it senses that the channel is
idle or busy. There are three persistence methods:
1. 1-persistent method
2. Non-persistent method
3. p-persistent method
Figure 9 shows the behaviour of the three persistence methods when a station finds a channel
busy.

Figure 9: Behaviour of three persistence methods


1-Persistent
The 1-persistent method is simple and straightforward.
After the station finds the line idle, it sends its frame immediately
(with probability 1).
This method has the highest chance of collision because two or more
stations may find the line idle and send their frames immediately.

Non-persistent
In the non-persistent method, a station that has a frame to send senses the line. If the line is
idle, it sends immediately. If the line is not idle, it waits a random amount of time and then
senses the line again.
The non-persistent approach reduces the chance of collision because it is unlikely that two or
more stations will wait the same amount of time and retry to send simultaneously.
However, this method reduces the efficiency of the network because the medium remains
idle when there may be stations with frames to send.

Figure 10: Flow diagram for three persistence methods


p-Persistent
The p-persistent method is used if the channel has time slots with a slot duration equal to or
greater than the maximum propagation time.
The p-persistent approach combines the advantages of the other two strategies. It reduces the
chance of collision and improves efficiency. In this method, after the station finds the line idle
it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 − p, the station waits for the beginning of the next time slot
and checks the line again.
(a)If the line is idle, it goes to step 1.
(b)If the line is busy, it acts as though a collision has occurred and uses the backoff
procedure.
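
A hedged Python sketch (not from the text) of the p-persistent decision described above;
channel_idle, wait_for_next_slot, send_frame, and backoff are hypothetical callbacks
supplied by the caller.

import random

def p_persistent_send(p, channel_idle, wait_for_next_slot, send_frame, backoff):
    """Apply the p-persistent rules once the line has been found idle."""
    while True:
        if random.random() < p:          # step 1: send with probability p
            send_frame()
            return
        wait_for_next_slot()             # step 2: with probability 1 - p, wait one slot
        if not channel_idle():           # busy again: behave as if a collision occurred
            backoff()
            return
        # line is still idle: go back to step 1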


CSMA/CD
The CSMA method does not specify the procedure following a collision. Carrier sense
multiple access with collision detection (CSMA/CD) augments the algorithm to handle the
collision. A station monitors the medium after it sends a frame to see if the transmission was
successful. Figure 11 shows the first bits transmitted by the two stations involved in the
collision; each station continues to send bits in the frame until it detects the collision. In
Figure 11, stations A and C are involved in the collision.

Figure 11: Collision of the first bits in CSMA/CD

At time t1, station A has executed its persistence procedure and starts sending the bits of
its frame. At time t2, station C has not yet sensed the first bit sent by A.
Station C executes its persistence procedure and starts sending the bits in its frame,
which propagate both to the left and to the right.
The collision occurs sometime after time t2. Station C detects a collision at time t3 when
it receives the first bit of A’s frame. Station C immediately aborts transmission.
Station A detects collision at time t4 when it receives the first bit of C’s frame, it also
immediately aborts transmission.

From Figure 11, A transmits for the duration t4 − t1, and C transmits for the duration t3 − t2.


Figure 12: Collision and abortion in CSMA/CD


Minimum Frame Size
Before sending the last bit of the frame, the sending station must detect a collision, if any,
and abort the transmission. Once the entire frame is sent, the station does not keep a copy of
the frame and does not monitor the line for collision detection. Therefore, the frame
transmission time Tfr must be at least two times the maximum propagation time Tp. If the
two stations involved in a collision are the maximum distance apart, the signal from the first
takes time Tp to reach the second, and the effect of the collision takes another time Tp to
reach the first. So the requirement is that the first station must still be transmitting after 2Tp.

PROBLEM:
A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation time
(including the delays in the devices and ignoring the time needed to send a jamming signal)
is 25.6 µs, what is the minimum size of the frame?
Solution:
The minimum frame transmission time is Tfr = 2 × Tp = 51.2 µs.
This means, in the worst case, a station needs to transmit for a period of 51.2 µs to detect the
collision.
The minimum size of the frame is bandwidth × Tfr = 10 Mbps × 51.2 µs = 512 bits or 64
bytes. This is actually the minimum size of the frame for Standard Ethernet.
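
A hedged Python sketch (not from the text) reproducing the arithmetic of this problem;
min_frame_size is a hypothetical helper.

def min_frame_size(bandwidth_bps, max_prop_time_s):
    """Minimum frame length (bits) so that Tfr >= 2 * Tp in CSMA/CD."""
    t_fr = 2 * max_prop_time_s           # worst-case time needed to detect a collision
    return bandwidth_bps * t_fr

bits = min_frame_size(10e6, 25.6e-6)
print(bits, bits / 8)                    # 512.0 bits, 64.0 bytes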

Figure 13: Flow diagram for the CSMA/CD


The flow diagram for CSMA/CD is as shown in Figure 13. It is
similar to the one for the ALOHA protocol, but there are
differences.
1.The first difference is the addition of the persistence process. It is
required to sense the
channel before sending the frame by using one of the
persistence processes (non persistent, 1 persistent, or p-persistent).
2.The second difference is the frame transmission. In ALOHA, there
is transmission of the entire frame and then wait for an
acknowledgment. In CSMA/CD, transmission and collision
detection are continuous processes.
• It is not like the entire frame is sent and then a check is made for a collision. The station
transmits and receives continuously and simultaneously (using two different ports or a
bidirectional port).
• A loop is used to show that transmission is a continuous process. It is constantly monitored
in order to detect one of two conditions: either transmission is finished or a collision is
detected.
• Either event stops transmission. When it comes out of the loop, if a collision has not been
detected, it means that transmission is complete; the entire frame is transmitted. Otherwise, a
collision has occurred.
3. The third difference is the sending of a short jamming signal to make sure that all other
stations become aware of the collision.

Energy Level
The level of energy in a channel can have three values:
1) Zero level: the channel is idle.
2) Normal level: a station has successfully captured the channel and is sending its frame.
3) Abnormal level: there is a collision and the level of the energy is twice the normal level.

Figure 14: Energy level during transmission, idleness, or collision

NOTE: A station that has a frame to send or is sending a frame needs to monitor the energy
level to determine if the channel is idle, busy, or in collision mode.

Throughput
The throughput of CSMA/CD is greater than that of pure or slotted ALOHA.
The maximum throughput occurs at a different value of G and is based on
the persistence method and the value of p in the p-persistent approach.
For the 1-persistent method, the maximum throughput is around 50 percent
when G = 1. For the non persistent method, the maximum throughput can go
up to 90 percent when G is between 3 and 8.


CSMA/CA
Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for
wireless networks. Collisions are avoided through the use of CSMA/CA's three strategies:
the interframe space, the contention window, and acknowledgments.

Interframe Space (IFS):
When an idle channel is found, the station does not send immediately. It waits for a period of
time called the interframe space or IFS. Even though the channel may appear idle when it is
sensed, a distant station may have already started transmitting. The distant station's signal
has not yet reached this station. The IFS time allows the front of the transmitted signal by the
distant station to reach this station. After waiting an IFS time, if the channel is still idle, the
station can send, but it still needs to wait a time equal to the contention window. The IFS
variable can also be used to prioritize stations or frame types. For example, a station that is
assigned a shorter IFS has a higher priority.

Contention Window
The contention window is an amount of time divided into slots. A station that is ready to
send chooses a random number of slots as its wait time. The number of slots in the window
changes according to the binary exponential backoff strategy. This means that it is set to one
slot the first time and then doubles each time the station cannot detect an idle channel after
the IFS time. This is very similar to the p-persistent method except that a random outcome
defines the number of slots taken by the waiting station.
One interesting point about the contention window is that the station needs to sense the
channel after each time slot. However, if the station finds the channel busy, it does not restart
the process; it just stops the timer and restarts it when the channel becomes idle.

Figure 16: Contention window
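
A hedged Python sketch (not from the text) of the binary exponential growth of the
contention window and the random slot choice; contention_slots is a hypothetical helper, and
the cap of 1023 slots is an assumption for the example, not a value given in the text.

import random

def contention_slots(failed_attempts, max_window=1023):
    """Window doubles with each failure; pick a random slot count from it."""
    window = min(2 ** failed_attempts, max_window)   # 1, 2, 4, 8, ... slots (assumed cap)
    return random.randint(1, window)

for attempt in range(4):
    print(attempt, contention_slots(attempt))        # window grows: 1, 2, 4, 8 slots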

Acknowledgement
Even with all the precautions considered, there still may be a collision resulting in destroyed
data. In addition, the data may be corrupted during the transmission. The positive
acknowledgment and the time-out timer can help guarantee that the receiver has received the
frame.

Frame Exchange Time Line
Figure 17 shows the exchange of data and control frames in time.

Figure 17: CSMA/CA and NAV


1.Before sending a frame, the source station senses the medium by checking the energy level
at the carrier frequency.
a. The channel uses a persistence strategy with backoff until the channel is idle.
b. After the channel is found to be idle, the station waits for a period of time called the
DCF interframe space (DIFS); then the station sends a control frame called the request
to send (RTS).
2.After receiving the RTS and waiting a period of time called the short inter frame space
(SIFS), the destination station sends a control frame, called the clear to send (CTS), to the
source station. This control frame indicates that the destination station is ready to
receive data.
3.The source station sends data after waiting an amount of time equal to SIFS.
4.The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. Acknowledgment is needed in
this protocol because the station does not have any means to check for the successful arrival of
its data at the destination. On the other hand, the lack of collision in CSMA/CD is a kind of
indication to the source that data have arrived.

NOTE: DIFS=DCF Inter frame space or Distributed Coordination Function Inter frame Space
time.
Network Allocation Vector
When a station sends an RTS frame, it includes the duration of time that it needs to
occupy the channel.
The stations that are affected by this transmission create a timer called a Network
Allocation Vector (NAV) that shows how much time must pass before these stations are
allowed to check the channel for idleness.
Each time a station accesses the system and sends an RTS frame, other stations start their
NAV. In other words, each station, before sensing the physical medium to see if it is idle, first
checks its NAV to see if it has expired. Figure 17 shows the idea of NAV.
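
A hedged Python sketch (not from the text) of the NAV idea: a station first checks whether
its NAV timer has expired before it even senses the physical medium. The class name
NavTimer and the use of time.monotonic() are assumptions for the example.

import time

class NavTimer:
    def __init__(self):
        self.busy_until = 0.0

    def set_from_rts(self, duration_s):
        """An overheard RTS carries the time the channel will be occupied; start the NAV."""
        self.busy_until = time.monotonic() + duration_s

    def may_sense_channel(self):
        """A station checks its NAV before sensing the physical medium."""
        return time.monotonic() >= self.busy_until

nav = NavTimer()
nav.set_from_rts(0.005)              # RTS reserving the medium for 5 ms
print(nav.may_sense_channel())       # False while the NAV has not expired
time.sleep(0.006)
print(nav.may_sense_channel())       # True after the reserved duration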

Collision during Handshaking

A collision may occur during the time when RTS or CTS control frames are in transition,
often called the handshaking period. Two or more stations may try to send RTS frames at the
same time. These control frames may collide. However, because there is no mechanism for
collision detection, the sender assumes there has been a collision if it has not received a CTS
frame from the receiver. The backoff strategy is employed, and the sender tries again.

Hidden-Station Problem
The solution to the hidden-station problem is the use of the handshake frames (RTS and
CTS). Figure 17 shows that the RTS message from A reaches B, but not C. Because both A
and C are within the range of B, the CTS message, which contains the duration of data
transmission from B to A, reaches C. Station C knows that some hidden station is using the
channel and refrains from transmitting until that duration is over.

CONTROLLED ACCESS
In controlled access, the stations consult one another to find which station has the right to
send. A station cannot send unless it has been authorized by other stations.
There are three controlled access methods,
1. Reservation.
2. Polling.
3. Token passing.

1. Reservation

Figure 18: Reservation access method


In the reservation method, a station needs to make a reservation
before sending data.
Time is divided into intervals. In each interval, a
reservation frame precedes the data frames sent in that interval.


If there are N stations in the system, there are exactly N reservation minislots in the
reservation frame. Each mini slot belongs to a station. When a station needs to send a data
frame, it makes a reservation in its own minislot. The stations that have made reservations can
send their data frames after the reservation frame.
Figure 18 shows a situation with five stations and a five-minislot reservation frame. In the
first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only
station 1 has made a reservation.
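The same situation can be expressed as a short sketch (illustrative only; the function name is not from the textbook): each station sets the bit in its own minislot, and the reservation frame then fixes the order in which data frames follow.

    def reservation_interval(reserving_stations, n_stations=5):
        # One minislot per station (stations numbered 1..n_stations).
        frame = [1 if s in reserving_stations else 0 for s in range(1, n_stations + 1)]
        # Stations that reserved send their data frames in station order.
        order = [s for s in range(1, n_stations + 1) if s in reserving_stations]
        return frame, order

    # First interval of Figure 18: stations 1, 3, and 4 made reservations.
    print(reservation_interval({1, 3, 4}))   # ([1, 0, 1, 1, 0], [1, 3, 4])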

2. Polling
Polling works with topologies in which one device is designated as a primary station and the
other devices are secondary stations.
All data exchanges must be made through the primary device even when the ultimate
destination is a secondary device. The primary device controls the link; the secondary
devices follow its instructions.
The primary device determines which device is allowed to use the channel at a given
time. The primary device, therefore, is always the initiator of a session.
This method uses poll and select functions to prevent collisions. However, the drawback is if
the primary station fails, the system goes down.

Figure 19: Select and poll functions in polling-access method


Select
The select function is used whenever the primary device has something to send. Since the primary controls the link, if it is neither sending nor receiving data, it knows the link is available.
If it has something to
send, the primary device sends it. The primary station has to confirm whether the target
device is prepared to receive.
The primary must alert
the secondary to the upcoming transmission and wait for an acknowledgment of the
secondary’s ready status. Before sending data, the primary creates and transmits a select
(SEL) frame, one field of which includes the address of the intended secondary.
Poll
The poll function is
used by the primary device to solicit transmissions from the secondary devices.
When the primary is ready
to receive data, it must ask (poll) each device in turn if it has anything to send. When the first
secondary is approached, it responds either with a NAK frame if it has nothing to send or with
data (in the form of a data frame) if it does.
If the response is negative
(a NAK frame), then the primary polls the next secondary in the same manner until it finds
one with data to send.
When the response is
positive (a data frame), the primary reads the frame and returns an acknowledgment (ACK
frame), verifying its receipt.
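A toy version of one polling round is sketched below (added for illustration; the dictionary of secondaries and the NAK/ACK handling are simplified and are not code from the textbook):

    def poll_round(secondaries):
        # The primary asks each secondary in turn whether it has data.
        received = []
        for name, queue in secondaries.items():
            if queue:                        # positive response: a data frame
                received.append((name, queue.pop(0)))
                # the primary would now return an ACK frame (omitted here)
            # empty queue: the secondary answers with a NAK and is skipped
        return received

    stations = {"B": ["frame1"], "C": [], "D": ["frame2", "frame3"]}
    print(poll_round(stations))              # [('B', 'frame1'), ('D', 'frame2')]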

3. Token Passing
In the token-passing
method, the stations in a network are organized in a logical ring. For
each station, there is a
predecessor and a successor. The predecessor is the station which is logically before the
station in the ring; the successor is the station which is after the station in the ring. The
current station is the one that is accessing the channel now .
The right to this access
has been passed from the predecessor to the current station. The right will be passed to the
successor when the current station has no more data to send.
In this method, a special packet called a token circulates through the ring. The possession of the token gives the station the right to access the channel and send its data.
When the station has no more data to send, it releases the token, passing it to the next logical
station in the ring. The station cannot send data until it receives the token again in the next
round. In this process, when a station receives the token and has no data to send, it just passes the token to the next station.
Token management is needed for this access method. Stations must be limited in the time
they can have possession of the token. The token must be monitored to ensure it has not been
lost or destroyed. For example, if a station that is holding the token fails, the token will
disappear from the network.
Another function of token management is to assign priorities to the stations and to
the types of data being transmitted. And finally, token management is needed to make low-
priority stations release the token to high-priority stations.
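The circulation of the token can be pictured with the toy round below (illustrative only; it ignores priorities, lost-token recovery, and the real IEEE procedures). The token visits each station in ring order, and a per-turn limit stands in for the token-holding time.

    from collections import deque

    def token_round(ring, queues, max_frames_per_turn=1):
        sent = []
        for station in ring:                             # the token travels around the logical ring
            q = queues[station]
            for _ in range(min(max_frames_per_turn, len(q))):
                sent.append((station, q.popleft()))      # send while holding the token
            # the station releases the token to its successor (next loop iteration)
        return sent

    queues = {"A": deque(["a1", "a2"]), "B": deque(), "C": deque(["c1"])}
    print(token_round(["A", "B", "C"], queues))          # [('A', 'a1'), ('C', 'c1')]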

Logical Ring
In a token-passing network, stations do not have to be physically connected in a ring; the ring
can be a logical one. Figure 20 shows four different physical topologies that can
create a
logical ring.

Figure 20: Logical ring and physical topology in token-passing access method


(a) Physical ring :


In the physical ring topology, when a station sends the token to its successor, the token cannot be seen by other stations; the successor is the next one in line. This means that the token does not have to carry the address of the next successor. The problem with this topology is that if one of the links (the medium between two adjacent stations) fails, the whole system fails.
(b) Dual ring:
The dual ring topology uses a second (auxiliary) ring which operates in the reverse
direction compared with the main ring. The second ring is for emergencies only (such as a
spare tire for a car).
If one of the links in the main ring fails, the system automatically combines the two rings
to form a temporary ring. After the failed link is restored, the auxiliary ring
becomes idle again.
Each station needs to have two transmitter ports and two receiver ports. The high-speed Token Ring networks called FDDI (Fiber Distributed Data Interface) and CDDI (Copper Distributed Data Interface) use this topology.
(c) Bus Ring:
In the bus ring topology, also called a token bus, the stations are connected to a single cable called a bus. They make a logical ring, because each station knows the address of its successor (and also its predecessor for token management purposes).
When a station has finished sending its data, it releases the token and inserts the address of its
successor in the token. Only the station with the address matching the destination address of the token gets
the token to access the shared media. The Token Bus LAN, standardized by IEEE, uses this
topology.
(d) Star Ring:
In a star ring topology, the physical topology is a star. There is a hub, however, that acts
as the connector.
The wiring inside the hub makes the ring; the stations are connected to this ring
through the two wire connections.
This topology makes the network less prone to failure because if a link goes down, it will
be bypassed by the hub and the rest of the stations can operate.
Adding and removing stations from the ring is easier. This topology is still used in the
Token Ring LAN designed by IBM.


MODULE 2 (continued..)

Wired LANs: Ethernet


ETHERNET PROTOCOL

A local area network (LAN) is a computer network that is designed for a limited geographic
area such as a building or a campus. Although a LAN can be used as an isolated network to
connect computers in an organization for the sole purpose of sharing resources, most LANs
today are also linked to a wide area network (WAN) or the Internet. Almost every
LAN except Ethernet has disappeared from the marketplace because Ethernet was able to
update itself to meet the needs of the time.

IEEE Project 802


In 1985, the Computer Society of the IEEE started a project, called Project 802, to set
standards to enable intercommunication among equipment from a variety of
manufacturers.
Project 802 does not seek to replace any part of the OSI model or TCP/IP protocol suite.
Instead, it is a way of specifying functions of the physical layer and the data-link layer of
major LAN protocols.
The relationship of the 802 Standard to the TCP/IP protocol suite is shown in Figure 1.
The IEEE has subdivided the data-link layer into two sub layers:
 Logical link control (LLC)
 Media access control (MAC)
IEEE has also created several physical-layer standards for different LAN protocols.

Figure 1: IEEE standard for LANs


Logical Link Control (LLC)


In IEEE Project 802, flow control, error control, and part of the framing duties are
collected into one sub layer called the logical link control (LLC). Framing is handled in both
the LLC sublayer and the MAC sublayer.
The LLC provides a single link-layer control protocol for all IEEE LANs. This
means LLC protocol can provide interconnectivity between different LANs because it makes
the MAC sub layer transparent.

Media Access Control (MAC)


IEEE Project 802 has created a sublayer called media access control that defines the
specific access method for each LAN. For example, it defines CSMA/CD as the media access
method for Ethernet LANs and defines the token-passing method for Token Ring and Token
Bus LANs.
Part of the framing function is also handled by the MAC layer.

Ethernet Evolution
The Ethernet LAN was developed in the 1970s by Robert Metcalfe and David Boggs. The
four generations of Ethernet are :
1. Standard Ethernet (10 Mbps)
2. Fast Ethernet (100 Mbps)
3. Gigabit Ethernet (1 Gbps) and
4. 10 Gigabit Ethernet (10 Gbps)

Figure 2: Ethernet evolution through four generations


STANDARD ETHERNET
Characteristics
1. Connectionless and Unreliable Service
Ethernet provides a connectionless service, which means each frame sent is
independent of the previous or next frame. Ethernet has no connection establishment or
connection termination phases.
The sender sends a frame whenever it has one; the receiver may or may not be ready for it.
The sender may overwhelm the receiver with frames, which may result in dropping frames.
If a frame drops, the sender will not know about it. Since IP, which is using the service of
Ethernet, is also connectionless, it will not know about it either. If the transport layer is
also a connectionless protocol, such as UDP, the frame is lost and salvation may only come
from the application layer. However, if the transport layer is TCP, the sender TCP does not
receive acknowledgment for its segment and sends it again.
Ethernet is also unreliable, like IP and UDP. If a frame is corrupted during transmission and the receiver finds out about the corruption (which is very likely to be detected because of the CRC-32), the receiver drops the frame silently. It is the duty of high-level protocols to find out about it.

2. Frame Format
The Ethernet frame contains seven fields, as shown in Figure 3

Figure 3: Ethernet frame


Preamble. This field contains 7 bytes (56 bits) of alternating 0s and 1s that alert
the receiving system to the coming frame and enable it to synchronize its clock if it’s out of
synchronization. The pattern provides only an alert and a timing pulse. The 56-bit pattern
allows the stations to miss some bits at the beginning of the frame. The preamble is
actually added at the physical layer and is not part of the frame.

Start frame delimiter (SFD). This field (1 byte: 10101011) signals the beginning of the
frame. The SFD warns the station or stations that this is the last chance for
synchronization. The last 2 bits are (11)2 and alert the receiver that the next field is the
destination address. This field is actually a flag that defines the beginning of the frame; since an Ethernet frame is a variable-length frame, it needs a flag to mark where the frame starts.
The SFD field is also added at the physical layer.

Destination address (DA). This field is six bytes (48 bits) and contains the link
layer address of the destination station or stations to receive the packet. When the receiver
sees its own link-layer address, or a multicast address for a group that the receiver is a
member of, or a broadcast address, it decapsulates the data from the frame and passes the data
to the upper layer protocol defined by the value of the type field.

Source address (SA). This field is also six bytes and contains the link-layer address of the
sender of the packet.

Type. This field defines the upper-layer protocol whose packet is encapsulated in
the frame. This protocol can be IP, ARP, OSPF, and so on. In other words, it serves the same
purpose as the protocol field in a datagram and the port number in a segment or user
datagram. It is used for multiplexing and demultiplexing.

Data. This field carries data encapsulated from the upper-layer protocols. It is a minimum
of 46 and a maximum of 1500 bytes. If the data coming from the upper layer is more than
1500 bytes, it should be fragmented and encapsulated in more than one frame. If it is less than
46 bytes, it needs to be padded with extra 0s. A padded data frame is delivered to the upper-
layer protocol as it is (without removing the padding), which means that it is the
responsibility of the upper layer to remove or, in the case of the sender, to add the padding. The upper-layer protocol needs to know the length of its data. For example,
a datagram has a field that defines the length of the data.
CRC. The last field contains error detection information, in this case a CRC-32. The CRC
is calculated over the addresses, types, and data field. If the receiver calculates the CRC and
finds that it is not zero (corruption in transmission), it discards the frame.
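As a rough illustration of the field order, the padding rule, and the CRC trailer described above, here is a small Python sketch (added for illustration, not from the textbook; zlib.crc32 uses the same CRC-32 polynomial as the Ethernet FCS, but the bit-ordering details of real hardware are glossed over, and the preamble/SFD are omitted because they are added at the physical layer):

    import struct, zlib

    def build_frame(dst, src, eth_type, payload):
        if len(payload) > 1500:
            raise ValueError("payload above 1500 bytes must be fragmented")
        if len(payload) < 46:
            payload = payload + b"\x00" * (46 - len(payload))   # pad with 0s
        header = dst + src + struct.pack("!H", eth_type)        # DA + SA + Type
        fcs = struct.pack("!I", zlib.crc32(header + payload))   # CRC over addresses, type, data
        return header + payload + fcs                           # 64..1518 bytes

    frame = build_frame(b"\xff" * 6,                            # broadcast destination
                        bytes.fromhex("4A301021101A"),          # source address from the notes
                        0x0800,                                 # IPv4 as the upper-layer protocol
                        b"hello")
    print(len(frame))                                           # 64, the minimum frame length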

3. Frame Length
Ethernet has imposed restrictions on both the minimum and maximum lengths of a
frame. The minimum length restriction is required for the correct operation of
CSMA/CD.
An Ethernet frame needs to have a minimum length of 512 bits or 64 bytes. Part of this
length is the header and the trailer. If we count 18 bytes of header and trailer (6 bytes of
source address, 6 bytes of destination address, 2 bytes of length or type, and 4 bytes of CRC),
then the minimum length of data from the upper layer is 64 − 18 = 46 bytes. If the upper-layer
packet is less than 46 bytes, padding is added to make up the difference.
The standard defines the maximum length of a frame (without preamble and SFD
field) as 1518 bytes. If we subtract the 18 bytes of header and trailer, the maximum length of
the payload is 1500 bytes.
The maximum length restriction has two historical reasons. First, memory was very
expensive when Ethernet was designed; a maximum length restriction helped to reduce
the size of the buffer. Second, the maximum length restriction prevents one station
from monopolizing the shared medium, blocking other stations that have data to send.

NOTE:
Minimum frame length: 64 bytes
Minimum data length: 46 bytes
Maximum frame length: 1518 bytes
Maximum data length: 1500 bytes


Addressing
Each station on an Ethernet network (such as a PC, workstation, or printer) has its
own network interface card (NIC). The NIC fits inside the station and provides the station
with a link-layer address. The Ethernet address is 6 bytes (48 bits), normally written in
hexadecimal notation, with a colon between the bytes. For example, the following
shows an Ethernet MAC address:
4A:30:10:21:10:1A

Transmission of Address Bits


The way the addresses are sent out on the line is different from the way they are written in hexadecimal notation. The transmission is left to right, byte by byte; however, for each byte, the least significant bit is sent first and the most significant bit is sent last. This means that the
bit that defines an address as unicast or multicast arrives first at the receiver. This helps the
receiver to immediately know if the packet is unicast or multicast.

Example
Show how the address 47:20:1B:2E:08:EE is sent out on the line.
Solution: The address is sent left to right, byte by byte; for each byte, it is sent right to left, bit
by bit, as shown below
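The figure with the resulting bit pattern is not reproduced in these notes; the following one-liner (illustrative) reverses the bit order of each byte and prints the pattern as it would appear on the line:

    addr = "47:20:1B:2E:08:EE"
    wire = " ".join(format(int(byte, 16), "08b")[::-1] for byte in addr.split(":"))
    print(wire)
    # 11100010 00000100 11011000 01110100 00010000 01110111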

Unicast, Multicast, and Broadcast Addresses

A source address is always a unicast address; the frame comes from only one station. The destination address, however, can be unicast, multicast, or broadcast. Figure 4 shows how to distinguish a unicast address from a multicast address. If the least significant bit of the first
byte in a destination address is 0, the address is unicast; otherwise, it is multicast. With the
way the bits are transmitted, the unicast/multicast bit is the first bit which is transmitted or
received. The broadcast address is a special case of the multicast address: the recipients are all
the stations on the LAN. A broadcast destination address is forty-eight 1s.

Example
Define the type of the following destination addresses:
a. 4A:30:10:21:10:1A
b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF
Solution: To find the type of the address, we need to look at the second hexadecimal digit
from the left. If it is even, the address is unicast. If it is odd, the address is multicast. If all
digits are Fs, the address is broadcast. Therefore, we have the following:
a.This is a unicast address because A in binary is 1010 (even).
b.This is a multicast address because 7 in binary is 0111 (odd).
c.This is a broadcast address because all digits are Fs in hexadecimal.
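The rule used in this solution is easy to automate. The sketch below (illustrative; the function name is not from the textbook) checks the least significant bit of the first byte, which is exactly the bit transmitted first:

    def address_type(mac: str) -> str:
        if mac.upper() == "FF:FF:FF:FF:FF:FF":
            return "broadcast"
        first_byte = int(mac.split(":")[0], 16)
        return "multicast" if first_byte & 1 else "unicast"   # LSB of the first byte

    for a in ("4A:30:10:21:10:1A", "47:20:1B:2E:08:EE", "FF:FF:FF:FF:FF:FF"):
        print(a, "->", address_type(a))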

Access Method
Since the network that uses the standard Ethernet protocol is a broadcast network, the standard Ethernet chose CSMA/CD with the 1-persistent method. Let us use a scenario to see how this method works for the Ethernet protocol.

Figure 5: Implementation of standard Ethernet


 Assume station A in Figure.5 has a frame to send to station D. Station A first


should check whether any other station is sending (carrier sense). Station A measures the
level of energy on the medium (for a short period of time, normally less than 100µs). If
there is no signal energy on the medium, it means that no station is sending (or the signal
has not reached station A). Station A interprets this situation as idle medium. It starts
sending its frame. On the other hand, if the signal energy level is not zero, it means that
the medium is being used by another station. Station A continuously monitors the
medium until it becomes idle for 100µs. It then starts sending the frame. However,
station A needs to keep a copy of the frame in its buffer until it is sure that there is no
collision.
 The medium sensing does not stop after station A has started sending the frame. Station A
needs to send and receive continuously. Two cases may occur:
(a) Station A has sent 512 bits and no collision is sensed (the energy level did not
go
above the regular energy level), the station then is sure that the frame will go through and
stops sensing the medium. Where does the number 512 bits come from? If we
consider the transmission rate of the Ethernet as 10 Mbps, this means that it takes the
station 512/(10 Mbps) = 51.2 μs to send out 512 bits. With the speed of propagation in a
cable (2 × 10^8 m/s), the first bit could have gone 10,240 meters (one way) or only
5120 meters (round trip), have collided with a bit from the last station on the
cable, and have gone back. In other words, if a collision were to occur, it should occur by
the time the sender has sent out 512 bits (worst case) and the first bit has made a round
trip of 5120 meters. If the collision happens in the middle of the cable, not at the end, station A hears the collision earlier and aborts the transmission. The above assumption is
that the length of the cable is 5120 meters. The designer of the standard Ethernet actually
put a restriction of 2500 meters because we need to consider the delays
encountered throughout the journey. It means that they considered the worst case. The
whole idea is that if station A does not sense the collision before sending 512 bits, there
must have been no collision, because during this time, the first bit has reached the end of
the line and all other stations know that a station is sending and refrain from sending. In
other words, the problem occurs when another station (for example, the last station)
starts sending before the first bit of station A has reached it. The other station mistakenly
thinks that the line is free because the first bit has not yet reached it. The restriction of 512
bits actually helps the sending station: the sending station is certain that no collision will occur if a collision has not been heard during the first 512 bits, so it can discard the copy of the frame in its buffer.
(b)Station A has sensed a collision before sending 512 bits. This means that one of the
previous bits has collided with a bit sent by another station. In this case both stations should
refrain from sending and keep the frame in their buffer for resending when the line becomes
available. However, to inform other stations that there is a collision in the network, the station
sends a 48-bit jam signal. The jam signal is to create enough signal (even if the collision
happens after a few bits) to alert other stations about the collision. After sending the jam
signal, the stations need to increment the value of K (the number of attempts). If, after the increment, K = 15, experience has shown that the network is too busy, and the station needs to abort its effort and try again later. If K < 15, the station can wait a backoff time (TB) and restart the process. The station creates a random number between 0 and 2^K − 1, which means that each time a collision occurs, the range of the random number increases exponentially. After the first collision (K = 1) the random number is in the range (0, 1). After the second collision (K = 2) it is in the range (0, 1, 2, 3). After the third collision (K = 3) it is in the range (0, 1, 2, 3, 4, 5,
6, 7). So after each collision, the probability increases that the backoff time becomes
longer. This is due to the fact that if the collision happens even after the third or fourth
attempt, it means that the network is really busy; a longer backoff time is needed.
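The backoff rule described above can be sketched as follows (illustrative only; the attempt limit of 15 follows the text above, while capping the exponent at 10 is an extra detail from the IEEE standard, not from these notes):

    import random

    SLOT_TIME_US = 51.2          # time to transmit 512 bits at 10 Mbps
    MAX_ATTEMPTS = 15            # abort threshold used in the text above

    def backoff_time_us(k):
        # After the k-th collision, wait a random number of slot times in [0, 2^k - 1].
        if k >= MAX_ATTEMPTS:
            raise RuntimeError("network too busy: abort and try again later")
        return random.randint(0, 2 ** min(k, 10) - 1) * SLOT_TIME_US

    for k in (1, 2, 3):
        print(f"after collision {k}: wait {backoff_time_us(k):.1f} microseconds")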

Efficiency of Standard Ethernet


The efficiency of the Ethernet is defined as the ratio of the time used by a station to send
data to the time the medium is occupied by this station. The practical efficiency of
standard Ethernet has been measured to be,
Efficiency = 1 / (1 + 6.4 × a)

where a = Propagation delay (Tp) / Transmission delay (Tf), which is the number of frames that can fit on the medium.

The transmission delay is the time it takes a frame of average size to be sent out and
the propagation delay is the time it takes to reach the end of the medium. As the

value of parameter a decreases, the efficiency increases. This means that if the length of the medium is shorter or the frame size is longer, the efficiency increases. In the ideal case, a = 0 and the efficiency is 1.

Example 13.3
In the Standard Ethernet with the transmission rate of 10 Mbps, we assume that the length of the medium is 2500 m and the size of the frame is 512 bits. The propagation speed of a signal in a cable is normally 2 × 10^8 m/s.
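A worked calculation (added here; it reproduces the values quoted in the next paragraph):

    Tp = 2500 / 2e8              # propagation delay: 12.5 microseconds
    Tf = 512 / 10e6              # transmission time of 512 bits at 10 Mbps: 51.2 microseconds
    a = Tp / Tf                  # about 0.24 frames fit on the medium
    efficiency = 1 / (1 + 6.4 * a)
    print(round(a, 2), round(efficiency, 2))   # 0.24 0.39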

The example shows that a = 0.24, which means only 0.24 of a frame occupies the whole medium in this case. The efficiency is 39 percent, which is considered moderate; it means that 61 percent of the time the medium is occupied but not actually used by a station.

Implementation
The Standard Ethernet defined several implementations, but only four of them became
popular during the 1980s. Table below shows a summary of Standard Ethernet
implementations.

In the nomenclature 10BaseX, the number defines the data rate (10 Mbps), the term Base means baseband (digital) signal, and X approximately defines either the maximum size of the cable in hundreds of meters (for example, 5 for 500 meters or 2 for 185 meters) or the type of cable: T for unshielded twisted-pair cable (UTP) and F for fiber-optic cable. The standard Ethernet
uses a baseband signal, which means that the bits are changed to a digital signal and directly
sent on the line.

Encoding and Decoding


All standard implementations use digital signalling (baseband) at 10 Mbps. At the sender, data are converted to a digital signal using the Manchester scheme; at the receiver, the received signal is interpreted as Manchester and decoded into data. Manchester encoding is self-synchronous, providing a transition at each bit interval. Figure 6 shows the encoding scheme for Standard Ethernet.

Figure 6: Encoding in a Standard Ethernet implementation


10Base5: Thick Ethernet
The first implementation is called 10Base5, thick Ethernet, or Thicknet. The nickname
derives from the size of the cable, which is roughly the size of a garden hose and too stiff to
bend with your hands. 10Base5 was the first Ethernet specification to use a bus topology with
an external transceiver (transmitter/receiver) connected via a tap to a thick coaxial
cable.
Figure 7 shows a schematic diagram of a 10Base5 implementation.

Figure 7 : 10Base5 implementation


The transceiver is responsible for transmitting, receiving, and detecting collisions. The
transceiver is connected to the station via a transceiver cable that provides separate paths for
sending and receiving. This means that collision can only happen in the coaxial cable.
The maximum length of the coaxial cable must not exceed 500 m, otherwise, there is
excessive degradation of the signal. If a length of more than 500 m is needed, up to
five segments, each a maximum of 500 meters, can be connected using repeaters.

10Base2: Thin Ethernet


The second implementation is called 10Base2, thin Ethernet, or Cheapernet. 10Base2
also uses a bus topology, but the cable is much thinner and more flexible. The cable can be bent to pass very close to the stations. In this case, the transceiver is normally part of the network


interface card (NIC), which is installed inside the station. Figure 8 shows the
schematic diagram of a 10Base2 implementation.

Figure 8: 10Base2 implementation


The collision here occurs in the thin coaxial cable. This
implementation is more cost effective than 10Base5 because
thin coaxial cable is less expensive than thick coaxial and
the tee connections are much cheaper than taps. Installation is
simpler because the thin coaxial cable is very flexible. However,
the length of each segment cannot exceed 185 m (close to 200
m) due to the high level of attenuation in thin coaxial cable.

10Base-T: Twisted-Pair Ethernet


The third implementation is called 10Base-T or twisted-
pair Ethernet. 10Base-T uses a
physical star topology. The stations are connected to a hub via
two pairs of twisted cable, as shown in Figure 9.

Figure 9: 10Base-T implementation


Two pairs of twisted cable create two paths (one for sending and one for receiving) between
the station and the hub. Any collision here happens in the hub. Compared to 10Base5
or 10Base2, we can see that the hub actually replaces the coaxial cable as far as a collision is
concerned. The maximum length of the twisted cable here is defined as 100 m, to minimize
the effect of attenuation in the twisted cable.

10Base-F: Fiber Ethernet


Although there are several types of optical fiber 10-Mbps Ethernet, the most common
is called 10Base-F. 10Base-F uses a star topology to connect stations to a hub. The stations
are
connected to the hub using two fiber-optic cables, as shown in Figure 10.

Figure 10: 10Base-F implementation

FAST ETHERNET (100 MBPS)


In the 1990s, some LAN technologies with transmission rates higher than 10 Mbps, such as FDDI and Fibre Channel, appeared on the market. If the
Standard Ethernet wanted to survive, it had to compete with these technologies. Ethernet
made a big jump by increasing the transmission rate to 100 Mbps, and the new
generation was called the Fast Ethernet. The designers of the Fast Ethernet needed to
make it compatible with the Standard Ethernet. The MAC sublayer was left unchanged,
which meant the frame format and the maximum and minimum size could also remain
unchanged. By increasing the transmission rate, features of the Standard Ethernet that
depend on the transmission rate, access method, and implementation had to be
reconsidered.
The goals of Fast Ethernet can be summarized as follows:
1.Upgrade the data rate to 100 Mbps.
2.Make it compatible with Standard Ethernet.
3.Keep the same 48-bit address.
4.Keep the same frame format.


Access Method
The proper operation of the CSMA/CD depends on the transmission rate, the minimum size
of the frame, and the maximum network length. If we want to keep the minimum size of the
frame, the maximum length of the network should be changed. In other words, if the
minimum frame size is still 512 bits, and it is transmitted 10 times faster, the collision needs
to be detected 10 times sooner, which means the maximum length of the network should be
10 times shorter (the propagation speed does not change). So the Fast Ethernet came with two
solutions (it can work with either choice):
1.The first solution was to totally drop the bus topology and use a passive hub and star
topology but make the maximum size of the network 250 meters instead of 2500
meters as in the Standard Ethernet. This approach is kept for compatibility with the Standard
Ethernet.
2.The second solution is to use a link-layer switch with a buffer to store frames and a
full-duplex connection to each host to make the transmission medium private for each host. In
this case, there is no need for CSMA/CD because the hosts are not competing with each other.
The link-layer switch receives a frame from a source host and stores it in the buffer (queue)
waiting for processing. It then checks the destination address and sends the frame out of the
corresponding interface. Since the connection to the switch is full-duplex, the destination
host can even send a frame to another station at the same time that it is receiving a
frame. In other words, the shared medium is changed to many point-to- point media, and
there is no need for contention.

Autonegotiation
A new feature added to Fast Ethernet is called autonegotiation. It allows a station or a hub a range of capabilities. Autonegotiation allows two devices to negotiate the mode or data rate of operation. It was designed particularly for these purposes:
To allow incompatible devices to connect to one another. For example, a device with a
maximum capacity of 10 Mbps can communicate with a device with a 100 Mbps capacity (but
which can work at a lower rate).
To allow one device to have multiple capabilities.
To allow a station to check a hub’s capabilities.


Physical Layer Topology


Fast Ethernet is designed to connect two or more stations. If there are only two stations, they
can be connected point-to-point. Three or more stations need to be connected in a star
topology with a hub or a switch at the center.
Encoding
Manchester encoding needs a 200-Mbaud bandwidth for a data rate of 100 Mbps,
which makes it unsuitable for a medium such as twisted-pair cable. For this reason,
the Fast
Ethernet designers sought some alternative encoding/decoding scheme. However, it
was found that one scheme would not perform equally well for all three
implementations.
Therefore, three different encoding schemes were chosen.

Figure 11: Encoding for fast Ethernet implementations


1. 100Base-TX uses two pairs of twisted-pair cable (either category 5
UTP or STP). For this implementation, the MLT-3 scheme was
selected since it has good bandwidth performance. However, since
MLT-3 is not a self-synchronous line coding scheme, 4B/5B block
coding is used to provide bit synchronization by preventing the occurrence of a long sequence of 0s and 1s. This creates a data rate of 125 Mbps,
which is fed into MLT-3 for encoding.
2.100Base-FX uses two pairs of fiber-optic cables. Optical fiber can easily handle high
bandwidth requirements by using simple encoding schemes. The designers of
100Base-FX selected the NRZ-I encoding scheme for this implementation. However, NRZ-I
has a bit synchronization problem for long sequences of 0s (or 1s, based on the encoding). To
overcome this problem, the designers used 4B/5B block encoding, as we described for
100Base-TX. The block encoding increases the bit rate from 100 to 125 Mbps, which can
easily be handled by fiber-optic cable.
A 100Base-TX network can provide a data rate of 100 Mbps, but it requires the use of
category 5 UTP or STP cable. This is not cost-efficient for buildings that have already been
wired for voice-grade twisted-pair (category 3).
3.100Base-T4, was designed to use category 3 or higher UTP. The implementation uses four
pairs of UTP for transmitting 100 Mbps. Encoding/decoding in 100Base-T4 is more
complicated. As this implementation uses category 3 UTP, each twisted-pair cannot
easily handle more than 25 Mbaud. In this design, one pair switches between sending and
receiving. Three pairs of UTP category 3, however, can handle only 75 Mbaud (25 Mbaud)
each. We need to use an encoding scheme that converts 100 Mbps to a 75 Mbaud signal.
8B/6T satisfies this requirement. In 8B/6T, eight data elements are encoded as six signal
elements. This means that 100 Mbps uses only (6/8) × 100 Mbps, or 75 Mbaud.

GIGABIT ETHERNET
The need for an even higher data rate resulted in the design of the Gigabit Ethernet Protocol
(1000 Mbps). The IEEE committee calls it the Standard 802.3z. The goals of the
Gigabit Ethernet were to upgrade the data rate to 1 Gbps, but keep the address length,
the frame format, and the maximum and minimum frame length the same. The goals
of the Gigabit Ethernet design can be summarized as follows:


1.Upgrade the data rate to 1 Gbps.


2.Make it compatible with Standard or Fast Ethernet.
3.Use the same 48-bit address.
4.Use the same frame format.
5.Keep the same minimum and maximum frame lengths.
6.Support autonegotiation as defined in Fast Ethernet.

MAC Sublayer
A main consideration in the evolution of Ethernet was to keep the MAC sublayer untouched.
However, to achieve a data rate of 1 Gbps, this was no longer possible. Gigabit Ethernet has
two distinctive approaches for medium access: half-duplex and full-duplex. Almost all
implementations of Gigabit Ethernet follow the full-duplex approach, so we mostly
ignore the half-duplex mode.
Full-Duplex Mode
In full-duplex mode, there is a central switch connected to all computers or other switches. In
this mode, for each input port, each switch has buffers in which data are stored until they are
transmitted. Since the switch uses the destination address of the frame and sends a frame out
of the port connected to that particular destination, there is no collision. This means
that CSMA/CD is not used. Lack of collision implies that the maximum length of the
cable is determined by the signal attenuation in the cable, not by the collision detection
process.

NOTE: In the full-duplex mode of Gigabit Ethernet, there is no collision; the


maximum length of the cable is determined by the signal attenuation in the cable.

Half-Duplex Mode
The half-duplex approach uses CSMA/CD. The maximum length of the network in this approach is totally dependent on the minimum frame size.
Three methods have been defined:
 Traditional
 Carrier extension, and
 Frame bursting.
