
CHAPTER 9

Introduction to
Data-Link Layer

The TCP/IP protocol suite does not define any protocol in the data-link layer or the physical layer. These two layers are the territory of the networks that, when connected, make up the Internet. These networks, wired or wireless, provide services to the upper three layers of the TCP/IP suite; as a result, several standard link-layer protocols exist in the market today. For this reason, we discuss the data-link layer in several chapters. This chapter is an introduction that gives the general idea and discusses common issues in the data-link layer that relate to all networks.
❑ The first section introduces the data-link layer. It starts with defining the concept of links and nodes. The section then lists and briefly describes the services provided by the data-link layer. It next defines two categories of links: point-to-point and broadcast links. The section finally defines two sublayers at the data-link layer that will be elaborated on in the next few chapters.
❑ The second section discusses link-layer addressing. It first explains the rationale behind the existence of an addressing mechanism at the data-link layer. It then describes three types of link-layer addresses to be found in some link-layer protocols. The section discusses the Address Resolution Protocol (ARP), which maps the addresses at the network layer to addresses at the data-link layer. This protocol helps a packet at the network layer find the link-layer address of the next node for delivery of the frame that encapsulates the packet. To show how the network layer helps us to find the data-link-layer addresses, a long example is included in this section that shows what happens at each node when a packet is travelling through the Internet.

237
238 PART III DATA-LINK LAYER

9.1 INTRODUCTION
The Internet is a combination of networks glued together by connecting devices (routers or switches). If a packet is to travel from one host to another, it needs to pass through these networks. Figure 9.1 shows the same scenario we discussed in Chapter 3, but we are now interested in communication at the data-link layer. Communication at the data-link layer is made up of five separate logical connections between the data-link layers in the path.

Figure 9.1 Communication at the data-link layer [figure: Alice's host at Sky Research and Bob's host at Scientific Books, each running all five layers (application, transport, network, data-link, physical), communicate through a path that passes routers R2, R4, R5, and R7; each router runs only the network, data-link, and physical layers. The networks in between include LANs, a switched WAN in a national ISP, and point-to-point WANs. The legend distinguishes point-to-point WANs, LAN switches, WAN switches, and routers.]

The data-link layer at Alice’s computer communicates with the data-link layer at router R2. The data-link layer at router R2 communicates with the data-link layer at router R4, and so on. Finally, the data-link layer at router R7 communicates with the data-link layer at Bob’s computer. Only one data-link layer is involved at the source or the destination, but two data-link layers are involved at each router. The reason is that Alice’s and Bob’s computers are each connected to a single network, but each router takes input from one network and sends output to another network. Note that although switches are also involved in the data-link-layer communication, for simplicity we have not shown them in the figure.

9.1.1 Nodes and Links


Communication at the data-link layer is node-to-node. A data unit from one point in the Internet needs to pass through many networks (LANs and WANs) to reach another point. These LANs and WANs are connected by routers. It is customary to refer to the two end hosts and the routers as nodes and the networks in between as links. Figure 9.2 is a simple representation of links and nodes when the path of the data unit is only six nodes.

Figure 9.2 Nodes and links [figure: part (a) shows a small part of the Internet: a LAN, a point-to-point network, a LAN, another point-to-point network, and a LAN in sequence; part (b) abstracts the same path as six nodes connected by five links.]

The first node is the source host; the last node is the destination host. The other
four nodes are four routers. The first, the third, and the fifth links represent the three
LANs; the second and the fourth links represent the two WANs.

9.1.2 Services
The data-link layer is located between the physical and the network layers. The data-
link layer provides services to the network layer; it receives services from the physical
layer. Let us discuss services provided by the data-link layer.
The duty scope of the data-link layer is node-to-node. When a packet is travelling
in the Internet, the data-link layer of a node (host or router) is responsible for delivering
a datagram to the next node in the path. For this purpose, the data-link layer of the
sending node needs to encapsulate the datagram received from the network in a frame,
and the data-link layer of the receiving node needs to decapsulate the datagram from
the frame. In other words, the data-link layer of the source host needs only to encapsulate, the data-link layer of the destination host needs only to decapsulate, but each intermediate node needs to both encapsulate and decapsulate. One may ask why we need encapsulation and decapsulation at each intermediate node. The reason is that each link may be using a different protocol with a different frame format. Even if one link and the next are using the same protocol, encapsulation and decapsulation are needed because the link-layer addresses are normally different. An analogy may help in
this case. Assume a person needs to travel from her home to her friend’s home in
another city. The traveller can use three transportation tools. She can take a taxi to go to
the train station in her own city, then travel on the train from her own city to the city
where her friend lives, and finally reach her friend’s home using another taxi. Here we
have a source node, a destination node, and two intermediate nodes. The traveller needs
to get into the taxi at the source node, get out of the taxi and get into the train at the first
intermediate node (train station in the city where she lives), get out of the train and get
into another taxi at the second intermediate node (train station in the city where her
friend lives), and finally get out of the taxi when she arrives at her destination. A kind
of encapsulation occurs at the source node, encapsulation and decapsulation occur at
the intermediate nodes, and decapsulation occurs at the destination node. Our traveller
is the same, but she uses three transporting tools to reach the destination.
Figure 9.3 shows the encapsulation and decapsulation at the data-link layer. For simplicity, we have assumed that we have only one router between the source and destination. The datagram received by the data-link layer of the source host is encapsulated in a frame. The frame is logically transported from the source host to the router. The frame is decapsulated at the data-link layer of the router and encapsulated in another frame. The new frame is logically transported from the router to the destination host. Note that, although we have shown only two data-link layers at the router, the router actually has three data-link layers because it is connected to three physical links.

Figure 9.3 A communication with only three nodes [figure: the source host, a router, and the destination host. On the first link, the datagram is carried in a frame of type 1; on the second link, in a frame of type 2; a data-link header is added on each link. Logical links connect peer data-link layers, while the actual links run through the physical layer; the router is also connected to another link.]

With the contents of the above figure in mind, we can list the services provided by a data-link layer as shown below.

Framing
Definitely, the first service provided by the data-link layer is framing. The data-link
layer at each node needs to encapsulate the datagram (packet received from the network
layer) in a frame before sending it to the next node. The node also needs to decapsulate
the datagram from the frame received on the logical channel. Although we have shown
only a header for a frame, we will see in future chapters that a frame may have both a
header and a trailer. Different data-link layers have different formats for framing.

A packet at the data-link layer is normally called a frame.
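To make the idea of framing concrete, the encapsulation and decapsulation duties can be sketched in a few lines of Python. The field layout below is purely illustrative; every real link-layer protocol defines its own frame format.

```python
# A minimal sketch of node-to-node framing. The header layout here is
# hypothetical; real link-layer protocols define their own formats, and
# many also append a trailer for error detection.

def encapsulate(datagram: bytes, src: bytes, dst: bytes) -> bytes:
    """Wrap a network-layer datagram in a frame: header (addresses) + payload."""
    header = dst + src            # link-layer destination comes before source
    return header + datagram

def decapsulate(frame: bytes, addr_len: int = 6) -> bytes:
    """Strip the link-layer header and recover the datagram."""
    return frame[2 * addr_len:]

datagram = b"IP datagram bytes"
frame = encapsulate(datagram,
                    src=b"\xA2\x34\x45\x11\x92\xF1",
                    dst=b"\xFF\xFF\xFF\xFF\xFF\xFF")
assert decapsulate(frame) == datagram
```

An intermediate node would call `decapsulate` on the frame arriving from one link and `encapsulate` the recovered datagram with new addresses for the next link.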

Flow Control
Whenever we have a producer and a consumer, we need to think about flow control. If
the producer produces items that cannot be consumed, accumulation of items occurs.
The sending data-link layer at the end of a link is a producer of frames; the receiving
data-link layer at the other end of a link is a consumer. If the rate of produced frames is higher than the rate of consumed frames, frames at the receiving end need to be buffered while waiting to be consumed (processed). Definitely, we cannot have an unlimited buffer size at the receiving side. We have two choices. The first choice is to let the receiving data-link layer drop the frames if its buffer is full. The second choice is to let the receiving data-link layer send feedback to the sending data-link layer to ask it to stop or slow down. Different data-link-layer protocols use different strategies for flow control. Since flow control also occurs at the transport layer, with a higher degree of importance, we discuss this issue in Chapter 23 when we talk about the transport layer.
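The two choices above can be sketched as a hypothetical Python model; real protocols implement flow control inside the frame exchange itself, but the bookkeeping is the same.

```python
from collections import deque

class ReceivingDLC:
    """Sketch of the two flow-control choices at a receiving data-link layer:
    drop frames when the buffer is full, or signal the sender to pause."""

    def __init__(self, capacity: int, use_feedback: bool):
        self.buffer = deque()
        self.capacity = capacity
        self.use_feedback = use_feedback
        self.dropped = 0
        self.sender_paused = False

    def receive(self, frame):
        if len(self.buffer) >= self.capacity:
            if self.use_feedback:
                self.sender_paused = True   # choice 2: ask sender to stop/slow
            else:
                self.dropped += 1           # choice 1: silently discard
            return
        self.buffer.append(frame)

    def consume(self):
        frame = self.buffer.popleft()
        if self.sender_paused and len(self.buffer) < self.capacity:
            self.sender_paused = False      # buffer has room again: resume
        return frame

dlc = ReceivingDLC(capacity=2, use_feedback=False)
for f in ["f1", "f2", "f3"]:
    dlc.receive(f)
assert dlc.dropped == 1                     # third frame did not fit
```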
Error Control
At the sending node, a frame in a data-link layer needs to be changed to bits, transformed to electromagnetic signals, and transmitted through the transmission media. At the receiving node, electromagnetic signals are received, transformed to bits, and put together to create a frame. Since electromagnetic signals are susceptible to error, a frame is susceptible to error. The error needs first to be detected. After detection, it needs to be either corrected at the receiver node or discarded and retransmitted by the sending node. Since error detection and correction is an issue in every layer (node-to-node or host-to-host), we have dedicated all of Chapter 10 to this issue.
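As a minimal illustration of detect-then-discard, the sketch below appends a single even-parity bit as a one-byte trailer and rejects any frame whose parity does not check. Chapter 10 covers the much stronger codes (such as CRCs) that real protocols use; the function names here are our own.

```python
def parity_bit(data: bytes) -> int:
    """Even parity over all bits: the simplest error-detection code."""
    ones = sum(bin(b).count("1") for b in data)
    return ones % 2

def send(data: bytes) -> bytes:
    """Append the check bit as a one-byte trailer."""
    return data + bytes([parity_bit(data)])

def receive(frame: bytes):
    """Return the data, or None if an error is detected (frame discarded)."""
    data, check = frame[:-1], frame[-1]
    if parity_bit(data) != check:
        return None
    return data

frame = send(b"hello")
assert receive(frame) == b"hello"
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]    # flip one bit in transit
assert receive(corrupted) is None                   # single-bit error detected
```

A single parity bit catches any odd number of flipped bits but misses even-numbered bursts, which is exactly why stronger codes are needed in practice.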
Congestion Control
Although a link may be congested with frames, which may result in frame loss, most data-link-layer protocols do not directly use congestion control to alleviate congestion, although some wide-area networks do. In general, congestion control is considered an issue in the network layer or the transport layer because of its end-to-end nature. We will discuss congestion control in the network layer and the transport layer in later chapters.

9.1.3 Two Categories of Links


Although two nodes are physically connected by a transmission medium such as cable or air, we need to remember that the data-link layer controls how the medium is used. We can have a data-link layer that uses the whole capacity of the medium; we can also have a data-link layer that uses only part of the capacity of the link. In other words, we can have a point-to-point link or a broadcast link. In a point-to-point link, the link is dedicated to the two devices; in a broadcast link, the link is shared between several pairs of devices. For example, when two friends use traditional home phones to chat, they are using a point-to-point link; when the same two friends use their cellular phones, they are using a broadcast link (the air is shared among many cell phone users).

9.1.4 Two Sublayers


To better understand the functionality of and the services provided by the link layer, we can divide the data-link layer into two sublayers: data link control (DLC) and media access control (MAC). This is not unusual because, as we will see in later chapters, LAN protocols actually use the same strategy. The data link control sublayer deals with all issues common to both point-to-point and broadcast links; the media access control sublayer deals only with issues specific to broadcast links. In other words, we separate these two types of links at the data-link layer, as shown in Figure 9.4.

Figure 9.4 Dividing the data-link layer into two sublayers [figure: (a) the data-link layer of a broadcast link contains both a data link control sublayer and a media access control sublayer; (b) the data-link layer of a point-to-point link contains only the data link control sublayer.]

We discuss the DLC and MAC sublayers later, each in a separate chapter. In addition, we discuss the issue of error detection and correction, a duty of the data-link and other layers, also in a separate chapter.

9.2 LINK-LAYER ADDRESSING


The next issue we need to discuss about the data-link layer is link-layer addressing. In Chapter 18, we will discuss IP addresses as the identifiers at the network layer that define the exact points in the Internet where the source and destination hosts are connected. However, in a connectionless internetwork such as the Internet we cannot make a datagram reach its destination using only IP addresses. The reason is that each datagram in the Internet, from the same source host to the same destination host, may take a different path. The source and destination IP addresses define the two ends but cannot define which links the datagram should pass through.
We need to remember that the IP addresses in a datagram should not be changed. If the destination IP address in a datagram changes, the packet never reaches its destination; if the source IP address in a datagram changes, the destination host or a router can never communicate with the source if a response needs to be sent back or an error needs to be reported back to the source (see ICMP in Chapter 19).
The above discussion shows that we need another addressing mechanism in a connectionless internetwork: the link-layer addresses of the two nodes. A link-layer address is sometimes called a link address, sometimes a physical address, and sometimes a MAC address. We use these terms interchangeably in this book.
Since a link is controlled at the data-link layer, the addresses need to belong to the
data-link layer. When a datagram passes from the network layer to the data-link layer,
the datagram will be encapsulated in a frame and two data-link addresses are added to
the frame header. These two addresses are changed every time the frame moves from
one link to another. Figure 9.5 demonstrates the concept in a small internet.

Figure 9.5 IP addresses and link-layer addresses in a small internet [figure: Alice (addresses N1/L1) is connected by link 1 to router R1, R1 by link 2 to router R2, and R2 by link 3 to Bob (N8/L8). R1 has addresses N2/L2 on link 1, N3/L3 on another link, and N4/L4 on link 2; R2 has N5/L5 on link 2, N6/L6 on another network, and N7/L7 on link 3. Each frame carries the link-layer destination address before the source address (for example, L2 L1 on link 1), followed by the datagram, whose IP source and destination addresses are N1 and N8 in every frame.]
In the internet in Figure 9.5, we have three links and two routers. We have also shown only two hosts: Alice (source) and Bob (destination). For each host, we have shown two addresses, the IP address (N) and the link-layer address (L). Note that a router has as many pairs of addresses as the number of links the router is connected to. We have shown three frames, one in each link. Each frame carries the same datagram with the same source and destination addresses (N1 and N8), but the link-layer addresses of the frame change from link to link. In link 1, the link-layer addresses are L1 and L2. In link 2, they are L4 and L5. In link 3, they are L7 and L8. Note that the IP addresses and the link-layer addresses are not in the same order. For IP addresses, the source address comes before the destination address; for link-layer addresses, the destination address comes before the source. The datagrams and frames are designed in this way, and we follow the design. We may raise several questions:
❑ If the IP address of a router does not appear in any datagram sent from a source to a
destination, why do we need to assign IP addresses to routers? The answer is that in
some protocols a router may act as a sender or receiver of a datagram. For example,
in routing protocols we will discuss in Chapters 20 and 21, a router is a sender or a
receiver of a message. The communications in these protocols are between routers.
❑ Why do we need more than one IP address in a router, one for each interface? The answer is that an interface is a connection of a router to a link. We will see that an IP address defines a point in the Internet at which a device is connected. A router with n interfaces is connected to the Internet at n points. This is like a house at the corner of two streets with a gate on each street; each gate has an address on the corresponding street.
❑ How are the source and destination IP addresses in a packet determined? The
answer is that the host should know its own IP address, which becomes the source
IP address in the packet. As we will discuss in Chapter 26, the application layer
uses the services of DNS to find the destination address of the packet and passes it
to the network layer to be inserted in the packet.
❑ How are the source and destination link-layer addresses determined for each link?
Again, each hop (router or host) should know its own link-layer address, as we discuss later in the chapter. The destination link-layer address is determined by using the Address Resolution Protocol, which we discuss shortly.
❑ What is the size of link-layer addresses? The answer is that it depends on the protocol
used by the link. Although we have only one IP protocol for the whole Internet, we
may be using different data-link protocols in different links. This means that we can
define the size of the address when we discuss different link-layer protocols.
9.2.1 Three Types of Addresses
Some link-layer protocols define three types of addresses: unicast, multicast, and
broadcast.
Unicast Address
Each host or each interface of a router is assigned a unicast address. Unicasting means one-to-one communication. A frame with a unicast destination address is destined for only one entity in the link.
Example 9.1
As we will see in Chapter 13, the unicast link-layer addresses in the most common LAN, Ethernet, are 48 bits (six bytes) that are presented as 12 hexadecimal digits separated by colons. The least significant bit of the first byte is 0, so the second hexadecimal digit is even. For example, the following is a unicast link-layer address of a computer:
A2:34:45:11:92:F1

Multicast Address
Some link-layer protocols define multicast addresses. Multicasting means one-to-many
communication. However, the jurisdiction is local (inside the link).

Example 9.2
As we will see in Chapter 13, the multicast link-layer addresses in the most common LAN, Ethernet, are 48 bits (six bytes) that are presented as 12 hexadecimal digits separated by colons. The least significant bit of the first byte, however, needs to be 1, which means the second hexadecimal digit is an odd number. The following shows a multicast address:
A3:34:45:11:92:F1

Broadcast Address
Some link-layer protocols define a broadcast address. Broadcasting means one-to-all
communication. A frame with a destination broadcast address is sent to all entities in
the link.

Example 9.3
As we will see in Chapter 13, the broadcast link-layer addresses in the most common LAN,
Ethernet, are 48 bits, all 1s, that are presented as 12 hexadecimal digits separated by colons. The
following shows a broadcast address:
FF:FF:FF:FF:FF:FF
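The three address types can be told apart programmatically. The sketch below follows the IEEE 802 rule that the least significant bit of the first byte is 1 for a multicast address (and every bit is 1 for broadcast); the helper name is our own.

```python
def address_type(mac: str) -> str:
    """Classify an Ethernet-style link-layer address (illustrative helper).
    Broadcast is all 1s; multicast has the least significant bit of the
    first byte set, which makes the second hexadecimal digit odd."""
    if mac.upper() == "FF:FF:FF:FF:FF:FF":
        return "broadcast"
    first_byte = int(mac.split(":")[0], 16)
    if first_byte & 1:                      # I/G bit set: group address
        return "multicast"
    return "unicast"

assert address_type("A2:34:45:11:92:F1") == "unicast"
assert address_type("A3:34:45:11:92:F1") == "multicast"
assert address_type("FF:FF:FF:FF:FF:FF") == "broadcast"
```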

9.2.2 Address Resolution Protocol (ARP)


Anytime a node has an IP datagram to send to another node in a link, it has the IP address of the receiving node. The source host knows the IP address of the default router. Each router except the last one in the path gets the IP address of the next router by using its forwarding table. The last router knows the IP address of the destination host. However, the IP address of the next node is not helpful in moving a frame through a link; we need the link-layer address of the next node. This is the time when the Address Resolution Protocol (ARP) becomes helpful. ARP is one of the auxiliary protocols defined in the network layer, as shown in Figure 9.6. It belongs to the network layer, but we discuss it in this chapter because it maps an IP address to a link-layer address. ARP accepts an IP address from the IP protocol, maps the address to the corresponding link-layer address, and passes it to the data-link layer.

Figure 9.6 Position of ARP in the TCP/IP protocol suite [figure: ARP sits at the bottom of the network layer beside IP, ICMP, and IGMP; it accepts an IP address from IP and delivers the corresponding link-layer address to the data-link layer.]

Anytime a host or a router needs to find the link-layer address of another host or router
in its network, it sends an ARP request packet. The packet includes the link-layer and IP
addresses of the sender and the IP address of the receiver. Because the sender does not
know the link-layer address of the receiver, the query is broadcast over the link using the
link-layer broadcast address, which we discuss for each protocol later (see Figure 9.7).

Figure 9.7 ARP operation [figure: four systems on a LAN with addresses (N1, L1), (N2, L2), (N3, L3), and (N4, L4). In part (a), system A broadcasts an ARP request: "Looking for the link-layer address of a node with IP address N2." Every system receives it. In part (b), only system B answers, with a unicast ARP reply: "I am the node and my link-layer address is L2."]

Every host or router on the network receives and processes the ARP request
packet, but only the intended recipient recognizes its IP address and sends back an ARP
response packet. The response packet contains the recipient’s IP and link-layer
addresses. The packet is unicast directly to the node that sent the request packet.
In Figure 9.7a, the system on the left (A) has a packet that needs to be delivered
to another system (B) with IP address N2. System A needs to pass the packet to its
data-link layer for the actual delivery, but it does not know the physical address of
the recipient. It uses the services of ARP by asking the ARP protocol to send a
broadcast ARP request packet to ask for the physical address of a system with an IP
address of N2.
This packet is received by every system on the physical network, but only system B
will answer it, as shown in Figure 9.7b. System B sends an ARP reply packet that includes its physical address. Now system A can send all the packets it has for this destination using the physical address it received.
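The whole exchange can be modeled in a few lines. This is a simplified sketch with names of our own choosing; it ignores frame formats and timing and simply shows that the broadcast request reaches every node while only the target answers, and that the answer is cached.

```python
class Node:
    """A host or router interface on one link: an IP address, a link-layer
    address, and an ARP cache mapping IP addresses to link-layer addresses."""
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.arp_cache = {}

def arp_resolve(sender, target_ip, nodes_on_link):
    """Return the link-layer address for target_ip, querying the link if needed."""
    if target_ip in sender.arp_cache:        # cached from an earlier reply
        return sender.arp_cache[target_ip]
    for node in nodes_on_link:               # broadcast: every node sees the request
        if node.ip == target_ip:             # only the target recognizes its IP
            sender.arp_cache[target_ip] = node.mac
            return node.mac                  # the reply is unicast back
    return None

a = Node("N1", "L1")
b = Node("N2", "L2")
others = [Node("N3", "L3"), Node("N4", "L4")]
assert arp_resolve(a, "N2", [b] + others) == "L2"
assert a.arp_cache["N2"] == "L2"             # later frames skip the broadcast
```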
Caching
A question that is often asked is this: If system A can broadcast a frame to find the link-layer address of system B, why can’t system A send the datagram for system B using a broadcast frame? In other words, instead of sending one broadcast frame (ARP request), one unicast frame (ARP response), and another unicast frame (for sending the datagram), system A can encapsulate the datagram in a broadcast frame and send it to the network. System B receives it and keeps it; the other systems discard it.
To answer the question, we need to think about efficiency. It is probable that system A has more than one datagram to send to system B in a short period of time. For example, if system B is supposed to receive a long e-mail or a long file, the data do not fit in one datagram.
Let us assume that there are 20 systems connected to the network (link): system A,
system B, and 18 other systems. We also assume that system A has 10 datagrams to
send to system B in one second.
a. Without using ARP, system A needs to send 10 broadcast frames. Each of the
18 other systems needs to receive the frames, decapsulate them, remove the
datagrams, and pass them to its network layer only to find out that the
datagrams do not belong to it. This means processing and discarding 180
broadcast frames.
b. Using ARP, system A needs to send only one broadcast frame. Each of the 18
other systems needs to receive the frame, decapsulate it, remove the ARP
message, and pass the message to its ARP protocol to find that the frame
must be discarded. This means processing and discarding only 18 (instead of
180) broadcast frames. After system B responds with its own data-link address,
system A can store the link-layer address in its cache memory. The rest of the
frames are unicast. Since processing broadcast frames is expensive (time
consuming), the second method is preferable.
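The counts in the two cases can be verified directly:

```python
other_systems = 18     # systems on the link that are neither A nor B
datagrams = 10         # datagrams A sends to B in one second

# a. Without ARP: every datagram is broadcast, and each of the 18 other
#    systems must process and discard every one of them.
without_arp = datagrams * other_systems        # wasted receptions

# b. With ARP: only the single ARP request is broadcast; the reply and the
#    10 data frames are unicast because the link-layer address is cached.
with_arp = 1 * other_systems                   # wasted receptions

assert without_arp == 180
assert with_arp == 18
```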
Packet Format
Figure 9.8 shows the format of an ARP packet. The names of the fields are self-explanatory. The hardware type field defines the type of the link-layer protocol; Ethernet is given type 1. The protocol type field defines the network-layer protocol: the IPv4 protocol is 0x0800. The source hardware and source protocol address fields are variable-length fields defining the link-layer and network-layer addresses of the sender. The destination hardware address and destination protocol address fields define the receiver’s link-layer and network-layer addresses. An ARP packet is encapsulated directly into a data-link frame. The frame needs to have a field to show that the payload belongs to ARP and not to a network-layer datagram.
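Using the field sizes for IPv4 over Ethernet (hardware type 1, protocol type 0x0800, 6-byte hardware and 4-byte protocol addresses), an ARP request can be sketched with Python's struct module. The helper name is our own; real implementations build this packet inside the operating system.

```python
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    """Pack an ARP request for IPv4 over Ethernet (illustrative sketch).
    Fields: hardware type, protocol type, hardware length, protocol length,
    operation (1 = request), then sender and target address pairs."""
    return struct.pack("!HHBBH6s4s6s4s",
                       1,            # hardware type: Ethernet
                       0x0800,       # protocol type: IPv4
                       6, 4,         # hardware / protocol address lengths
                       1,            # operation: request
                       src_mac, src_ip,
                       b"\x00" * 6,  # target hardware address: empty in request
                       dst_ip)

pkt = build_arp_request(bytes.fromhex("A234451192F1"),
                        bytes([10, 0, 0, 1]),
                        bytes([10, 0, 0, 2]))
assert len(pkt) == 28                # fixed 28 bytes for Ethernet/IPv4
htype, ptype, hlen, plen, oper = struct.unpack("!HHBBH", pkt[:8])
assert (htype, ptype, oper) == (1, 0x0800, 1)
```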

Example 9.4
A host with IP address N1 and MAC address L1 has a packet to send to another host with IP address N2 and physical address L2 (which is unknown to the first host). The two hosts are on the same network. Figure 9.9 shows the ARP request and response messages.

Figure 9.8 ARP packet [figure: a 32-bit-wide layout. First row: Hardware Type (16 bits) and Protocol Type (16 bits). Second row: Hardware length (8 bits), Protocol length (8 bits), and Operation (16 bits; Request = 1, Reply = 2). Then four variable-length fields: Source hardware address, Source protocol address, Destination hardware address (empty in a request), and Destination protocol address. Notes: "Hardware" means the LAN or WAN protocol; "Protocol" means the network-layer protocol.]

Figure 9.9 Example 9.4 [figure: system A (N1, L1) and system B (N2, L2, not known by A). A builds an ARP request with hardware type 0x0001, protocol type 0x0800, lengths 0x06 and 0x04, operation 0x0001, sender addresses L1 and N1, target hardware address all 0s, and target protocol address N2; the request travels in a broadcast frame from A. B answers with an ARP reply (operation 0x0002) carrying L2 and N2 as sender addresses and L1 and N1 as target addresses; the reply travels in a unicast frame from B to A.]

9.2.3 An Example of Communication


To show how communication is done at the data-link layer and how link-layer addresses
are found, let us go through a simple example. Assume Alice needs to send a datagram
to Bob, who is three nodes away in the Internet. How Alice finds the network-layer
address of Bob is what we discover in Chapter 26 when we discuss DNS. For the
moment, assume that Alice knows the network-layer (IP) address of Bob. In other
words, Alice’s host is given the data to be sent, the IP address of Bob, and the
CHAPTER 11

Data Link
Control (DLC)

As we discussed in Chapter 9, the data-link layer is divided into two sublayers. In this chapter, we discuss the upper sublayer of the data-link layer, data link control (DLC). The lower sublayer, media access control (MAC), will be discussed in Chapter 12. We have already discussed error detection and correction, an issue that is encountered in several layers, in Chapter 10.
This chapter is divided into four sections.
❑ The first section discusses the general services provided by the DLC sublayer. It first describes framing and two types of frames used in this sublayer. The section then discusses flow and error control. Finally, the section explains that a DLC protocol can be either connectionless or connection-oriented.
❑ The second section discusses some simple and common data-link protocols that are implemented at the DLC sublayer. The section first describes the Simple Protocol. It then explains the Stop-and-Wait Protocol.
❑ The third section introduces HDLC, a protocol that is the basis of many common data-link protocols in use today, such as PPP. The section first talks about configurations and transfer modes. It then describes framing and three different frame formats used in this protocol.
❑ The fourth section discusses PPP, a very common protocol for point-to-point access. It first introduces the services provided by the protocol. The section also describes the format of the frame in this protocol. It then describes the transition phases in the protocol using an FSM. The section finally explains multiplexing in PPP.


11.1 DLC SERVICES


The data link control (DLC) sublayer deals with procedures for communication between two adjacent nodes (node-to-node communication) no matter whether the link is dedicated or broadcast. Data link control functions include framing and flow and error control. In this section, we first discuss framing, or how to organize the bits that are carried by the physical layer. We then discuss flow and error control.
11.1.1 Framing
Data transmission in the physical layer means moving bits in the form of a signal from
the source to the destination. The physical layer provides bit synchronization to ensure
that the sender and receiver use the same bit durations and timing. We discussed the
physical layer in Part II of the book.
The data-link layer, on the other hand, needs to pack bits into frames, so that each
frame is distinguishable from another. Our postal system practices a type of framing.
The simple act of inserting a letter into an envelope separates one piece of information
from another; the envelope serves as the delimiter. In addition, each envelope defines
the sender and receiver addresses, which is necessary since the postal system is a many-
to-many carrier facility.
Framing in the data-link layer separates a message from one source to a destination
by adding a sender address and a destination address. The destination address defines
where the packet is to go; the sender address helps the recipient acknowledge the
receipt.
Although the whole message could be packed in one frame, that is not normally
done. One reason is that a frame can be very large, making flow and error control very
inefficient. When a message is carried in one very large frame, even a single-bit error
would require the retransmission of the whole frame. When a message is divided into
smaller frames, a single-bit error affects only that small frame.
Frame Size
Frames can be of fixed or variable size. In fixed-size framing, there is no need for defining the boundaries of the frames; the size itself can be used as a delimiter. An example of this type of framing is the ATM WAN, which uses frames of fixed size called cells. We discuss ATM in Chapter 14.
Our main discussion in this chapter concerns variable-size framing, prevalent in
local-area networks. In variable-size framing, we need a way to define the end of one
frame and the beginning of the next. Historically, two approaches were used for this
purpose: a character-oriented approach and a bit-oriented approach.
Character-Oriented Framing
In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters from a coding system such as ASCII (see Appendix A). The header, which normally carries the source and destination addresses and other control information, and the trailer, which carries error-detection redundant bits, are also multiples of 8 bits. To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the end of a frame. The flag, composed of protocol-dependent special characters, signals the start or end of a frame. Figure 11.1 shows the format of a frame in a character-oriented protocol.

Figure 11.1 A frame in a character-oriented protocol [figure: Flag | Header | a variable number of characters (the data from the upper layer) | Trailer | Flag.]

Character-oriented framing was popular when only text was exchanged by the data-link layers. The flag could be selected to be any character not used for text communication. Now, however, we send other types of information such as graphs, audio, and video; any character used for the flag could also be part of the information. If this happens, the receiver, when it encounters this pattern in the middle of the data, thinks it has reached the end of the frame. To fix this problem, a byte-stuffing strategy was added to character-oriented framing. In byte stuffing (or character stuffing), a special byte is added to the data section of the frame when there is a character with the same pattern as the flag. The data section is stuffed with an extra byte. This byte is usually called the escape character (ESC) and has a predefined bit pattern. Whenever the receiver encounters the ESC character, it removes it from the data section and treats the next character as data, not as a delimiting flag. Figure 11.2 shows the situation.

Figure 11.2 Byte stuffing and unstuffing
[Diagram: in the sent frame, an extra ESC byte is stuffed before each flag or ESC byte that appears in the data; the receiver removes these extra bytes before passing the data to the upper layer.]

Byte stuffing is the process of adding one extra byte whenever there is a flag or
escape character in the text.

Byte stuffing by the escape character allows the presence of the flag in the data
section of the frame, but it creates another problem. What happens if the text contains
one or more escape characters followed by a byte with the same pattern as the flag? The
receiver removes the escape character, but keeps the next byte, which is incorrectly
interpreted as the end of the frame. To solve this problem, the escape characters that are
part of the text must also be marked by another escape character. In other words, if the
escape character is part of the text, an extra one is added to show that the second one is
part of the text.
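The two stuffing rules above (escape the flag, and escape the escape itself) can be sketched in a few lines of code. This is only an illustrative sketch, not part of any real protocol; the FLAG and ESC byte values below are arbitrary choices for the example.

```python
# Hypothetical one-byte flag and escape patterns, chosen for illustration.
FLAG = 0x7E
ESC = 0x7D

def byte_stuff(data: bytes) -> bytes:
    """Insert an ESC byte before any data byte that matches FLAG or ESC."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)   # mark the next byte as ordinary data
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove each ESC byte and keep the byte that follows it as data."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1            # skip the escape; the next byte is data
        out.append(stuffed[i])
        i += 1
    return bytes(out)
```

Running `byte_unstuff(byte_stuff(data))` returns the original data even when the data contains the flag or the escape pattern, which is exactly the property the receiver relies on.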
Character-oriented protocols present another problem in data communications.
The universal coding systems in use today, such as Unicode, have 16-bit and 32-bit
characters that conflict with 8-bit characters. We can say that, in general, the tendency
is moving toward the bit-oriented protocols that we discuss next.
Bit-Oriented Framing
In bit-oriented framing, the data section of a frame is a sequence of bits to be interpreted by
the upper layer as text, graphic, audio, video, and so on. However, in addition to headers
(and possible trailers), we still need a delimiter to separate one frame from the other. Most
protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the begin-
ning and the end of the frame, as shown in Figure 11.3.

Figure 11.3 A frame in a bit-oriented protocol
[Frame layout: Flag (01111110) | Header | variable number of bits (data from upper layer) | Trailer | Flag (01111110)]

This flag can create the same type of problem we saw in the character-oriented
protocols. That is, if the flag pattern appears in the data, we need to somehow inform
the receiver that this is not the end of the frame. We do this by stuffing 1 single bit
(instead of 1 byte) to prevent the pattern from looking like a flag. The strategy is called
bit stuffing. In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra
0 is added. This extra stuffed bit is eventually removed from the data by the receiver.
Note that the extra bit is added after one 0 followed by five 1s regardless of the value of
the next bit. This guarantees that the flag field sequence does not inadvertently appear
in the frame.

Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0
in the data, so that the receiver does not mistake the pattern 01111110 for a flag.

Figure 11.4 shows bit stuffing at the sender and bit removal at the receiver. Note that
even if we have a 0 after five 1s, we still stuff a 0. The 0 will be removed by the receiver.
This means that if the flaglike pattern 01111110 appears in the data, it will change
to 011111010 (stuffed) and is not mistaken for a flag by the receiver. The real flag
01111110 is not stuffed by the sender and is recognized by the receiver.
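As an illustrative sketch, the bit-stuffing rule can be expressed over a string of '0'/'1' characters (a convenient stand-in for a bit stream); the data pattern used below is the one shown in Figure 11.4.

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")   # stuffed bit
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the bit that follows every run of five consecutive 1s."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        ones = ones + 1 if bits[i] == "1" else 0
        if ones == 5:
            i += 1            # skip the stuffed 0
            ones = 0
        i += 1
    return "".join(out)
```

With this sketch, the flaglike data pattern 01111110 is stuffed to 011111010, matching the text above.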
Figure 11.4 Bit stuffing and unstuffing
[Diagram: the data 0001111111001111101000 from the upper layer is stuffed to 000111110110011111001000 (two extra bits) in the sent frame; the receiver removes the stuffed bits and delivers the original data to the upper layer.]

11.1.2 Flow and Error Control


We briefly defined flow and error control in Chapter 9; we elaborate on these two
issues here. One of the responsibilities of the data-link control sublayer is flow and
error control at the data-link layer.
Flow Control
Whenever an entity produces items and another entity consumes them, there should be
a balance between production and consumption rates. If the items are produced faster
than they can be consumed, the consumer can be overwhelmed and may need to discard
some items. If the items are produced more slowly than they can be consumed, the con-
sumer must wait, and the system becomes less efficient. Flow control is related to the
first issue. We need to prevent losing the data items at the consumer site.
In communication at the data-link layer, we are dealing with four entities: network
and data-link layers at the sending node and network and data-link layers at the receiv-
ing node. Although we can have a complex relationship with more than one producer
and consumer (as we will see in Chapter 23), we ignore the relationships between net-
works and data-link layers and concentrate on the relationship between two data-link
layers, as shown in Figure 11.5.

Figure 11.5 Flow control at the data-link layer
[Diagram: the data-link layer at the sending node pushes frames toward the data-link layer at the receiving node; flow control is feedback from the receiver to the sender.]
The figure shows that the data-link layer at the sending node tries to push frames
toward the data-link layer at the receiving node. If the receiving node cannot process
and deliver the packet to its network at the same rate that the frames arrive, it becomes
overwhelmed with frames. Flow control in this case can be feedback from the receiving
node to the sending node to stop or slow down pushing frames.
Buffers
Although flow control can be implemented in several ways, one common solution is to
use two buffers: one at the sending data-link layer and the other at the receiving
data-link layer. A buffer is a set of memory locations that can hold packets at the
sender and receiver. The flow control communication can occur by sending signals
from the consumer to the producer. When the buffer of the receiving data-link layer is
full, it informs the sending data-link layer to stop pushing frames.

Example 11.1
The above discussion requires that the consumers communicate with the producers on two
occasions: when the buffer is full and when there are vacancies. If the two parties use a buffer
with only one slot, the communication can be easier. Assume that each data-link layer uses a
single memory slot to hold a frame. When this single slot in the receiving data-link layer is
empty, it informs the sending data-link layer that it can send the next frame.
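The buffer-and-feedback idea can be sketched as a toy simulation. Everything here (the buffer size, one frame sent or consumed per tick) is an assumption made up for the illustration.

```python
from collections import deque

# Toy simulation of feedback flow control: the sender may push a frame only
# while the receiving buffer reports a vacancy; the receiver drains one frame
# every other tick, i.e., more slowly than frames can arrive.
def simulate(frames, buffer_size=4, consume_every=2):
    pending = deque(frames)          # frames waiting at the sending node
    buffer = deque()                 # receiving data-link-layer buffer
    delivered, max_fill, tick = [], 0, 0
    while pending or buffer:
        # feedback: send only when the receiving buffer has room
        if pending and len(buffer) < buffer_size:
            buffer.append(pending.popleft())
        # the receiving node consumes a frame on every other tick
        if tick % consume_every == 0 and buffer:
            delivered.append(buffer.popleft())
        max_fill = max(max_fill, len(buffer))
        tick += 1
    return delivered, max_fill
```

Calling `simulate(range(10))` delivers all ten frames in order while the buffer never holds more than four frames, even though the receiver is slower than the sender.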

Error Control
Since the underlying technology at the physical layer is not fully reliable, we need to
implement error control at the data-link layer to prevent the receiving node from deliver-
ing corrupted packets to its network layer. Error control at the data-link layer is normally
very simple and implemented using one of the following two methods. In both methods, a
CRC is added to the frame trailer by the sender and checked by the receiver.
❑ In the first method, if the frame is corrupted, it is silently discarded; if it is not cor-
rupted, the packet is delivered to the network layer. This method is used mostly in
wired LANs such as Ethernet.
❑ In the second method, if the frame is corrupted, it is silently discarded; if it is not
corrupted, an acknowledgment is sent (for the purpose of both flow and error con-
trol) to the sender.
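These two methods can be sketched with Python's `zlib.crc32` standing in for the link's real CRC. The 4-byte big-endian trailer layout is an assumption made for this example, not a real frame format.

```python
import zlib

def make_frame(packet: bytes) -> bytes:
    """Sender side: append the CRC of the packet as a trailer."""
    return packet + zlib.crc32(packet).to_bytes(4, "big")

def receive(frame: bytes):
    """Receiver side: return the packet if error-free, else None (discard)."""
    packet, trailer = frame[:-4], frame[-4:]
    if zlib.crc32(packet).to_bytes(4, "big") != trailer:
        return None        # corrupted: silently discard (both methods)
    return packet          # deliver; in the second method, also send an ACK
```

In the first method the caller simply delivers the returned packet; in the second, a non-None result would also trigger an acknowledgment back to the sender.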
Combination of Flow and Error Control
Flow and error control can be combined. In a simple situation, the acknowledgment that
is sent for flow control can also be used for error control to tell the sender the packet has
arrived uncorrupted. The lack of acknowledgment means that there is a problem in the
sent frame. We show this situation when we discuss some simple protocols in the next
section. A frame that carries an acknowledgment is normally called an ACK to distin-
guish it from the data frame.

11.1.3 Connectionless and Connection-Oriented


A DLC protocol can be either connectionless or connection-oriented. We discuss
this issue only briefly here, but we return to it at the network and transport
layers.
Connectionless Protocol
In a connectionless protocol, frames are sent from one node to the next without any
relationship between the frames; each frame is independent. Note that the term connec-
tionless here does not mean that there is no physical connection (transmission medium)
between the nodes; it means that there is no connection between frames. The frames are
not numbered and there is no sense of ordering. Most of the data-link protocols for
LANs are connectionless protocols.
Connection-Oriented Protocol
In a connection-oriented protocol, a logical connection should first be established
between the two nodes (setup phase). After all frames that are somehow related to each
other are transmitted (transfer phase), the logical connection is terminated (teardown
phase). In this type of communication, the frames are numbered and sent in order. If
they are not received in order, the receiver needs to wait until all frames belonging to the
same set are received and then deliver them in order to the network layer. Connection-
oriented protocols are rare in wired LANs, but we can see them in some point-to-point
protocols, some wireless LANs, and some WANs.

11.2 DATA-LINK LAYER PROTOCOLS


Traditionally four protocols have been defined for the data-link layer to deal with flow
and error control: Simple, Stop-and-Wait, Go-Back-N, and Selective-Repeat. Although
the first two protocols are still used at the data-link layer, the last two have disappeared.
We therefore briefly discuss the first two protocols in this chapter, where we need
them to understand some wired and wireless LANs. We postpone the discussion of all
four, in full detail, to Chapter 23, where we discuss the transport layer.
The behavior of a data-link-layer protocol can be better shown as a finite state
machine (FSM). An FSM is thought of as a machine with a finite number of states.
The machine is always in one of the states until an event occurs. Each event is associ-
ated with two reactions: defining the list (possibly empty) of actions to be performed
and determining the next state (which can be the same as the current state). One of the
states must be defined as the initial state, the state in which the machine starts when it
turns on. In Figure 11.6, we show an example of a machine using FSM. We have used
rounded-corner rectangles to show states, colored text to show events, and regular black
text to show actions. A horizontal line is used to separate the event from the actions,
although later we replace the horizontal line with a slash. The arrow shows the move-
ment to the next state.
The figure shows a machine with three states. There are only three possible events
and three possible actions. The machine starts in state I. If event 1 occurs, the machine
performs actions 1 and 2 and moves to state II. When the machine is in state II, two
events may occur. If event 2 occurs, the machine performs action 3 and remains in the
same state, state II. If event 3 occurs, the machine performs no action, but moves to
state I.
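The machine just described can be encoded as a transition table, a common way to implement an FSM in software. This is a sketch; the state and event names are just labels for the illustration.

```python
# Transition table for the example: (state, event) -> (actions, next state).
TRANSITIONS = {
    ("I", "event 1"): (["action 1", "action 2"], "II"),
    ("II", "event 2"): (["action 3"], "II"),
    ("II", "event 3"): ([], "I"),
}

def run(events, state="I"):
    """Feed a sequence of events to the machine, collecting the actions."""
    performed = []
    for event in events:
        actions, state = TRANSITIONS[(state, event)]
        performed.extend(actions)
    return state, performed
```

Feeding the machine event 1, event 2, and event 3 in order performs all three actions and returns it to the initial state I.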
Figure 11.6 Example of a machine using FSM
[State diagram: the colored arrow marks state I as the starting state; event 1 (actions 1 and 2) moves the machine from state I to state II; event 2 (action 3) keeps it in state II; event 3 (no action) returns it to state I.]
11.2.1 Simple Protocol


Our first protocol is a simple protocol with neither flow nor error control. We assume that
the receiver can immediately handle any frame it receives. In other words, the receiver
can never be overwhelmed with incoming frames. Figure 11.7 shows the layout for this
protocol.

Figure 11.7 Simple protocol
[Diagram: network and data-link layers at the sending and receiving nodes, with frames traveling over the logical link between the two data-link layers.]

The data-link layer at the sender gets a packet from its network layer, makes a
frame out of it, and sends the frame. The data-link layer at the receiver receives a frame
from the link, extracts the packet from the frame, and delivers the packet to its network
layer. The data-link layers of the sender and receiver provide transmission services for
their network layers.
FSMs
The sender site should not send a frame until its network layer has a message to send.
The receiver site cannot deliver a message to its network layer until a frame arrives. We
can show these requirements using two FSMs. Each FSM has only one state, the ready
state. The sending machine remains in the ready state until a request comes from the
process in the network layer. When this event occurs, the sending machine encapsulates
the message in a frame and sends it to the receiving machine. The receiving machine
remains in the ready state until a frame arrives from the sending machine. When this
event occurs, the receiving machine decapsulates the message out of the frame and
delivers it to the process at the network layer. Figure 11.8 shows the FSMs for the sim-
ple protocol. We’ll see more in Chapter 23, which uses this protocol.
Figure 11.8 FSMs for the simple protocol
[Sending node: a single ready state; when a packet comes from the network layer, make a frame and send it. Receiving node: a single ready state; when a frame arrives, deliver the packet to the network layer.]

Example 11.2
Figure 11.9 shows an example of communication using this protocol. It is very simple. The
sender sends frames one after another without even thinking about the receiver.

Figure 11.9 Flow diagram for Example 11.2
[Time-line diagram: the sending network layer passes packets to its data-link layer, which sends each one as a frame; the receiving data-link layer delivers each packet to its network layer.]

11.2.2 Stop-and-Wait Protocol


Our second protocol is called the Stop-and-Wait protocol, which uses both flow and
error control. We show a primitive version of this protocol here, but we discuss the
more sophisticated version in Chapter 23 when we have learned about sliding windows.
In this protocol, the sender sends one frame at a time and waits for an acknowledg-
ment before sending the next one. To detect corrupted frames, we need to add a CRC
(see Chapter 10) to each data frame. When a frame arrives at the receiver site, it is
checked. If its CRC is incorrect, the frame is corrupted and silently discarded. The
silence of the receiver is a signal for the sender that a frame was either corrupted or lost.
Every time the sender sends a frame, it starts a timer. If an acknowledgment arrives
before the timer expires, the timer is stopped and the sender sends the next frame (if it
has one to send). If the timer expires, the sender resends the previous frame, assuming
that the frame was either lost or corrupted. This means that the sender needs to keep
a copy of the frame until its acknowledgment arrives. When the corresponding
acknowledgment arrives, the sender discards the copy and sends the next frame if it is
ready. Figure 11.10 shows the outline for the Stop-and-Wait protocol. Note that only
one frame and one acknowledgment can be in the channels at any time.

Figure 11.10 Stop-and-Wait protocol
[Diagram: sending and receiving nodes joined by a duplex logical link; data frames carrying a CRC travel in one direction and ACK frames in the other; the sending node keeps a timer.]

FSMs
Figure 11.11 shows the FSMs for our primitive Stop-and-Wait protocol.

Figure 11.11 FSM for the Stop-and-Wait protocol
[Sending node: two states. In the ready state, a packet from the network layer causes the sender to make a frame, save a copy, send the frame, start the timer, and move to the blocking state. In the blocking state, a time-out causes the saved frame to be resent and the timer restarted; a corrupted ACK is discarded; an error-free ACK stops the timer, discards the saved frame, and returns the sender to the ready state. Receiving node: a single ready state; an error-free frame is delivered to the network layer and acknowledged; a corrupted frame is discarded.]

We describe the sender and receiver states below.


Sender States
The sender is initially in the ready state, but it can move between the ready and
blocking states.
❑ Ready State. When the sender is in this state, it is only waiting for a packet from
the network layer. If a packet comes from the network layer, the sender creates a
frame, saves a copy of the frame, starts the only timer and sends the frame. The
sender then moves to the blocking state.
❑ Blocking State. When the sender is in this state, three events can occur:
a. If a time-out occurs, the sender resends the saved copy of the frame and restarts
the timer.
b. If a corrupted ACK arrives, it is discarded.
c. If an error-free ACK arrives, the sender stops the timer and discards the saved
copy of the frame. It then moves to the ready state.
Receiver
The receiver is always in the ready state. Two events may occur:
a. If an error-free frame arrives, the message in the frame is delivered to the net-
work layer and an ACK is sent.
b. If a corrupted frame arrives, the frame is discarded.

Example 11.3
Figure 11.12 shows an example. The first frame is sent and acknowledged. The second frame is
sent, but lost. After time-out, it is resent. The third frame is sent and acknowledged, but the
acknowledgment is lost. The frame is resent. However, there is a problem with this scheme. The
network layer at the receiver site receives two copies of the third packet, which is not right. In the
next section, we will see how we can correct this problem using sequence numbers and acknowl-
edgment numbers.
Sequence and Acknowledgment Numbers
We saw a problem in Example 11.3 that needs to be addressed and corrected. Duplicate packets,
as well as corrupted packets, need to be avoided. As an example, assume we are ordering some
item online. If each packet defines the specification of an item to be ordered, duplicate packets
mean ordering an item more than once. To correct the problem in Example 11.3, we need to add
sequence numbers to the data frames and acknowledgment numbers to the ACK frames. How-
ever, numbering in this case is very simple. Sequence numbers are 0, 1, 0, 1, 0, 1, . . . ; the
acknowledgment numbers can also be 1, 0, 1, 0, 1, 0, … In other words, the sequence numbers
start with 0, the acknowledgment numbers start with 1. An acknowledgment number always
defines the sequence number of the next frame to receive.
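A minimal sketch of the receiver side of this numbering scheme shows why the duplicate in Example 11.3 is now discarded. The function and variable names here are invented for the illustration.

```python
# Toy receiver with a 1-bit expected sequence number. The returned value
# plays the role of the acknowledgment number: the sequence number of the
# next frame the receiver wants.
def make_receiver():
    state = {"expected": 0, "delivered": []}
    def receive(seq, packet):
        if seq == state["expected"]:
            state["delivered"].append(packet)        # new frame: deliver it
            state["expected"] = 1 - state["expected"]
        # otherwise it is a duplicate: discard it, but acknowledge anyway
        return state["expected"]                     # ACK number
    return receive, state
```

Replaying the scenario of Example 11.3 (frame 1's ACK is lost and the frame is resent), the resent copy is acknowledged again but not delivered to the network layer a second time.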

Example 11.4
Figure 11.13 shows how adding sequence numbers and acknowledgment numbers can prevent
duplicates. The first frame is sent and acknowledged. The second frame is sent, but lost. After
time-out, it is resent. The third frame is sent and acknowledged, but the acknowledgment is lost.
The frame is resent.

FSMs with Sequence and Acknowledgment Numbers


We can change the FSM in Figure 11.11 to include the sequence and acknowledgment
numbers, but we leave this as a problem at the end of the chapter.
Figure 11.12 Flow diagram for Example 11.3
[Time-line diagram: the first frame is sent and acknowledged; the second frame is lost and resent after a time-out; the third frame is acknowledged, but the ACK is lost, so the frame is resent and the receiving network layer gets a duplicate packet. Legend: a lost frame or a lost ACK means either lost or corrupted; the sender starts, stops, and restarts a time-out timer.]

11.2.3 Piggybacking
The two protocols we discussed in this section are designed for unidirectional commu-
nication, in which data is flowing only in one direction although the acknowledgment
may travel in the other direction. Protocols have been designed in the past to allow data
to flow in both directions. However, to make the communication more efficient, the
data in one direction is piggybacked with the acknowledgment in the other direction. In
other words, when node A is sending data to node B, node A also acknowledges the
data received from node B. Because piggybacking makes communication at the data-
link layer more complicated, it is not a common practice. We discuss two-way commu-
nication and piggybacking in more detail in Chapter 23.

11.3 HDLC
High-level Data Link Control (HDLC) is a bit-oriented protocol for communication
over point-to-point and multipoint links. It implements the Stop-and-Wait protocol we
discussed earlier. Although this protocol is now more theoretical than practical, most
of the concepts defined in it are the basis for practical protocols such as PPP, which
we discuss next, the Ethernet protocol, which we discuss in wired LANs (Chapter 13),
and the protocols of wireless LANs (Chapter 15).
CHAPTER 12

Media Access
Control (MAC)

When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link.
The problem of controlling the access to the medium is similar to the rules of speaking in
an assembly. The procedures guarantee that the right to speak is upheld and ensure that
two people do not speak at the same time, do not interrupt each other, do not monopolize
the discussion, and so on. Many protocols have been devised to handle access to a shared
link. All of these protocols belong to a sublayer in the data-link layer called media access
control (MAC). We categorize them into three groups, as shown in Figure 12.1.

Figure 12.1 Taxonomy of multiple-access protocols
[Multiple-access protocols fall into three groups:
Random-access protocols: ALOHA, CSMA, CSMA/CD, CSMA/CA
Controlled-access protocols: Reservation, Polling, Token passing
Channelization protocols: FDMA, TDMA, CDMA]

This chapter is divided into three sections:


❑ The first section discusses random-access protocols. Four protocols, ALOHA,
CSMA, CSMA/CD, and CSMA/CA, are described in this section. These protocols
are mostly used in LANs and WANs, which we discuss in future chapters.
❑ The second section discusses controlled-access protocols. Three protocols, reser-
vation, polling, and token-passing, are described in this section. Some of these pro-
tocols are used in LANs, but others have some historical value.
❑ The third section discusses channelization protocols. Three protocols, FDMA,
TDMA, and CDMA are described in this section. These protocols are used in cel-
lular telephony, which we discuss in Chapter 16.


12.1 RANDOM ACCESS


In random-access or contention methods, no station is superior to another station and
none is assigned control over another. At each instance, a station that has data to send
uses a procedure defined by the protocol to make a decision on whether or not to send.
This decision depends on the state of the medium (idle or busy). In other words, each
station can transmit when it desires on the condition that it follows the predefined pro-
cedure, including testing the state of the medium.
Two features give this method its name. First, there is no scheduled time for a
station to transmit. Transmission is random among the stations. That is why these
methods are called random access. Second, no rules specify which station should send
next. Stations compete with one another to access the medium. That is why these meth-
ods are also called contention methods.
In a random-access method, each station has the right to the medium without being
controlled by any other station. However, if more than one station tries to send, there is
an access conflict—collision—and the frames will be either destroyed or modified. To
avoid access conflict or to resolve it when it happens, each station follows a procedure
that answers the following questions:
❑ When can the station access the medium?
❑ What can the station do if the medium is busy?
❑ How can the station determine the success or failure of the transmission?
❑ What can the station do if there is an access conflict?
The random-access methods we study in this chapter have evolved from a very
interesting protocol known as ALOHA, which used a very simple procedure called mul-
tiple access (MA). The method was improved with the addition of a procedure that
forces the station to sense the medium before transmitting. This was called carrier
sense multiple access (CSMA). This method later evolved into two parallel methods:
carrier sense multiple access with collision detection (CSMA/CD), which tells the station
what to do when a collision is detected, and carrier sense multiple access with collision
avoidance (CSMA/CA), which tries to avoid the collision.
12.1.1 ALOHA
ALOHA, the earliest random access method, was developed at the University of Hawaii
in early 1970. It was designed for a radio (wireless) LAN, but it can be used on any
shared medium.
It is obvious that there are potential collisions in this arrangement. The medium is
shared between the stations. When a station sends data, another station may attempt to
do so at the same time. The data from the two stations collide and become garbled.
Pure ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple but elegant pro-
tocol. The idea is that each station sends a frame whenever it has a frame to send (mul-
tiple access). However, since there is only one channel to share, there is the possibility
of collision between frames from different stations. Figure 12.2 shows an example of
frame collisions in pure ALOHA.
Figure 12.2 Frames in a pure ALOHA network
[Timeline: four stations each send two frames at arbitrary times; overlapping transmissions produce two collision-duration intervals, and only one frame from station 1 and one from station 3 survive.]

There are four stations (unrealistic assumption) that contend with one another for
access to the shared channel. The figure shows that each station sends two frames; there
are a total of eight frames on the shared medium. Some of these frames collide because
multiple frames are in contention for the shared channel. Figure 12.2 shows that only
two frames survive: one frame from station 1 and one frame from station 3. We need to
mention that even if one bit of a frame coexists on the channel with one bit from
another frame, there is a collision and both will be destroyed. It is obvious that we need
to resend the frames that have been destroyed during transmission.
The pure ALOHA protocol relies on acknowledgments from the receiver. When a
station sends a frame, it expects the receiver to send an acknowledgment. If the
acknowledgment does not arrive after a time-out period, the station assumes that the
frame (or the acknowledgment) has been destroyed and resends the frame.
A collision involves two or more stations. If all these stations try to resend their
frames after the time-out, the frames will collide again. Pure ALOHA dictates that
when the time-out period passes, each station waits a random amount of time before
resending its frame. The randomness will help avoid more collisions. We call this time
the backoff time TB.
Pure ALOHA has a second method to prevent congesting the channel with retrans-
mitted frames. After a maximum number of retransmission attempts Kmax , a station
must give up and try later. Figure 12.3 shows the procedure for pure ALOHA based on
the above strategy.
The time-out period is equal to the maximum possible round-trip propagation delay,
which is twice the time required for a signal to propagate between the two most widely
separated stations (2 × Tp). The backoff time TB is a random value that normally depends
on K (the number of attempted unsuccessful transmissions). The formula for TB depends
on the implementation. One common formula is the binary exponential backoff. In this
method, for each retransmission, a multiplier R = 0 to 2^K − 1 is randomly chosen and
multiplied by Tp (maximum propagation time) or Tfr (the average time required to send out a
frame) to find TB. Note that in this procedure, the range of the random numbers increases
after each collision. The value of Kmax is usually chosen as 15.
Figure 12.3 Procedure for pure ALOHA protocol
[Flowchart: a station with a frame to send sets K = 0, sends the frame, and waits 2 × Tp for an ACK. If the ACK is received: success. If not, K = K + 1; if K > Kmax, abort; otherwise choose R, wait TB, and send again.
Legend: K = number of attempts; Tp = maximum propagation time; Tfr = average transmission time; TB = backoff time, R × Tp or R × Tfr; R = random number from 0 to 2^K − 1.]

Example 12.1
The stations on a wireless ALOHA network are a maximum of 600 km apart. If we assume that
signals propagate at 3 × 10^8 m/s, we find Tp = (600 × 10^3) / (3 × 10^8) = 2 ms. For K = 2, the range
of R is {0, 1, 2, 3}. This means that TB can be 0, 2, 4, or 6 ms, based on the outcome of the ran-
dom variable R.
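The numbers in this example can be reproduced with a short sketch. The function names are invented for the illustration, and the backoff variant used is TB = R × Tp with R drawn from 0 to 2^K − 1.

```python
def propagation_time_ms(distance_m: float, speed_mps: float = 3e8) -> float:
    """Tp in milliseconds between the two most widely separated stations."""
    return distance_m / speed_mps * 1000

def backoff_choices_ms(k: int, tp_ms: float) -> list:
    """All possible backoff times TB = R * Tp for attempt number k."""
    return [r * tp_ms for r in range(2 ** k)]

tp = propagation_time_ms(600e3)        # stations at most 600 km apart
choices = backoff_choices_ms(2, tp)    # K = 2
```

As in the example, Tp comes out to 2 ms and the possible backoff times for K = 2 are 0, 2, 4, and 6 ms.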
Vulnerable time
Let us find the vulnerable time, the length of time in which there is a possibility of colli-
sion. We assume that the stations send fixed-length frames with each frame taking Tfr sec-
onds to send. Figure 12.4 shows the vulnerable time for station B.

Figure 12.4 Vulnerable time for pure ALOHA protocol
[Timeline from t − Tfr to t + Tfr: A's end collides with B's beginning, and B's end collides with C's beginning; vulnerable time = 2 × Tfr.]

Station B starts to send a frame at time t. Now imagine station A has started to send
its frame after t − Tfr . This leads to a collision between the frames from station B and
station A. On the other hand, suppose that station C starts to send a frame before time
t + Tfr . Here, there is also a collision between frames from station B and station C.
Looking at Figure 12.4, we see that the vulnerable time during which a collision
may occur in pure ALOHA is 2 times the frame transmission time.
Pure ALOHA vulnerable time = 2 × Tfr

Example 12.2
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the
requirement to make this frame collision-free?

Solution
Average frame transmission time Tfr is 200 bits/200 kbps or 1 ms. The vulnerable time is 2 × 1 ms =
2 ms. This means no station should send later than 1 ms before this station starts transmission and
no station should start sending during the period (1 ms) that this station is sending.
Throughput
Let us call G the average number of frames generated by the system during one frame
transmission time. Then it can be proven that the average number of successfully trans-
mitted frames for pure ALOHA is S = G × e^−2G. The maximum throughput Smax is 0.184,
for G = 1/2. (We can find it by setting the derivative of S with respect to G to 0; see the
exercises.) In other words, if one-half a frame is generated during one frame transmission
time (one frame during two frame transmission times), then 18.4 percent of these frames
reach their destination successfully. We expect G = 1/2 to produce the maximum through-
put because the vulnerable time is 2 times the frame transmission time. Therefore, if a
station generates only one frame in this vulnerable time (and no other stations generate a
frame during this time), the frame will reach its destination successfully.

The throughput for pure ALOHA is S = G × e^−2G.

The maximum throughput Smax = 1/(2e) = 0.184 when G = 1/2.

Example 12.3
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the
throughput if the system (all stations together) produces
a. 1000 frames per second?
b. 500 frames per second?
c. 250 frames per second?

Solution
The frame transmission time is 200/200 kbps or 1 ms.
a. If the system creates 1000 frames per second, or 1 frame per millisecond, then G = 1. In
this case S = G × e^−2G = 0.135 (13.5 percent). This means that the throughput is 1000 ×
0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
b. If the system creates 500 frames per second, or 1/2 frames per millisecond, then G = 1/2.
In this case S = G × e−2G = 0.184 (18.4 percent). This means that the throughput is 500 ×
0.184 = 92 and that only 92 frames out of 500 will probably survive. Note that this is the
maximum throughput case, percentagewise.
c. If the system creates 250 frames per second, or 1/4 frames per millisecond, then G = 1/4.
In this case S = G × e−2G = 0.152 (15.2 percent). This means that the throughput is
250 × 0.152 = 38. Only 38 frames out of 250 will probably survive.
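As a numerical check, the following Python sketch reproduces the counts of Example 12.3, following the text's convention that the number of surviving frames is the offered rate multiplied by S. The function name is ours, invented for illustration.

```python
import math

def pure_aloha_surviving_frames(frames_per_sec, tfr_sec):
    """Frames expected to survive per second, following the text's
    convention: offered rate multiplied by S, where S = G * e^(-2G)."""
    g = frames_per_sec * tfr_sec        # frames generated per frame time
    s = g * math.exp(-2 * g)            # pure ALOHA throughput S
    return round(s * frames_per_sec)

tfr = 200 / 200_000                     # 200-bit frames at 200 kbps -> 1 ms
print([pure_aloha_surviving_frames(r, tfr) for r in (1000, 500, 250)])
# -> [135, 92, 38], matching parts a, b, and c
```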

Slotted ALOHA
Pure ALOHA has a vulnerable time of 2 × Tfr . This is so because there is no rule that
defines when the station can send. A station may send soon after another station has
started or just before another station has finished. Slotted ALOHA was invented to
improve the efficiency of pure ALOHA.
In slotted ALOHA we divide the time into slots of Tfr seconds and force the sta-
tion to send only at the beginning of the time slot. Figure 12.5 shows an example of
frame collisions in slotted ALOHA.

Figure 12.5 Frames in a slotted ALOHA network


Because a station is allowed to send only at the beginning of the synchronized time
slot, if a station misses this moment, it must wait until the beginning of the next time
slot. This means that the station which started at the beginning of this slot has already
finished sending its frame. Of course, there is still the possibility of collision if two
stations try to send at the beginning of the same time slot. However, the vulnerable time
is now reduced to one-half, equal to Tfr. Figure 12.6 shows the situation.

Figure 12.6 Vulnerable time for slotted ALOHA protocol


Slotted ALOHA vulnerable time = Tfr


Throughput
It can be proven that the average number of successful transmissions for slotted ALOHA is
S = G × e−G. The maximum throughput Smax is 0.368, when G = 1. In other words, if one
frame is generated during one frame transmission time, then 36.8 percent of these frames
reach their destination successfully. We expect G = 1 to produce maximum throughput
because the vulnerable time is equal to the frame transmission time. Therefore, if a station
generates only one frame in this vulnerable time (and no other station generates a frame
during this time), the frame will reach its destination successfully.

The throughput for slotted ALOHA is S = G × e−G.


The maximum throughput Smax = 0.368 when G = 1.

Example 12.4
A slotted ALOHA network transmits 200-bit frames using a shared channel with a 200-kbps
bandwidth. Find the throughput if the system (all stations together) produces
a. 1000 frames per second.
b. 500 frames per second.
c. 250 frames per second.

Solution
This situation is similar to the previous exercise except that the network is using slotted ALOHA
instead of pure ALOHA. The frame transmission time is 200/200 kbps or 1 ms.
a. In this case G is 1. So S = G × e−G = 0.368 (36.8 percent). This means that the throughput
is 1000 × 0.368 = 368 frames. Only 368 out of 1000 frames will probably survive. Note
that this is the maximum throughput case, percentagewise.
b. Here G is 1/2. In this case S = G × e−G = 0.303 (30.3 percent). This means that the
throughput is 500 × 0.303 ≈ 151. Only 151 frames out of 500 will probably survive.
c. Now G is 1/4. In this case S = G × e−G = 0.195 (19.5 percent). This means that the
throughput is 250 × 0.195 ≈ 49. Only 49 frames out of 250 will probably survive.
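The same check for slotted ALOHA: this sketch evaluates S = G × e−G for the three loads of Example 12.4 and confirms that the peak value, 0.368, occurs at G = 1 (the function name is ours).

```python
import math

def slotted_aloha_throughput(g):
    """Slotted ALOHA throughput: S = G * e^(-G)."""
    return g * math.exp(-g)

print([round(slotted_aloha_throughput(g), 3) for g in (1.0, 0.5, 0.25)])
# -> [0.368, 0.303, 0.195], the percentages used in parts a, b, and c
```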

12.1.2 CSMA
To minimize the chance of collision and, therefore, increase the performance, the
CSMA method was developed. The chance of collision can be reduced if a station
senses the medium before trying to use it. Carrier sense multiple access (CSMA)
requires that each station first listen to the medium (or check the state of the medium)
before sending. In other words, CSMA is based on the principle “sense before transmit”
or “listen before talk.”
CSMA can reduce the possibility of collision, but it cannot eliminate it. The reason
for this is shown in Figure 12.7, a space and time model of a CSMA network. Stations
are connected to a shared channel (usually a dedicated medium).
The possibility of collision still exists because of propagation delay; when a station
sends a frame, it still takes time (although very short) for the first bit to reach every station
and for every station to sense it. In other words, a station may sense the medium and find
it idle, only because the first bit sent by another station has not yet been received.

Figure 12.7 Space/time model of a collision in CSMA


At time t1, station B senses the medium and finds it idle, so it sends a frame. At
time t2 (t2 > t1), station C senses the medium and finds it idle because, at this time, the
first bits from station B have not reached station C. Station C also sends a frame. The
two signals collide and both frames are destroyed.
Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the time needed for
a signal to propagate from one end of the medium to the other. When a station sends a
frame and any other station tries to send a frame during this time, a collision will result.
But if the first bit of the frame reaches the end of the medium, every station will already
have heard the bit and will refrain from sending. Figure 12.8 shows the worst case. The
leftmost station, A, sends a frame at time t1, which reaches the rightmost station, D, at
time t1 + Tp. The gray area shows the vulnerable area in time and space.
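Because the vulnerable time equals the propagation time, it follows directly from the length of the medium. The propagation speed below (2 × 10^8 m/s, a common figure for copper cable) is an assumed value, not one given in the text.

```python
def csma_vulnerable_time(medium_length_m, propagation_speed_mps=2e8):
    """Vulnerable time for CSMA: the end-to-end propagation time Tp."""
    return medium_length_m / propagation_speed_mps

print(csma_vulnerable_time(2000))   # a 2-km bus -> 1e-05 s, i.e., 10 microseconds
```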

Figure 12.8 Vulnerable time in CSMA


Persistence Methods
What should a station do if the channel is busy? What should a station do if the channel
is idle? Three methods have been devised to answer these questions: the 1-persistent
method, the nonpersistent method, and the p-persistent method. Figure 12.9 shows
the behavior of three persistence methods when a station finds a channel busy.

Figure 12.9 Behavior of three persistence methods


Figure 12.10 shows the flow diagrams for these methods.


1-Persistent
The 1-persistent method is simple and straightforward. In this method, after the station
finds the line idle, it sends its frame immediately (with probability 1). This method has
the highest chance of collision because two or more stations may find the line idle and
send their frames immediately. We will see later that Ethernet uses this method.
Nonpersistent
In the nonpersistent method, a station that has a frame to send senses the line. If the line
is idle, it sends immediately. If the line is not idle, it waits a random amount of time and
then senses the line again. The nonpersistent approach reduces the chance of collision
because it is unlikely that two or more stations will wait the same amount of time and
retry to send simultaneously. However, this method reduces the efficiency of the net-
work because the medium remains idle when there may be stations with frames to send.
p-Persistent
The p-persistent method is used if the channel has time slots with a slot duration equal
to or greater than the maximum propagation time. The p-persistent approach combines
the advantages of the other two strategies. It reduces the chance of collision and
improves efficiency. In this method, after the station finds the line idle it follows these
steps:
1. With probability p, the station sends its frame.

Figure 12.10 Flow diagram for three persistence methods


2. With probability q = 1 − p, the station waits for the beginning of the next time slot
and checks the line again.
a. If the line is idle, it goes to step 1.
b. If the line is busy, it acts as though a collision has occurred and uses the back-
off procedure.
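These steps can be sketched as a short procedure. The callbacks channel_busy, wait_one_slot, transmit, and backoff are hypothetical stand-ins for the station's carrier-sense, timing, and transmission machinery.

```python
import random

def p_persistent_send(p, channel_busy, wait_one_slot, transmit, backoff):
    """Sketch of the p-persistent procedure, entered once the line is idle."""
    while True:
        if random.random() < p:     # step 1: send with probability p
            transmit()
            return
        wait_one_slot()             # step 2: with probability 1 - p, wait a slot
        if channel_busy():          # step 2b: busy -> act as if a collision occurred
            backoff()
            return
        # step 2a: still idle -> back to step 1
```

With p = 1 this degenerates to the 1-persistent method, which is one way to see how the two strategies relate.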

12.1.3 CSMA/CD
The CSMA method does not specify the procedure following a collision. Carrier sense
multiple access with collision detection (CSMA/CD) augments the algorithm to
handle the collision.
In this method, a station monitors the medium after it sends a frame to see if the
transmission was successful. If so, the station is finished. If, however, there is a colli-
sion, the frame is sent again.
To better understand CSMA/CD, let us look at the first bits transmitted by the two
stations involved in the collision. Although each station continues to send bits in the
frame until it detects the collision, we show what happens as the first bits collide. In
Figure 12.11, stations A and C are involved in the collision.

Figure 12.11 Collision of the first bits in CSMA/CD


At time t1, station A has executed its persistence procedure and starts sending
the bits of its frame. At time t2, station C has not yet sensed the first bit sent by
A. Station C executes its persistence procedure and starts sending the bits in its
frame, which propagate both to the left and to the right. The collision occurs some-
time after time t2. Station C detects a collision at time t3 when it receives the first
bit of A’s frame. Station C immediately (or after a short time, but we assume imme-
diately) aborts transmission. Station A detects collision at time t4 when it receives
the first bit of C’s frame; it also immediately aborts transmission. Looking at the
figure, we see that A transmits for the duration t4 − t1; C transmits for the duration
t 3 − t2.
Now that we know the time durations for the two transmissions, we can show a
more complete graph in Figure 12.12.

Figure 12.12 Collision and abortion in CSMA/CD


Minimum Frame Size


For CSMA/CD to work, we need a restriction on the frame size. Before sending the last
bit of the frame, the sending station must detect a collision, if any, and abort the transmis-
sion. This is so because the station, once the entire frame is sent, does not keep a copy of
the frame and does not monitor the line for collision detection. Therefore, the frame trans-
mission time Tfr must be at least two times the maximum propagation time Tp. To under-
stand the reason, let us think about the worst-case scenario. If the two stations involved in
a collision are the maximum distance apart, the signal from the first takes time Tp to reach
the second, and the effect of the collision takes another time Tp to reach the first. So the
requirement is that the first station must still be transmitting after 2Tp.

Example 12.5
A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation time
(including the delays in the devices and ignoring the time needed to send a jamming signal, as we
see later) is 25.6 μs, what is the minimum size of the frame?

Solution
The minimum frame transmission time is Tfr = 2 × Tp = 51.2 μs. This means, in the worst case, a
station needs to transmit for a period of 51.2 μs to detect the collision. The minimum size of the
frame is 10 Mbps × 51.2 μs = 512 bits or 64 bytes. This is actually the minimum size of the frame
for Standard Ethernet, as we will see later in the chapter.
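The computation in Example 12.5 generalizes to a one-liner (the function name is ours):

```python
def min_frame_size_bits(bandwidth_bps, max_prop_time_s):
    """A frame must last at least 2 * Tp, so its minimum length in bits
    is the bandwidth multiplied by twice the maximum propagation time."""
    return round(bandwidth_bps * 2 * max_prop_time_s)

bits = min_frame_size_bits(10_000_000, 25.6e-6)
print(bits, bits // 8)   # -> 512 64, the Standard Ethernet minimum
```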

Procedure
Now let us look at the flow diagram for CSMA/CD in Figure 12.13. It is similar to the
one for the ALOHA protocol, but there are differences.

Figure 12.13 Flow diagram for the CSMA/CD

Legend: Tfr: average frame transmission time; K: number of attempts; R: random number from 0 to 2^K − 1; TB: backoff time = R × Tfr.
The diagram's flow: the station applies one of the persistence methods, then transmits while listening. If the frame finishes without a collision, the attempt succeeds; if a collision is detected, the station sends a jamming signal, increments K, and, as long as K < 15, waits TB seconds and tries again; otherwise it aborts.

The first difference is the addition of the persistence process. We need to sense the
channel before we start sending the frame by using one of the persistence processes we
discussed previously (nonpersistent, 1-persistent, or p-persistent). The corresponding
box can be replaced by one of the persistence processes shown in Figure 12.10.

The second difference is the frame transmission. In ALOHA, we first transmit
the entire frame and then wait for an acknowledgment. In CSMA/CD, transmission
and collision detection are continuous processes. We do not send the entire frame and
then look for a collision. The station transmits and receives continuously and simulta-
neously (using two different ports or a bidirectional port). We use a loop to show that
transmission is a continuous process. We constantly monitor in order to detect one of
two conditions: either transmission is finished or a collision is detected. Either event
stops transmission. When we come out of the loop, if a collision has not been
detected, it means that transmission is complete; the entire frame is transmitted.
Otherwise, a collision has occurred.
The third difference is the sending of a short jamming signal to make sure that all
other stations become aware of the collision.
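The flow just described can be sketched as a loop. The callbacks are hypothetical hooks for the persistence process, the transmit-while-listening hardware, and the jamming signal; the limit of 15 attempts comes from the legend of Figure 12.13.

```python
import random

def csma_cd_send(transmit_and_detect, persistence, jam, tfr, wait,
                 max_attempts=15):
    """Sketch of the CSMA/CD flow; transmit_and_detect() sends while
    listening and returns True if a collision was detected."""
    k = 0
    while k < max_attempts:
        persistence()                       # sense before transmit
        if not transmit_and_detect():       # whole frame sent, no collision
            return True                     # success
        jam()                               # make the collision known to all
        k += 1
        r = random.randint(0, 2 ** k - 1)   # binary exponential backoff
        wait(r * tfr)                       # TB = R * Tfr
    return False                            # abort

# A channel that never collides succeeds on the first attempt:
print(csma_cd_send(lambda: False, lambda: None, lambda: None, 0.001,
                   lambda t: None))         # -> True
```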
Energy Level
We can say that the level of energy in a channel can have three values: zero, normal,
and abnormal. At the zero level, the channel is idle. At the normal level, a station has
successfully captured the channel and is sending its frame. At the abnormal level, there
is a collision and the level of the energy is twice the normal level. A station that has a
frame to send or is sending a frame needs to monitor the energy level to determine if the
channel is idle, busy, or in collision mode. Figure 12.14 shows the situation.

Figure 12.14 Energy level during transmission, idleness, or collision


Throughput
The throughput of CSMA/CD is greater than that of pure or slotted ALOHA. The max-
imum throughput occurs at a different value of G and is based on the persistence
method and the value of p in the p-persistent approach. For the 1-persistent method, the
maximum throughput is around 50 percent when G = 1. For the nonpersistent method,
the maximum throughput can go up to 90 percent when G is between 3 and 8.
Traditional Ethernet
One of the LAN protocols that used CSMA/CD is the traditional Ethernet with the data
rate of 10 Mbps. We discuss the Ethernet LANs in Chapter 13, but it is good to know
that the traditional Ethernet was a broadcast LAN that used the 1-persistence method to
control access to the common media. Later versions of Ethernet try to move from
CSMA/CD access methods for the reason that we discuss in Chapter 13.

12.1.4 CSMA/CA
Carrier sense multiple access with collision avoidance (CSMA/CA) was invented
for wireless networks. Collisions are avoided through the use of CSMA/CA’s three
strategies: the interframe space, the contention window, and acknowledgments, as
shown in Figure 12.15. We discuss RTS and CTS frames later.

Figure 12.15 Flow diagram of CSMA/CA

Legend: K: number of attempts; TB: backoff time; IFS: interframe space; RTS: request to send; CTS: clear to send.
The diagram's flow: after sensing the channel idle, the station waits IFS, chooses a random slot R between 0 and 2^K − 1 in the contention window, sends an RTS, and sets a timer. If no CTS arrives before the time-out, it waits TB seconds, increments K, and retries until K reaches its limit, at which point it aborts. Once a CTS arrives, the station waits IFS, sends the frame, sets a timer, and declares success if an ACK arrives before the time-out.

❑ Interframe Space (IFS). First, collisions are avoided by deferring transmission even
if the channel is found idle. When an idle channel is found, the station does not send
immediately. It waits for a period of time called the interframe space or IFS. Even
though the channel may appear idle when it is sensed, a distant station may have
already started transmitting. The distant station’s signal has not yet reached this
station. The IFS time allows the front of the transmitted signal by the distant station to
reach this station. After waiting an IFS time, if the channel is still idle, the station can
send, but it still needs to wait a time equal to the contention window (described next).
The IFS variable can also be used to prioritize stations or frame types. For example, a
station that is assigned a shorter IFS has a higher priority.
❑ Contention Window. The contention window is an amount of time divided into
slots. A station that is ready to send chooses a random number of slots as its wait
time. The number of slots in the window changes according to the binary exponen-
tial backoff strategy. This means that it is set to one slot the first time and then dou-
bles each time the station cannot detect an idle channel after the IFS time. This is
very similar to the p-persistent method except that a random outcome defines the
number of slots taken by the waiting station. One interesting point about the con-
tention window is that the station needs to sense the channel after each time slot.
However, if the station finds the channel busy, it does not restart the process; it just
stops the timer and restarts it when the channel is sensed as idle. This gives priority
to the station with the longest waiting time. See Figure 12.16.

Figure 12.16 Contention window


❑ Acknowledgment. With all these precautions, there still may be a collision resulting
in destroyed data. In addition, the data may be corrupted during the transmission.
The positive acknowledgment and the time-out timer can help guarantee that the
receiver has received the frame.
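The doubling of the contention window can be expressed directly. The cap of 1024 slots below is an assumed implementation limit, not a number taken from the text.

```python
def contention_window_slots(k, cw_max=1024):
    """Window size after k failed tries: one slot at first, then doubling
    (binary exponential backoff), up to an assumed cap."""
    return min(2 ** k, cw_max)

print([contention_window_slots(k) for k in range(6)])   # -> [1, 2, 4, 8, 16, 32]
```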
Frame Exchange Time Line
Figure 12.17 shows the exchange of data and control frames in time.
1. Before sending a frame, the source station senses the medium by checking the
energy level at the carrier frequency.
a. The channel uses a persistence strategy with backoff until the channel is idle.
b. After the channel is found to be idle, the station waits for a period of time called
the DCF interframe space (DIFS); then the station sends a control frame called
the request to send (RTS).
2. After receiving the RTS and waiting a period of time called the short interframe
space (SIFS), the destination station sends a control frame, called the clear to
send (CTS), to the source station. This control frame indicates that the destination
station is ready to receive data.

Figure 12.17 CSMA/CA and NAV


3. The source station sends data after waiting an amount of time equal to SIFS.
4. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. Acknowledgment is
needed in this protocol because the station does not have any means to check for
the successful arrival of its data at the destination. On the other hand, the lack of
collision in CSMA/CD is a kind of indication to the source that data have
arrived.
Network Allocation Vector
How do other stations defer sending their data if one station acquires access? In other
words, how is the collision avoidance aspect of this protocol accomplished? The key is
a feature called NAV.
When a station sends an RTS frame, it includes the duration of time that it needs to
occupy the channel. The stations that are affected by this transmission create a timer
called a network allocation vector (NAV) that shows how much time must pass before
these stations are allowed to check the channel for idleness. Each time a station
accesses the system and sends an RTS frame, other stations start their NAV. In other
words, each station, before sensing the physical medium to see if it is idle, first checks
its NAV to see if it has expired. Figure 12.17 shows the idea of NAV.
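The NAV amounts to a timer consulted before carrier sensing. A minimal sketch, with the class and method names invented for illustration:

```python
class NavStation:
    """A bystander station keeping a network allocation vector (NAV)."""

    def __init__(self):
        self.nav_expires_at = 0.0          # time before which we stay silent

    def hear_rts(self, now, duration):
        # The RTS (or CTS) carries how long the channel will be occupied.
        self.nav_expires_at = max(self.nav_expires_at, now + duration)

    def may_sense_channel(self, now):
        # Check the NAV for expiry before physically sensing the medium.
        return now >= self.nav_expires_at

s = NavStation()
s.hear_rts(now=0.0, duration=2.5)
print(s.may_sense_channel(1.0), s.may_sense_channel(3.0))   # -> False True
```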
Collision During Handshaking
What happens if there is a collision during the time when RTS or CTS control frames
are in transition, often called the handshaking period? Two or more stations may try to
send RTS frames at the same time. These control frames may collide. However,
because there is no mechanism for collision detection, the sender assumes there has
been a collision if it has not received a CTS frame from the receiver. The backoff strat-
egy is employed, and the sender tries again.

Hidden-Station Problem
The solution to the hidden station problem is the use of the handshake frames (RTS and
CTS). Figure 12.17 also shows that the RTS message from B reaches A, but not C.
However, because both B and C are within the range of A, the CTS message, which
contains the duration of data transmission from B to A, reaches C. Station C knows that
some hidden station is using the channel and refrains from transmitting until that dura-
tion is over.
CSMA/CA and Wireless Networks
CSMA/CA was mostly intended for use in wireless networks. The procedure described
above, however, is not sophisticated enough to handle some particular issues related to
wireless networks, such as hidden terminals or exposed terminals. We will see how
these issues are solved by augmenting the above protocol with handshaking features.
The use of CSMA/CA in wireless networks will be discussed in Chapter 15.

12.2 CONTROLLED ACCESS


In controlled access, the stations consult one another to find which station has the right
to send. A station cannot send unless it has been authorized by other stations. We dis-
cuss three controlled-access methods.

12.2.1 Reservation
In the reservation method, a station needs to make a reservation before sending data.
Time is divided into intervals. In each interval, a reservation frame precedes the data
frames sent in that interval.
If there are N stations in the system, there are exactly N reservation minislots in the
reservation frame. Each minislot belongs to a station. When a station needs to send a
data frame, it makes a reservation in its own minislot. The stations that have made res-
ervations can send their data frames after the reservation frame.
Figure 12.18 shows a situation with five stations and a five-minislot reservation
frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the sec-
ond interval, only station 1 has made a reservation.

Figure 12.18 Reservation access method

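The reservation frame of Figure 12.18 is essentially a bitmap with one minislot per station. A minimal sketch (function name ours, stations numbered from 1 as in the figure):

```python
def build_reservation_frame(n_stations, ready_stations):
    """One minislot per station; each ready station sets its own slot."""
    return [1 if s in ready_stations else 0 for s in range(1, n_stations + 1)]

# First interval of Figure 12.18: stations 1, 3, and 4 reserve.
frame = build_reservation_frame(5, {1, 3, 4})
print(frame)                                             # -> [1, 0, 1, 1, 0]
order = [s for s, bit in enumerate(frame, start=1) if bit]
print(order)            # data frames then follow in this order -> [1, 3, 4]
```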
CHAPTER 13

Wired LANs: Ethernet

A fter discussing the general issues related to the data-link layer in Chapters 9 to 12,
it is time in this chapter to discuss the wired LANs. Although over a few decades
many wired LAN protocols existed, only the Ethernet technology survives today. This
is the reason that we discuss only this technology and its evolution in this chapter.
This chapter is divided into five sections.
❑ The first section discusses the Ethernet protocol in general. It explains that IEEE
Project 802 defines the LLC and MAC sublayers for all LANs including Ethernet.
The section also lists the four generations of Ethernet.
❑ The second section discusses the Standard Ethernet. Although this generation is
rarely seen in practice, most of the characteristics have been inherited by the fol-
lowing three generations. The section first describes some characteristics of the
Standard Ethernet. It then discusses the addressing mechanism, which is the same
in all Ethernet generations. The section next discusses the access method, CSMA/
CD, which we discussed in Chapter 12. The section then reviews the efficiency of
the Standard Ethernet. It then shows the encoding and the implementation of this
generation. Before closing the section, the changes in this generation that resulted
in the move to the next generation are listed.
❑ The third section describes the Fast Ethernet, the second generation, which can still
be seen in many places. The section first describes the changes in the MAC sub-
layer. The section then discusses the physical layer and the implementation of this
generation.
❑ The fourth section discusses the Gigabit Ethernet, with the rate of 1 gigabit per
second. The section first describes the MAC sublayer. It then moves to the physical
layer and implementation.
❑ The fifth section touches on the 10 Gigabit Ethernet. This is a new technology that
can be used both for a backbone LAN or as a MAN (metropolitan area network).
The section briefly describes the rationale and the implementation.


13.1 ETHERNET PROTOCOL


In Chapter 1, we mentioned that the TCP/IP protocol suite does not define any protocol
for the data-link or the physical layer. In other words, TCP/IP accepts any protocol at
these two layers that can provide services to the network layer. The data-link layer and
the physical layer are actually the territory of the local and wide area networks. This
means that when we discuss these two layers, we are talking about networks that are
using them. As we see in this and the following two chapters, we can have wired or
wireless networks. We discuss wired networks in this chapter and the next and post-
pone the discussion of wireless networks to Chapter 15.
In Chapter 1, we learned that a local area network (LAN) is a computer network
that is designed for a limited geographic area such as a building or a campus. Although
a LAN can be used as an isolated network to connect computers in an organization for
the sole purpose of sharing resources, most LANs today are also linked to a wide area
network (WAN) or the Internet.
In the 1980s and 1990s several different types of LANs were used. All of these
LANs used a media-access method to solve the problem of sharing the media. The
Ethernet used the CSMA/CD approach. The Token Ring, Token Bus, and FDDI (Fiber
Distribution Data Interface) used the token-passing approach. During this period,
another LAN technology, ATM LAN, which deployed the high speed WAN technology
(ATM), appeared in the market.
Almost every LAN except Ethernet has disappeared from the marketplace because
Ethernet was able to update itself to meet the needs of the time. Several reasons for this
success have been mentioned in the literature, but we believe that the Ethernet protocol
was designed so that it could evolve with the demand for higher transmission rates. It is
natural that an organization that has used an Ethernet LAN in the past and now needs a
higher data rate would update to the new generation instead of switching to another
technology, which might cost more. This means that we confine our discussion of
wired LANs to the discussion of Ethernet.

13.1.1 IEEE Project 802


Before we discuss the Ethernet protocol and all its generations, we need to briefly discuss
the IEEE standard that we often encounter in text or real life. In 1985, the Computer Soci-
ety of the IEEE started a project, called Project 802, to set standards to enable intercom-
munication among equipment from a variety of manufacturers. Project 802 does not seek
to replace any part of the OSI model or TCP/IP protocol suite. Instead, it is a way of speci-
fying functions of the physical layer and the data-link layer of major LAN protocols.
The relationship of the 802 Standard to the TCP/IP protocol suite is shown in
Figure 13.1. The IEEE has subdivided the data-link layer into two sublayers: logical
link control (LLC) and media access control (MAC). IEEE has also created several
physical-layer standards for different LAN protocols.
Logical Link Control (LLC)
Earlier we discussed data link control. We said that data link control handles framing,
flow control, and error control. In IEEE Project 802, flow control, error control, and

Figure 13.1 IEEE standard for LANs


part of the framing duties are collected into one sublayer called the logical link control
(LLC). Framing is handled in both the LLC sublayer and the MAC sublayer.
The LLC provides a single link-layer control protocol for all IEEE LANs. This
means LLC protocol can provide interconnectivity between different LANs because it
makes the MAC sublayer transparent.
Media Access Control (MAC)
Earlier we discussed multiple access methods including random access, controlled
access, and channelization. IEEE Project 802 has created a sublayer called media
access control that defines the specific access method for each LAN. For example, it
defines CSMA/CD as the media access method for Ethernet LANs and defines the
token-passing method for Token Ring and Token Bus LANs. As we mentioned in the
previous section, part of the framing function is also handled by the MAC layer.

13.1.2 Ethernet Evolution


The Ethernet LAN was developed in the 1970s by Robert Metcalfe and David Boggs.
Since then, it has gone through four generations: Standard Ethernet (10 Mbps), Fast
Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and 10 Gigabit Ethernet
(10 Gbps), as shown in Figure 13.2. We briefly discuss all these generations.

Figure 13.2 Ethernet evolution through four generations


13.2 STANDARD ETHERNET


We refer to the original Ethernet technology with the data rate of 10 Mbps as the Stan-
dard Ethernet. Although most implementations have moved to other technologies in
the Ethernet evolution, there are some features of the Standard Ethernet that have not
changed during the evolution. We discuss this standard version to pave the way for
understanding the other three technologies.

13.2.1 Characteristics
Let us first discuss some characteristics of the Standard Ethernet.
Connectionless and Unreliable Service
Ethernet provides a connectionless service, which means each frame sent is independent
of the previous or next frame. Ethernet has no connection establishment or connection
termination phases. The sender sends a frame whenever it has it; the receiver may or may
not be ready for it. The sender may overwhelm the receiver with frames, which may result
in dropping frames. If a frame drops, the sender will not know about it. Since IP, which is
using the service of Ethernet, is also connectionless, it will not know about it either. If the
transport layer is also a connectionless protocol, such as UDP, the frame is lost and
salvation may only come from the application layer. However, if the transport layer is
TCP, the sender TCP does not receive acknowledgment for its segment and sends it again.
Ethernet is also unreliable, like IP and UDP. If a frame is corrupted during transmission,
the receiver will detect the corruption with high probability (because of the CRC-32) and
drop the frame silently. It is the duty of higher-level protocols to find out about it.
Frame Format
The Ethernet frame contains seven fields, as shown in Figure 13.3.

Figure 13.3 Ethernet frame

Preamble (7 bytes) | SFD (1 byte) | Destination address (6 bytes) | Source address (6 bytes) | Type (2 bytes) | Data and padding (46-1500 bytes) | CRC (4 bytes)

The preamble (56 bits of alternating 1s and 0s) and the SFD (start frame delimiter, flag 10101011) form a physical-layer header. Minimum frame length: 512 bits or 64 bytes; maximum frame length: 12,144 bits or 1518 bytes.

❑ Preamble. This field contains 7 bytes (56 bits) of alternating 0s and 1s that alert the
receiving system to the coming frame and enable it to synchronize its clock if it’s out
of synchronization. The pattern provides only an alert and a timing pulse. The 56-bit
pattern allows the stations to miss some bits at the beginning of the frame. The pream-
ble is actually added at the physical layer and is not (formally) part of the frame.
❑ Start frame delimiter (SFD). This field (1 byte: 10101011) signals the beginning
of the frame. The SFD warns the station or stations that this is the last chance for
synchronization. The last 2 bits are (11)2 and alert the receiver that the next field is
the destination address. This field is actually a flag that defines the beginning of
the frame. We need to remember that an Ethernet frame is a variable-length frame.
It needs a flag to define the beginning of the frame. The SFD field is also added at
the physical layer.
❑ Destination address (DA). This field is six bytes (48 bits) and contains the link-
layer address of the destination station or stations to receive the packet. We will
discuss addressing shortly. When the receiver sees its own link-layer address, or a
multicast address for a group that the receiver is a member of, or a broadcast
address, it decapsulates the data from the frame and passes the data to the upper-
layer protocol defined by the value of the type field.
❑ Source address (SA). This field is also six bytes and contains the link-layer address
of the sender of the packet. We will discuss addressing shortly.
❑ Type. This field defines the upper-layer protocol whose packet is encapsulated in
the frame. This protocol can be IP, ARP, OSPF, and so on. In other words, it serves
the same purpose as the protocol field in a datagram and the port number in a seg-
ment or user datagram. It is used for multiplexing and demultiplexing.
❑ Data. This field carries data encapsulated from the upper-layer protocols. It is a
minimum of 46 and a maximum of 1500 bytes. We discuss the reason for these
minimum and maximum values shortly. If the data coming from the upper layer is
more than 1500 bytes, it should be fragmented and encapsulated in more than one
frame. If it is less than 46 bytes, it needs to be padded with extra 0s. A padded
data frame is delivered to the upper-layer protocol as it is (without removing the
padding), which means that it is the responsibility of the upper layer to remove
or, in the case of the sender, to add the padding. The upper-layer protocol needs
to know the length of its data. For example, a datagram has a field that defines the
length of the data.
❑ CRC. The last field contains error-detection information, in this case a CRC-32. The
CRC is calculated over the address, type, and data fields. If the receiver recalculates
the CRC and finds a nonzero remainder (corruption in transmission), it discards the frame.
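As a sketch, the field layout above can be rendered in Python. This is an illustration only, not how a real NIC builds frames (framing and the FCS are computed in hardware); the function name `build_frame` is invented here, and `zlib.crc32` is used because it implements the same CRC-32 polynomial as the Ethernet FCS.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, eth_type: int, payload: bytes) -> bytes:
    """Build the DA-through-CRC portion of a frame (no preamble or SFD)."""
    if len(payload) > 1500:
        raise ValueError("payload exceeds 1500 bytes; fragment it first")
    if len(payload) < 46:                       # pad short payloads with 0s
        payload = payload + b"\x00" * (46 - len(payload))
    body = dst + src + struct.pack("!H", eth_type) + payload
    fcs = zlib.crc32(body)                      # same CRC-32 polynomial as the FCS
    return body + struct.pack("<I", fcs)        # FCS sent least-significant byte first

frame = build_frame(b"\x4a\x30\x10\x21\x10\x1a",   # destination address
                    b"\x47\x20\x1b\x2e\x08\xee",   # source address
                    0x0800,                        # type: IPv4
                    b"hello")
print(len(frame))  # 64: the minimum frame (18-byte header/trailer + 46-byte padded data)
```

Note that the 5-byte payload is padded to 46 bytes, which is exactly the minimum-frame case discussed next.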
Frame Length
Ethernet has imposed restrictions on both the minimum and maximum lengths of a frame.
The minimum length restriction is required for the correct operation of CSMA/CD, as
we will see shortly. An Ethernet frame needs to have a minimum length of 512 bits or
64 bytes. Part of this length is the header and the trailer. If we count 18 bytes of header
and trailer (6 bytes of source address, 6 bytes of destination address, 2 bytes of length
or type, and 4 bytes of CRC), then the minimum length of data from the upper layer is
64 − 18 = 46 bytes. If the upper-layer packet is less than 46 bytes, padding is added to
make up the difference.

The standard defines the maximum length of a frame (without preamble and SFD
field) as 1518 bytes. If we subtract the 18 bytes of header and trailer, the maximum
length of the payload is 1500 bytes. The maximum length restriction has two historical
reasons. First, memory was very expensive when Ethernet was designed; a maximum
length restriction helped to reduce the size of the buffer. Second, the maximum length
restriction prevents one station from monopolizing the shared medium, blocking other
stations that have data to send.

Minimum frame length: 64 bytes        Minimum data length: 46 bytes
Maximum frame length: 1518 bytes      Maximum data length: 1500 bytes

13.2.2 Addressing
Each station on an Ethernet network (such as a PC, workstation, or printer) has its own
network interface card (NIC). The NIC fits inside the station and provides the station
with a link-layer address. The Ethernet address is 6 bytes (48 bits), normally written in
hexadecimal notation, with a colon between the bytes. For example, the following
shows an Ethernet MAC address:
4A:30:10:21:10:1A

Transmission of Address Bits


The way the addresses are sent out on the line is different from the way they are written in
hexadecimal notation. The transmission is left to right, byte by byte; however, for each
byte, the least significant bit is sent first and the most significant bit is sent last. This
means that the bit that defines an address as unicast or multicast arrives first at the
receiver. This helps the receiver immediately know whether the packet is unicast or multicast.

Example 13.1
Show how the address 47:20:1B:2E:08:EE is sent out on the line.

Solution
The address is sent left to right, byte by byte; for each byte, it is sent right to left, bit by bit, as
shown below:
Hexadecimal 47 20 1B 2E 08 EE
Binary 01000111 00100000 00011011 00101110 00001000 11101110
Transmitted ← 11100010 00000100 11011000 01110100 00010000 01110111
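The byte-by-byte, LSB-first rule in Example 13.1 can be reproduced with a few lines of Python (`transmission_order` is a made-up helper name for this sketch):

```python
def transmission_order(addr: str) -> str:
    """Bytes go out left to right, but within each byte the least
    significant bit is transmitted first."""
    reversed_bytes = []
    for part in addr.split(":"):
        bits = format(int(part, 16), "08b")   # one byte as 8 binary digits
        reversed_bytes.append(bits[::-1])     # reverse the bit order
    return " ".join(reversed_bytes)

print(transmission_order("47:20:1B:2E:08:EE"))
# 11100010 00000100 11011000 01110100 00010000 01110111
```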

Unicast, Multicast, and Broadcast Addresses


A source address is always a unicast address—the frame comes from only one station.
The destination address, however, can be unicast, multicast, or broadcast. Figure 13.4
shows how to distinguish a unicast address from a multicast address. If the least signif-
icant bit of the first byte in a destination address is 0, the address is unicast; otherwise,
it is multicast.
Note that with the way the bits are transmitted, the unicast/multicast bit is the first
bit which is transmitted or received. The broadcast address is a special case of the

Figure 13.4 Unicast and multicast addresses

The least significant bit of byte 1 of the 6-byte address is 0 for unicast, 1 for multicast.

multicast address: the recipients are all the stations on the LAN. A broadcast destina-
tion address is forty-eight 1s.

Example 13.2
Define the type of the following destination addresses:
a. 4A:30:10:21:10:1A
b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF

Solution
To find the type of the address, we need to look at the second hexadecimal digit from the left. If it
is even, the address is unicast. If it is odd, the address is multicast. If all digits are Fs, the address
is broadcast. Therefore, we have the following:
a. This is a unicast address because A in binary is 1010 (even).
b. This is a multicast address because 7 in binary is 0111 (odd).
c. This is a broadcast address because all digits are Fs in hexadecimal.
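The same second-hex-digit test can be sketched in Python by checking the least significant bit of the first byte directly (`address_type` is an invented helper name):

```python
def address_type(addr: str) -> str:
    """Classify a destination address by its first transmitted bit
    (the least significant bit of the first byte)."""
    if addr.upper() == "FF:FF:FF:FF:FF:FF":
        return "broadcast"                      # all 48 bits are 1s
    first_byte = int(addr.split(":")[0], 16)
    return "multicast" if first_byte & 1 else "unicast"

print(address_type("4A:30:10:21:10:1A"))  # unicast
print(address_type("47:20:1B:2E:08:EE"))  # multicast
print(address_type("FF:FF:FF:FF:FF:FF"))  # broadcast
```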
Distinguish Between Unicast, Multicast, and Broadcast Transmission
Standard Ethernet uses a coaxial cable (bus topology) or a set of twisted-pair cables
with a hub (star topology) as shown in Figure 13.5.
We need to know that transmission in the standard Ethernet is always broadcast,
whether the intention is unicast, multicast, or broadcast. In the bus topology, when
station A sends a frame to station B, all stations will receive it. In the star topology, when
station A sends a frame to station B, the hub will receive it. Since the hub is a passive
element, it does not check the destination address of the frame; it regenerates the bits (if
they have been weakened) and sends them to all stations except station A. In fact, it
floods the network with the frame.
The question is, then, how the actual unicast, multicast, and broadcast transmis-
sions are distinguished from each other. The answer is in the way the frames are kept or
dropped.
❑ In a unicast transmission, all stations will receive the frame, but only the intended
recipient keeps and handles the frame; the rest discard it.
❑ In a multicast transmission, all stations will receive the frame, but only the stations
that are members of the group keep and handle it; the rest discard it.

Figure 13.5 Implementation of standard Ethernet

a. A LAN with a bus topology: stations A through H attach through cable taps to a coaxial cable terminated at both ends.
b. A LAN with a star topology: stations A through H are connected to a hub by twisted-pair cables.

❑ In a broadcast transmission, all stations (except the sender) will receive the frame
and all stations (except the sender) keep and handle it.

13.2.3 Access Method


Since the network that uses the standard Ethernet protocol is a broadcast network,
we need an access method to control access to the shared medium. The stan-
dard Ethernet chose CSMA/CD with the 1-persistent method, discussed earlier in
Chapter 12, Section 1.3. Let us use a scenario to see how this method works for the
Ethernet protocol.
❑ Assume station A in Figure 13.5 has a frame to send to station D. Station A first
should check whether any other station is sending (carrier sense). Station A mea-
sures the level of energy on the medium (for a short period of time, normally less
than 100 μs). If there is no signal energy on the medium, it means that no station is
sending (or the signal has not reached station A). Station A interprets this situation
as idle medium. It starts sending its frame. On the other hand, if the signal energy
level is not zero, it means that the medium is being used by another station. Station A
continuously monitors the medium until it becomes idle for 100 μs. It then starts
sending the frame. However, station A needs to keep a copy of the frame in its buffer
until it is sure that there is no collision. How station A can be sure of this is the
subject we discuss next.
❑ The medium sensing does not stop after station A has started sending the frame.
Station A needs to send and receive continuously. Two cases may occur:

a. If station A has sent 512 bits and no collision is sensed (the energy level did not
go above the regular level), the station is then sure that the frame will go
through and stops sensing the medium. Where does the number 512 bits come
from? If we consider the transmission rate of the Ethernet as 10 Mbps, it takes
the station 512/(10 Mbps) = 51.2 μs to send out 512 bits. With a propagation
speed of 2 × 10^8 m/s in a cable, the first bit could have gone 10,240 meters
(one way) or only 5120 meters (round trip), have collided with a bit from the
last station on the cable, and have come back. In other words, if a collision
were to occur, it should occur by the time the sender has
sent out 512 bits (worst case) and the first bit has made a round trip of 5120
meters. We should know that if the collision happens in the middle of the cable,
not at the end, station A hears the collision earlier and aborts the transmission.
We also need to mention another issue. The above assumption is that the length
of the cable is 5120 meters. The designer of the standard Ethernet actually put a
restriction of 2500 meters because we need to consider the delays encountered
throughout the journey. It means that they considered the worst case. The whole
idea is that if station A does not sense the collision before sending 512 bits,
there must have been no collision, because during this time, the first bit has
reached the end of the line and all other stations know that a station is sending
and refrain from sending. In other words, the problem occurs when another sta-
tion (for example, the last station) starts sending before the first bit of station A
has reached it. The other station mistakenly thinks that the line is free because
the first bit has not yet reached it. The reader should notice that the restriction of
512 bits actually helps the sending station: The sending station is certain that no
collision will occur if it is not heard during the first 512 bits, so it can discard
the copy of the frame in its buffer.
b. Station A has sensed a collision before sending 512 bits. This means that one of
the previous bits has collided with a bit sent by another station. In this case both
stations should refrain from sending and keep the frame in their buffer for
resending when the line becomes available. However, to inform other stations
that there is a collision in the network, the station sends a 48-bit jam signal. The
jam signal is to create enough signal (even if the collision happens after a few
bits) to alert other stations about the collision. After sending the jam signal, each
station increments the value of K (the number of attempts). If, after incrementing,
K = 15, the network is considered too busy; the station then aborts its effort and
tries again later. If K < 15, the station can wait a backoff
time (TB in Figure 12.13) and restart the process. As Figure 12.13 shows, the
station creates a random number between 0 and 2^K − 1, which means that each time
a collision occurs, the range of the random number increases exponentially.
After the first collision (K = 1) the random number is in the range (0, 1). After
the second collision (K = 2) it is in the range (0, 1, 2, 3). After the third collision
(K = 3) it is in the range (0, 1, 2, 3, 4, 5, 6, 7). So after each collision, the proba-
bility increases that the backoff time becomes longer. This is due to the fact that
if the collision happens even after the third or fourth attempt, it means that the
network is really busy; a longer backoff time is needed.
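The backoff procedure in step b can be sketched as follows. This is a simplified model of the description above; the actual IEEE 802.3 standard also caps the random range after the tenth collision, which is omitted here:

```python
import random

def backoff_slots(k: int) -> int:
    """Truncated binary exponential backoff: after the k-th collision,
    wait a random number of slot times drawn from 0 .. 2^k - 1
    (one slot time = 512 bit times = 51.2 microseconds at 10 Mbps)."""
    if k >= 15:
        raise RuntimeError("network too busy: abort this attempt")
    return random.randint(0, 2 ** k - 1)

for k in (1, 2, 3):                 # the range doubles after each collision
    print(f"K={k}: wait between 0 and {2 ** k - 1} slot times")
```

After the first collision the station waits 0 or 1 slot times, after the second 0 to 3, after the third 0 to 7, exactly the ranges listed in the text.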

13.2.4 Efficiency of Standard Ethernet


The efficiency of the Ethernet is defined as the ratio of the time used by a station to
send data to the time the medium is occupied by this station. The practical efficiency of
standard Ethernet has been measured to be

Efficiency = 1 / (1 + 6.4 × a)

in which the parameter “a” is the number of frames that can fit on the medium. It can
be calculated as a = (propagation delay)/(transmission delay) because the transmission
delay is the time it takes a frame of average size to be sent out and the propagation delay
is the time it takes a bit to reach the end of the medium. Note that as the value of the
parameter a decreases, the efficiency increases. This means that if the length of the
medium is shorter or the frame size is longer, the efficiency increases. In the ideal case,
a = 0 and the efficiency is 1. We ask you to calculate this efficiency in problems at the
end of the chapter.

Example 13.3
In the Standard Ethernet with a transmission rate of 10 Mbps, we assume that the length of the
medium is 2500 m and the size of the frame is 512 bits. The propagation speed of a signal in a
cable is normally 2 × 10^8 m/s.

Propagation delay = 2500/(2 × 10^8) = 12.5 μs        Transmission delay = 512/(10^7) = 51.2 μs

a = 12.5/51.2 = 0.24        Efficiency = 39%

The example shows that a = 0.24, which means only 0.24 of a frame occupies the whole
medium in this case. The efficiency is 39 percent, which is considered moderate; it means that
61 percent of the time the medium is occupied but not used by a station.
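Example 13.3 can be checked with a short function built directly from the formula above (the function name and parameter defaults are chosen for this sketch):

```python
def ethernet_efficiency(length_m: float, frame_bits: int,
                        rate_bps: float = 10e6,
                        prop_speed_mps: float = 2e8) -> float:
    """Efficiency = 1 / (1 + 6.4 * a), with
    a = (propagation delay) / (transmission delay)."""
    propagation_delay = length_m / prop_speed_mps
    transmission_delay = frame_bits / rate_bps
    a = propagation_delay / transmission_delay
    return 1 / (1 + 6.4 * a)

print(round(ethernet_efficiency(2500, 512), 2))   # 0.39, as in Example 13.3
print(ethernet_efficiency(0, 512))                # 1.0, the ideal case (a = 0)
```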

13.2.5 Implementation
The Standard Ethernet defined several implementations, but only four of them
became popular during the 1980s. Table 13.1 shows a summary of Standard Ether-
net implementations.
Table 13.1 Summary of Standard Ethernet implementations

Implementation   Medium       Maximum length   Encoding
10Base5          Thick coax   500 m            Manchester
10Base2          Thin coax    185 m            Manchester
10Base-T         2 UTP        100 m            Manchester
10Base-F         2 Fiber      2000 m           Manchester

In the nomenclature 10BaseX, the number defines the data rate (10 Mbps), the
term Base means baseband (digital) signal, and X approximately defines either the
maximum length of the cable in multiples of 100 meters (for example, 5 for 500 m or
2 for 185 m) or the type of cable: T for unshielded twisted-pair cable (UTP) and F for
fiber-optic. The standard Ethernet uses a baseband signal, which means that the bits are
changed to a digital signal and directly sent on the line.
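As an illustration, the naming rule can be captured in a small, hypothetical helper (the field names are invented for this sketch; note that it yields 200 m for 10Base2, whose real limit is 185 m):

```python
def parse_10base(name: str) -> dict:
    """Decode a 10BaseX name per the rule described in the text."""
    suffix = name[len("10Base"):].lstrip("-")
    info = {"rate_mbps": 10, "signaling": "baseband"}
    if suffix.isdigit():
        # a digit gives the approximate segment length in units of 100 m
        info["medium"] = "coax"
        info["approx_length_m"] = int(suffix) * 100
    else:
        info["medium"] = {"T": "UTP", "F": "fiber"}[suffix]
    return info

print(parse_10base("10Base5"))   # coax, about 500 m
print(parse_10base("10Base-T"))  # UTP
```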

Encoding and Decoding


All standard implementations use digital signaling (baseband) at 10 Mbps. At the
sender, data are converted to a digital signal using the Manchester scheme; at the
receiver, the received signal is interpreted as Manchester and decoded into data. As we saw
in Chapter 4, Manchester encoding is self-synchronous, providing a transition at each
bit interval. Figure 13.6 shows the encoding scheme for Standard Ethernet.

Figure 13.6 Encoding in a Standard Ethernet implementation

In each station, a Manchester encoder converts the 10-Mbps data into the signal placed on the media, and a Manchester decoder recovers the 10-Mbps data from the received signal.
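A symbolic sketch of the encoder follows, assuming the IEEE 802.3 convention that a 0 is a high-to-low transition and a 1 is low-to-high; it only names the transitions and does not generate a waveform:

```python
def manchester_encode(bits: str) -> str:
    """Symbolic Manchester encoding: every bit interval has a mid-bit
    transition, which is what makes the scheme self-synchronizing.
    IEEE 802.3 convention: 0 -> high-to-low (HL), 1 -> low-to-high (LH)."""
    transition = {"0": "HL", "1": "LH"}
    return " ".join(transition[b] for b in bits)

print(manchester_encode("1011"))  # LH HL LH LH
```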

10Base5: Thick Ethernet


The first implementation is called 10Base5, thick Ethernet, or Thicknet. The nick-
name derives from the size of the cable, which is roughly the size of a garden hose
and too stiff to bend with your hands. 10Base5 was the first Ethernet specification to
use a bus topology with an external transceiver (transmitter/receiver) connected via a
tap to a thick coaxial cable. Figure 13.7 shows a schematic diagram of a 10Base5
implementation.

Figure 13.7 10Base5 implementation

10Base5: 10 Mbps, baseband (digital), thick coaxial cable (maximum 500 m) with terminated cable ends; each station attaches through an external transceiver and a transceiver cable of at most 50 m.

The transceiver is responsible for transmitting, receiving, and detecting collisions.


The transceiver is connected to the station via a transceiver cable that provides separate
paths for sending and receiving. This means that collision can only happen in the
coaxial cable.
The maximum length of the coaxial cable must not exceed 500 m; otherwise, there
is excessive degradation of the signal. If a length of more than 500 m is needed, up to
five segments, each a maximum of 500 meters, can be connected using repeaters.
Repeaters will be discussed in Chapter 17.

10Base2: Thin Ethernet


The second implementation is called 10Base2, thin Ethernet, or Cheapernet. 10Base2
also uses a bus topology, but the cable is much thinner and more flexible. The cable can
be bent to pass very close to the stations. In this case, the transceiver is normally part of
the network interface card (NIC), which is installed inside the station. Figure 13.8
shows the schematic diagram of a 10Base2 implementation.

Figure 13.8 10Base2 implementation

10Base2: 10 Mbps, baseband (digital), thin coaxial cable (maximum 185 m) with terminated cable ends; the transceiver is part of the NIC inside each station.

Note that the collision here occurs in the thin coaxial cable. This implementation is
more cost effective than 10Base5 because thin coaxial cable is less expensive than thick
coaxial and the tee connections are much cheaper than taps. Installation is simpler
because the thin coaxial cable is very flexible. However, the length of each segment
cannot exceed 185 m (close to 200 m) due to the high level of attenuation in thin coaxial
cable.
10Base-T: Twisted-Pair Ethernet
The third implementation is called 10Base-T or twisted-pair Ethernet. 10Base-T uses a
physical star topology. The stations are connected to a hub via two pairs of twisted
cable, as shown in Figure 13.9.

Figure 13.9 10Base-T implementation

10Base-T: 10 Mbps, baseband (digital), twisted pair; each station is connected to a 10Base-T hub by two pairs of UTP cable.

Note that two pairs of twisted cable create two paths (one for sending and one for
receiving) between the station and the hub. Any collision here happens in the hub.
Compared to 10Base5 or 10Base2, we can see that the hub actually replaces the coaxial
cable as far as a collision is concerned. The maximum length of the twisted cable here
is defined as 100 m, to minimize the effect of attenuation in the twisted cable.
10Base-F: Fiber Ethernet
Although there are several types of optical fiber 10-Mbps Ethernet, the most common
is called 10Base-F. 10Base-F uses a star topology to connect stations to a hub. The sta-
tions are connected to the hub using two fiber-optic cables, as shown in Figure 13.10.

Figure 13.10 10Base-F implementation

10Base-F: 10 Mbps, baseband (digital), fiber; each station is connected to a 10Base-F hub by two fiber-optic cables.

13.2.6 Changes in the Standard


Before we discuss higher-rate Ethernet protocols, we need to discuss the changes that
occurred to the 10-Mbps Standard Ethernet. These changes actually opened the road to
the evolution of the Ethernet to become compatible with other high-data-rate LANs.
Bridged Ethernet
The first step in the Ethernet evolution was the division of a LAN by bridges. Bridges
have two effects on an Ethernet LAN: They raise the bandwidth and they separate colli-
sion domains. We discuss bridges in Chapter 17.
Raising the Bandwidth
In an unbridged Ethernet network, the total capacity (10 Mbps) is shared among all sta-
tions with a frame to send; the stations share the bandwidth of the network. If only one
station has frames to send, it benefits from the total capacity (10 Mbps). But if more
than one station needs to use the network, the capacity is shared. For example, if two
stations have a lot of frames to send, they probably alternate in usage. When one station
is sending, the other one refrains from sending. We can say that, in this case, each sta-
tion on average sends at a rate of 5 Mbps. Figure 13.11 shows the situation.
The bridge, as we will learn in Chapter 17, can help here. A bridge divides the net-
work into two or more networks. Bandwidthwise, each network is independent. For
example, in Figure 13.12, a network with 12 stations is divided into two networks, each
with 6 stations. Now each network has a capacity of 10 Mbps. The 10-Mbps capacity in
each segment is now shared between 6 stations (actually 7, because the bridge acts as a station in each segment).
CHAPTER 15

Wireless LANs

We discussed wired LANs and wired WANs in the two previous chapters. We concentrate on wireless LANs in this chapter and wireless WANs in the next.
In this chapter, we cover two types of wireless LANs. The first is the wireless LAN
defined by the IEEE 802.11 project (sometimes called wireless Ethernet); the second is
a personal wireless LAN, Bluetooth, that is sometimes called personal area network or
PAN.
This chapter is divided into three sections:
❑ The first section introduces the general issues behind wireless LANs and compares
wired and wireless networks. The section describes the characteristics of the wire-
less networks and the way access is controlled in these types of networks.
❑ The second section discusses a wireless LAN defined by the IEEE 802.11 Project,
which is sometimes called wireless Ethernet. This section defines the architecture
of this type of LAN and describes the MAC sublayer, which uses the CSMA/CA
access method discussed in Chapter 12. The section then shows the addressing
mechanism used in this network and gives the format of different packets used at
the data-link layer. Finally, the section discusses different physical-layer protocols
that are used by this type of network.
❑ The third section discusses the Bluetooth technology as a personal area network
(PAN). The section describes the architecture of the network, the addressing mech-
anism, and the packet format. Different layers used in this protocol are also briefly
described and compared with the ones in the other wired and wireless LANs.


15.1 INTRODUCTION
Wireless communication is one of the fastest-growing technologies. The demand for
connecting devices without the use of cables is increasing everywhere. Wireless LANs
can be found on college campuses, in office buildings, and in many public areas. Before
we discuss a specific protocol related to wireless LANs, let us talk about them in
general.

15.1.1 Architectural Comparison


Let us first compare the architecture of wired and wireless LANs to give some idea of
what we need to look for when we study wireless LANs.
Medium
The first difference we can see between a wired and a wireless LAN is the medium. In a
wired LAN, we use wires to connect hosts. In Chapter 7, we saw that we moved from
multiple access to point-to-point access through the generations of the Ethernet. In a
switched LAN, with a link-layer switch, the communication between the hosts is point-
to-point and full-duplex (bidirectional). In a wireless LAN, the medium is air and the signal is
generally broadcast. When hosts in a wireless LAN communicate with each other, they
are sharing the same medium (multiple access). In a very rare situation, we may be able to
create a point-to-point communication between two wireless hosts by using a very limited
bandwidth and two-directional antennas. Our discussion in this chapter, however, is about
the multiple-access medium, which means we need to use MAC protocols.
Hosts
In a wired LAN, a host is always connected to its network at a point with a fixed link-
layer address related to its network interface card (NIC). Of course, a host can move
from one point in the Internet to another point. In this case, its link-layer address remains
the same, but its network-layer address will change, as we see later in Chapter 19, Sec-
tion 19.3 (Mobile IP section). However, before the host can use the services of the Inter-
net, it needs to be physically connected to the Internet. In a wireless LAN, a host is not
physically connected to the network; it can move freely (as we’ll see) and can use the
services provided by the network. Therefore, mobility in a wired network and wireless
network are totally different issues, which we try to clarify in this chapter.
Isolated LANs
The concept of a wired isolated LAN also differs from that of a wireless isolated LAN.
A wired isolated LAN is a set of hosts connected via a link-layer switch (in the recent
generation of Ethernet). A wireless isolated LAN, called an ad hoc network in wireless
LAN terminology, is a set of hosts that communicate freely with each other. The con-
cept of a link-layer switch does not exist in wireless LANs. Figure 15.1 shows two iso-
lated LANs, one wired and one wireless.
Connection to Other Networks
A wired LAN can be connected to another network or an internetwork such as the Inter-
net using a router. A wireless LAN may be connected to a wired infrastructure network,
to a wireless infrastructure network, or to another wireless LAN. The first situation is the
one that we discuss in this section: connection of a wireless LAN to a wired infrastructure
network. Figure 15.2 shows the two environments.

Figure 15.1 Isolated LANs: wired versus wireless

In the wired isolated LAN, the hosts are connected to a link-layer switch; in the wireless ad hoc network, the hosts communicate directly with each other.

Figure 15.2 Connection of a wired LAN and a wireless LAN to other networks

The wired LAN reaches the wired internet through a switch; the infrastructure (wireless) network reaches it through an access point.

In this case, the wireless LAN is referred to as an infrastructure network, and the
connection to the wired infrastructure, such as the Internet, is done via a device called
an access point (AP). Note that the role of the access point is completely different from
the role of a link-layer switch in the wired environment. An access point is gluing two
different environments together: one wired and one wireless. Communication between
the AP and the wireless host occurs in a wireless environment; communication between
the AP and the infrastructure occurs in a wired environment.
Moving between Environments
The discussion above confirms what we learned in Chapters 2 and 9: a wired LAN or a
wireless LAN operates only in the lower two layers of the TCP/IP protocol suite. This
means that if we have a wired LAN in a building that is connected via a router or a
modem to the Internet, all we need in order to move from the wired environment to a
wireless environment is to change the network interface cards designed for wired envi-
ronments to the ones designed for wireless environments and replace the link-layer
switch with an access point. In this change, the link-layer addresses will change
(because of changing NICs), but the network-layer addresses (IP addresses) will remain
the same; we are moving from wired links to wireless links.

15.1.2 Characteristics
There are several characteristics of wireless LANs that either do not apply to wired
LANs or the existence of which is negligible and can be ignored. We discuss some of
these characteristics here to pave the way for discussing wireless LAN protocols.
Attenuation
The strength of electromagnetic signals decreases rapidly because the signal disperses
in all directions; only a small portion of it reaches the receiver. The situation becomes
worse with mobile senders that operate on batteries and normally have small power
supplies.
Interference
Another issue is that a receiver may receive signals not only from the intended sender,
but also from other senders if they are using the same frequency band.
Multipath Propagation
A receiver may receive more than one signal from the same sender because electromag-
netic waves can be reflected back from obstacles such as walls, the ground, or objects.
The result is that the receiver receives some signals at different phases (because they
travel different paths). This makes the signal less recognizable.
Error
With the above characteristics of a wireless network, we can expect that errors and
error detection are more serious issues in a wireless network than in a wired network. If
we think about the error level as the measurement of signal-to-noise ratio (SNR), we
can better understand why error detection and error correction and retransmission are
more important in a wireless network. We discussed SNR in more detail in Chapter 3,
but it is enough to say that it measures the ratio of good stuff to bad stuff (signal to
noise). If SNR is high, it means that the signal is stronger than the noise (unwanted sig-
nal), so we may be able to convert the signal to actual data. On the other hand, when
SNR is low, it means that the signal is corrupted by the noise and the data cannot be
recovered.
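Since SNR is simply a power ratio, usually quoted in decibels, a two-line sketch makes the "good stuff to bad stuff" idea concrete:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(signal / noise)."""
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(100.0, 1.0))           # 20.0 -> strong signal, data recoverable
print(round(snr_db(2.0, 1.0), 1))   # 3.0 -> signal barely above the noise
```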

15.1.3 Access Control


Maybe the most important issue we need to discuss in a wireless LAN is access
control—how a wireless host can get access to the shared medium (air). We discussed
in Chapter 12 that the Standard Ethernet uses the CSMA/CD algorithm. In this method,
each host contends to access the medium and sends its frame if it finds the medium
idle. If a collision occurs, it is detected and the frame is sent again. Collision detection
in CSMA/CD serves two purposes. If a collision is detected, it means that the frame has
not been received and needs to be resent. If a collision is not detected, it is a kind of
acknowledgment that the frame was received. The CSMA/CD algorithm does not work
in wireless LANs for three reasons:
1. To detect a collision, a host needs to send and receive at the same time (sending the
frame and receiving the collision signal), which means the host needs to work in a
full-duplex mode. Wireless hosts do not have enough power to do so (the power is
supplied by batteries); they can send or receive, but not both at the same time.
2. Because of the hidden station problem, in which a station may not be aware of
another station's transmission due to an obstacle or a range problem, collisions
may occur but not be detected. Figure 15.3 shows an example of the hidden station
problem. Station B has a transmission range shown by the left oval (sphere in
space); every station in this range can hear any signal transmitted by station B.
Station C has a transmission range shown by the right oval (sphere in space); every
station located in this range can hear any signal transmitted by C. Station C is
outside the transmission range of B; likewise, station B is outside the transmission
range of C. Station A, however, is in the area covered by both B and C; it can hear
any signal transmitted by B or C. The figure also shows that the hidden station
problem may also occur due to an obstacle.

Figure 15.3 Hidden station problem
a. Stations B and C are not in each other's range.
b. Stations B and C are hidden from each other.
Assume that station B is sending data to station A. In the middle of this trans-
mission, station C also has data to send to station A. However, station C is out of
B’s range and transmissions from B cannot reach C. Therefore C thinks the
medium is free. Station C sends its data to A, which results in a collision at A
because this station is receiving data from both B and C. In this case, we say that
stations B and C are hidden from each other with respect to A. Hidden stations can
reduce the capacity of the network because of the possibility of collision.
3. The distance between stations can be great. Signal fading could prevent a station at
one end from hearing a collision at the other end.
To overcome the above three problems, Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA) was invented for wireless LANs, which we discussed in
Chapter 12.
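The hidden station scenario above can be sketched with a toy one-dimensional model. The positions, the 100 m range, and the helper names (`in_range`, `medium_appears_idle`, `collision_at`) are illustrative assumptions, not part of any 802.11 mechanism:

```python
# Toy 1-D model of the hidden station problem (illustrative assumptions).
RANGE = 100                        # radio range in meters (assumed)
pos = {"B": 0, "A": 90, "C": 180}  # B and C are out of each other's range

transmitting = {"B"}               # B is already sending a frame to A

def in_range(x, y):
    """True if stations at positions x and y can hear each other."""
    return abs(x - y) <= RANGE

def medium_appears_idle(station):
    """Carrier sense: a station hears only transmitters within its range."""
    return not any(in_range(pos[station], pos[t])
                   for t in transmitting if t != station)

def collision_at(station):
    """A collision occurs where two or more transmissions are audible."""
    audible = [t for t in transmitting
               if t != station and in_range(pos[station], pos[t])]
    return len(audible) > 1

print(medium_appears_idle("C"))  # True: C cannot hear B, so C transmits too
transmitting.add("C")
print(collision_at("A"))         # True: A hears both B and C
```

Carrier sensing tells C the medium is free, because B's signal never reaches C; yet both transmissions overlap at A, which is exactly why detection-based schemes fail and collision *avoidance* is needed instead.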

15.2 IEEE 802.11 PROJECT


IEEE has defined the specifications for a wireless LAN, called IEEE 802.11, which
covers the physical and data-link layers. It is sometimes called wireless Ethernet. In
