Computer Networks 2

The document discusses Framing and Flow Control Mechanisms in the Data Link Layer of the OSI model, emphasizing their role in ensuring reliable data transfer between nodes. It covers the purpose and components of framing, types of framing, and various flow control protocols, including Stop-and-Wait and Sliding Window protocols. Additionally, it addresses error detection and correction techniques, highlighting methods like Checksum and CRC, and types of transmission errors such as single-bit, multiple-bit, and burst errors.


COMPUTER NETWORKS

UNIT-3

Let's delve into Framing and Flow Control Mechanisms, which are crucial
functions of the Data Link Layer (Layer 2) in the OSI model. These
mechanisms ensure reliable and efficient data transfer between two directly
connected nodes.
Framing

Framing is the process of encapsulating network layer packets (or
datagrams) into discrete units called frames for transmission over the
physical medium. It's like putting a letter into an envelope before mailing it.
The data link layer adds a header and a trailer to the network layer packet,
forming a frame.
Purpose of Framing:
● Synchronization: Helps the receiver identify the start and end of a
frame within a continuous stream of bits. Without clear boundaries, the
receiver wouldn't know where one unit of data ends and the next
begins.
● Error Detection/Correction: The trailer often contains error-detection
codes (like CRC - Cyclic Redundancy Check) to allow the receiver to
check if the frame was corrupted during transmission.
● Addressing: The header contains source and destination MAC
addresses, enabling the frame to be delivered to the correct device on
the local network segment.
● Flow Control: The frame structure can include information or flags
that support flow control mechanisms.
● Type/Length Identification: The header may contain a field indicating
the type of network layer protocol (e.g., IPv4, IPv6) or the length of the
data payload.
Components of a Frame:
● Frame Header: Contains control information like:
○ Start Frame Delimiter (SFD)/Flag: A unique bit pattern to
mark the beginning of the frame.
○ Source and Destination MAC Addresses: Physical addresses of
the sender and receiver on the local network.
○ Type/Length Field: Indicates the type of data carried in the
payload (e.g., IPv4, IPv6) or the length of the data.
○ Control Field: Used for flow control and error control (e.g.,
sequence numbers, acknowledgment numbers).
● Payload (Data Field): The actual data being transmitted, typically a
packet from the network layer.
● Frame Trailer: Contains control information like:
○ Frame Check Sequence (FCS)/CRC: Error detection code
calculated by the sender and verified by the receiver.
○ End Frame Delimiter/Flag: A unique bit pattern to mark the
end of the frame.
Types of Framing:
1. Fixed-Size Framing:
○ Frames have a predetermined, constant size.
○ Example: ATM (Asynchronous Transfer Mode) cells are always
53 bytes long (48 bytes data, 5 bytes header).
○ Advantage: Simpler to implement as no explicit delimiters are
needed.
○ Disadvantage: Inefficient if data units are much smaller than the
fixed frame size.
2. Variable-Size Framing:
○ Frames can have different sizes, adapting to the amount of data
being transmitted.
○ Requires mechanisms to define frame boundaries:
■ Length Field: A field in the header explicitly states the
length of the frame.
■ Example: Ethernet (IEEE 802.3) uses a length/type
field.
■ End Delimiter (Flag Bytes/Bit Patterns): Special
characters or bit patterns mark the beginning and end of
frames.
■ Byte Stuffing (Character-Oriented): If the flag
byte appears in the data, an "escape" byte is inserted
before it. The receiver removes the escape byte.
■ Example: PPP (Point-to-Point Protocol) can
use character-oriented framing.
■ Bit Stuffing (Bit-Oriented): If the flag bit pattern
(e.g., 01111110 for HDLC) appears in the data, an
extra 0 bit is inserted after five consecutive 1s. The
receiver removes this stuffed 0.
■ Example: HDLC (High-Level Data Link
Control) and PPP use bit stuffing.
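The bit-stuffing rule described above (insert a 0 after every run of five consecutive 1s) can be sketched directly; the bit-string representation here is purely for illustration:

```python
def bit_stuff(bits: str) -> str:
    """Sender side: insert a 0 after every run of five consecutive 1s."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:
                out.append("0")   # stuffed bit
                ones = 0
        else:
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: drop the 0 that follows five consecutive 1s."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:
                i += 1            # skip the stuffed 0
                ones = 0
        else:
            ones = 0
        i += 1
    return "".join(out)
```

Because a stuffed 0 breaks every run of five 1s, the flag pattern 01111110 can never occur inside the payload, which is exactly what guarantees data transparency.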
Flow Control Mechanisms

Flow Control is a data link layer function that coordinates the amount of data
a sender can transmit before receiving an acknowledgment from the receiver.
Its main purpose is to prevent a fast sender from overwhelming a slow
receiver, leading to buffer overflow and frame loss at the receiver's end.
Essentially, flow control ensures that the receiver has enough buffer space
and processing capability to handle the incoming data.
Here are the main flow control mechanisms, often combined with error
control (forming ARQ protocols):
1. Stop-and-Wait Protocol (Simplest ARQ)

● Concept: The sender transmits one frame and then stops and waits for an
acknowledgment (ACK) from the receiver before sending the next
frame.
● Working:
○ Sender sends Frame i.
○ Sender starts a timer.
○ Receiver receives Frame i. If it's valid, the receiver sends ACK
i+1 (acknowledging i and indicating it expects i+1 next).
○ Sender receives ACK i+1. If it's valid and the timer hasn't
expired, the sender stops the timer and sends Frame i+1.
○ Error Handling (Timeouts & Sequence Numbers):
■ If the sender's timer expires before receiving an ACK, it
assumes the frame or the ACK was lost/corrupted and
retransmits Frame i.
■ To handle duplicate frames (e.g., if the ACK was lost and
the sender retransmitted), frames are given alternating
sequence numbers (e.g., 0 and 1). The receiver expects
frames in sequence. If it receives a duplicate (same
sequence number as the last successfully received frame), it
discards it and re-sends the ACK.
● Advantages: Simple to implement.
● Disadvantages:
○ Very Inefficient: Throughput is extremely low, especially over
long distances (high propagation delay) or high bandwidth links,
because the sender spends most of its time waiting.
○ Low Link Utilization: The channel is idle while the sender waits
for an ACK.
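As an illustration only, the steps above can be collapsed into a toy simulation: the lossy channel and the sender's timeout are modeled as a single random drop, and `p_loss` and the alternating 0/1 sequence numbers are illustrative assumptions, not part of any real protocol stack:

```python
import random

def stop_and_wait(frames, p_loss=0.3, seed=1):
    """Toy Stop-and-Wait sender/receiver. A lost frame or lost ACK is a
    random drop; a 'timeout' simply triggers a retransmission."""
    rng = random.Random(seed)
    delivered = []
    expected = 0                      # sequence number the receiver expects
    transmissions = 0
    for i, payload in enumerate(frames):
        seq = i % 2                   # alternating 0/1 sequence numbers
        while True:                   # retransmit until an ACK comes back
            transmissions += 1
            if rng.random() < p_loss:     # frame lost -> timeout, resend
                continue
            if seq == expected:           # new frame: deliver it
                delivered.append(payload)
                expected ^= 1
            # duplicates are discarded, but the ACK is (re)sent either way
            if rng.random() < p_loss:     # ACK lost -> timeout, resend
                continue
            break                         # ACK received, move on
    return delivered, transmissions
```

With any `p_loss > 0` the transmission count exceeds the frame count, and even with no loss the sender is idle for a full round trip per frame, which is the utilization problem noted above.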
2. Sliding Window Protocols

To improve efficiency, Sliding Window Protocols allow a sender to transmit
multiple frames (a "window" of frames) before needing an acknowledgment.
Both the sender and receiver maintain a "window" of acceptable sequence
numbers.
a. Go-Back-N ARQ (Automatic Repeat Request)

● Concept: The sender can send up to N frames without waiting for
acknowledgments. The receiver, however, only accepts frames in
order. If a frame is lost or corrupted, the sender retransmits that frame
and all subsequent frames that were sent after it.
● Working:
○ Sender Window: A window of size N (e.g., N=7) representing
frames that can be sent. Frames are numbered sequentially.
○ Receiver Window: A window of size 1 (it only expects one
specific frame at a time).
○ Sender sends frames 0, 1, 2, ..., N-1 without waiting
for ACKs.
○ Cumulative ACKs: The receiver typically sends cumulative
acknowledgments. An ACK numbered k means that all frames up
to and including k-1 have been successfully received. The ACK
number indicates the next expected frame.
○ Error Handling (Lost/Corrupted Frames):
■ If the receiver receives a frame out of order (e.g., it expects
3 but receives 4), it discards the out-of-order frame (4) and
continues to send an ACK for the last correctly received
in-order frame (e.g., ACK 3, indicating it's still waiting for
3).
■ If the sender's timer for a frame (e.g., Frame 3) expires, it
assumes Frame 3 (or its ACK) was lost. The sender then
retransmits Frame 3 AND all subsequent frames (4, 5,
etc.) that were already sent after 3 and are still
unacknowledged. This is the "Go-Back-N" part.
● Advantages:
○ More efficient than Stop-and-Wait due to pipelining (sending
multiple frames).
○ Simpler receiver logic than Selective Repeat (doesn't need to
buffer out-of-order frames).
● Disadvantages:
○ Inefficient for Noisy Channels: If the link is prone to errors, a
single lost frame can cause a large number of frames to be
retransmitted, even if many of them were received correctly. This
wastes bandwidth.
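A minimal sketch of the go-back behaviour follows. Two simplifying assumptions are made for brevity: ACKs are reliable, and losses are given up front as a set of transmission numbers:

```python
def go_back_n(frames, drop, N=4):
    """Toy Go-Back-N. `drop` holds the (0-based) numbers of transmissions
    the channel loses; ACKs are assumed reliable."""
    delivered, expected = [], 0   # receiver state
    base, tx = 0, 0               # sender state: window base, wire counter
    while base < len(frames):
        for i in range(base, min(base + N, len(frames))):  # send window
            lost = tx in drop
            tx += 1
            if not lost and i == expected:
                delivered.append(frames[i])
                expected += 1     # out-of-order frames are discarded
        base = expected           # cumulative ACK; resend the rest ("go back")
    return delivered, tx
```

With 5 frames, N=3 and only transmission 1 (frame 1) lost, 7 transmissions are needed: frame 2 is sent twice even though its first copy arrived intact, which is exactly the bandwidth waste described above.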
b. Selective Reject ARQ (Selective Repeat ARQ)

● Concept: This is the most efficient of the ARQ protocols in terms of
bandwidth utilization. Both the sender and receiver maintain a window
of size N. The receiver can accept and buffer frames that arrive out of
order. Only the specific lost or corrupted frames are retransmitted.
● Working:
○ Sender Window: A window of size N.
○ Receiver Window: A window of size N (the window must be no
more than half the sequence-number space to avoid ambiguity).
○ Sender sends frames 0, 1, 2, ..., N-1.
○ Individual ACKs/NAKs:
■ If the receiver receives a frame correctly, it sends an
individual ACK for that specific frame.
■ If the receiver detects a missing or corrupted frame (e.g.,
expects 3 but receives 4), it sends a Negative
Acknowledgment (NAK) for the missing/corrupted frame
(NAK 3). It buffers the out-of-order frames (4, 5, ...).
○ Error Handling:
■ When the sender receives a NAK for a specific frame (e.g.,
NAK 3), or if its timer for a specific frame (e.g., 3) expires,
it only retransmits that specific frame (3).
■ Upon receiving the retransmitted frame, the receiver can
then combine it with its buffered out-of-order frames and
deliver them to the network layer in the correct sequence.
● Advantages:
○ Highly Efficient: Minimizes retransmissions, leading to much
better throughput on noisy channels.
○ Optimal bandwidth utilization.
● Disadvantages:
○ More Complex Implementation: Both sender and receiver logic
are more complex due to the need for selective retransmission,
individual ACKs/NAKs, and buffering out-of-order frames at the
receiver.
○ Requires larger buffers at the receiver.
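Under the same toy channel model (losses given as transmission numbers, NAKs assumed reliable — both illustrative assumptions), only the dropped frame is resent:

```python
def selective_repeat(frames, drop):
    """Toy Selective Repeat: the receiver buffers out-of-order frames;
    only the specific lost frames are retransmitted."""
    buffered = {}                       # receiver's out-of-order buffer
    pending = list(range(len(frames)))  # unACKed frame indices
    tx = 0
    while pending:
        still_pending = []
        for i in pending:               # (re)send every unACKed frame
            lost = tx in drop
            tx += 1
            if lost:
                still_pending.append(i)   # NAK/timeout -> resend next round
            else:
                buffered[i] = frames[i]   # individually ACKed and buffered
        pending = still_pending
    # release buffered frames to the network layer in order
    return [buffered[i] for i in range(len(frames))], tx
```

Losing one transmission costs exactly one extra transmission here, however large the window, at the price of the receiver-side buffer.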
These framing and flow control mechanisms are fundamental to ensuring
reliable and orderly data delivery over physical links in computer networks.
Data transmission is inherently prone to errors due to noise, interference, and
other imperfections in the communication channel. Error detection and
error correction techniques are vital components of reliable data
communication, primarily implemented at the Data Link and Transport
layers.
Error Detection Codes
These codes add redundant information to the data being transmitted. The
receiver uses this redundant information to detect if any errors have occurred
during transmission. If an error is detected, the receiver typically requests a
retransmission (Backward Error Correction/ARQ).
1. Checksum

● Concept: A checksum is a simple error detection method that involves
summing up the data being transmitted. The sum is then appended to
the data.
● How it Works (Example: Internet Checksum used by IPv4 and
UDP):
○ Sender Side:
■ The data unit is divided into equal-sized segments (e.g., 16-
bit words).
■ All segments are added together using one's complement
arithmetic.
■ The sum is then complemented (all 0s become 1s, and all
1s become 0s) to produce the checksum.
■ The checksum is appended to the original data and sent.
○ Receiver Side:
■ The received data unit (including the checksum) is divided
into the same sized segments.
■ All segments (including the received checksum) are added
together using one's complement arithmetic.
■ The sum is then complemented.
■ If the final result is all zeros, the data is considered valid. If
it's anything other than all zeros, an error is detected.
● Advantages: Simple to implement and computationally inexpensive.
● Disadvantages:
○ Low Precision: It's not very robust. For example, if two bits flip
in a way that their changes cancel each other out (e.g., a '0'
changes to a '1' in one position, and a '1' changes to a '0' in
another position, maintaining the sum), the checksum might not
detect the error.
○ Limited ability to detect burst errors (multiple consecutive bit
errors).
● Usage: Used in less critical protocols like IPv4 header checksum and
UDP checksum.
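The sender/receiver steps above can be written out directly. This sketch follows the RFC 1071 style of folding the carry back in after each 16-bit addition; it is illustrative, not a production implementation:

```python
def ones_complement_sum(data: bytes) -> int:
    """One's-complement sum of 16-bit big-endian words, carries folded."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def internet_checksum(data: bytes) -> int:
    """Sender side: complement of the one's-complement sum."""
    return ~ones_complement_sum(data) & 0xFFFF

def checksum_ok(data: bytes, checksum: int) -> bool:
    """Receiver side: data plus checksum must sum to all 1s (0xFFFF)."""
    total = ones_complement_sum(data) + checksum
    total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF
```

Note how two compensating bit flips in different words can leave the sum unchanged, which is the "low precision" weakness noted above.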
2. CRC (Cyclic Redundancy Check)

● Concept: CRC is a more powerful and widely used error detection
technique based on polynomial division. It treats the data bits as
coefficients of a polynomial and divides this polynomial by a
predetermined generator polynomial. The remainder of this division is
the CRC value.
● How it Works:
○ Sender Side:
■ The data (D) is represented as a polynomial.
■ A generator polynomial (G) (a standard, agreed-upon
polynomial, e.g., CRC-32) is chosen.
■ degree_of_G zero bits are appended to the data, so its
length becomes (data_length + degree_of_G).
■ The augmented data polynomial is divided by the generator
polynomial using modulo-2 arithmetic (XOR for
addition/subtraction).
■ The remainder of this division is the CRC code (also called
FCS - Frame Check Sequence).
■ The CRC code is appended to the original data, and the
combined data + CRC is transmitted. The resulting
codeword is exactly divisible by the generator polynomial.
○ Receiver Side:
■ The received data (including the CRC) is divided by the
same generator polynomial.
■ If the remainder of this division is zero, the data is
considered error-free.
■ If the remainder is non-zero, an error is detected.
● Advantages:
○ High Sensitivity: Highly effective at detecting a wide range of
common transmission errors, including single-bit errors,
multiple-bit errors, and especially burst errors (errors affecting
consecutive bits).
○ Less likely to miss errors compared to checksum.
○ Mathematically rigorous.
● Disadvantages: More complex to implement than checksums,
requiring more computational resources (though often implemented in
hardware for speed).
● Usage: Widely used in critical data communication protocols (e.g.,
Ethernet, Wi-Fi, USB) and storage devices (e.g., hard drives). Different
standards use different generator polynomials (e.g., CRC-8, CRC-16,
CRC-32).
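The modulo-2 long division can be sketched with bit strings. Real implementations use table-driven or hardware shift-register forms; this version is purely illustrative and uses the small textbook generator 10011 in the tests:

```python
def crc_remainder(data_bits: str, generator: str) -> str:
    """Sender side: divide data + appended zeros by G (XOR division);
    the remainder (len(G) - 1 bits) is the FCS."""
    deg = len(generator) - 1
    bits = list(data_bits + "0" * deg)        # augmented data
    for i in range(len(data_bits)):
        if bits[i] == "1":                    # XOR G in wherever a 1 leads
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-deg:])

def crc_ok(codeword: str, generator: str) -> bool:
    """Receiver side: received data + FCS must divide with remainder 0."""
    deg = len(generator) - 1
    bits = list(codeword)
    for i in range(len(codeword) - deg):
        if bits[i] == "1":
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "1" not in bits                    # zero remainder => error-free
```

Appending the remainder makes the transmitted codeword exactly divisible by G, so any received codeword with a non-zero remainder must have been corrupted.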
Types of Errors

Errors in data transmission occur when bits are altered during transit. These
alterations can be caused by various factors like noise, interference, signal
attenuation, or faulty hardware.
1. Single-Bit Error:
○ Description: Only one bit in the data unit (frame/packet) is
flipped from 0 to 1 or 1 to 0.
○ Example: Sent 00100010, Received 00101010 (the 5th bit
from the left changed).
○ Cause: Often caused by random noise spikes or brief signal
interruptions.
○ Detection: Relatively easy to detect using simple parity checks,
checksums, or CRCs.
2. Multiple-Bit Error:
○ Description: Two or more non-consecutive bits in a data unit are
flipped.

○ Example: Sent 10101010, Received 11101000 (2nd and 7th
bits flipped).
○ Cause: Similar to single-bit errors but affecting multiple points
in the data.
○ Detection: More complex than single-bit errors. Simple parity
checks might fail. Checksums and CRCs are generally more
effective.
3. Burst Error:
○ Description: Two or more consecutive bits in a data unit are
flipped. The length of the burst is the number of bits from the
first corrupted bit to the last corrupted bit, including any
uncorrupted bits in between.
○ Example: Sent 1010101010, Received 1011100010 (a burst
of length 4, from the 4th to the 7th bit: the 4th and 7th bits are
flipped, while the uncorrupted 5th and 6th bits in between still
count toward the burst length).
○ Cause: Often caused by phenomena that affect the channel for a
short duration, such as impulsive noise, fading in wireless
channels, or faulty hardware.
○ Detection: Burst errors are particularly challenging. CRC is
specifically designed to detect burst errors up to a certain length,
which depends on the degree of the generator polynomial.
Error Correction

Error correction goes beyond merely detecting errors; it aims to identify the
exact location of the corrupted bit(s) and then flip them back to their original
state, thus recovering the original data without retransmission. This requires
more redundancy than just error detection.
There are two main approaches to error correction:
1. Backward Error Correction (BEC) / ARQ (Automatic Repeat
Request):
○ Concept: This is the most common approach in computer
networks. The receiver detects an error (using CRC, checksum,
parity check, etc.) and then sends a negative acknowledgment
(NAK) or simply does not send an acknowledgment (ACK) for
the corrupted frame. This signals the sender to retransmit the
entire data unit.
○ Mechanism: This is what we discussed with Stop-and-Wait, Go-
Back-N, and Selective Reject ARQ protocols. The error detection
mechanism is paired with a retransmission strategy.
○ Advantages:
■ Simpler to implement compared to FEC.
■ Guarantees perfect data recovery (assuming retransmission
is possible).
○ Disadvantages:
■ Introduces latency due to retransmissions.
■ Wastes bandwidth if retransmissions are frequent (noisy
channels).
■ Not suitable for real-time applications (e.g., live
video/audio streaming) where delays are unacceptable.
○ Usage: Common in reliable protocols like TCP, file transfers,
and most data link layer protocols where retransmission is
feasible.
2. Forward Error Correction (FEC):
○ Concept: The sender adds enough redundant information (error-
correcting codes) to the original data such that the receiver can
detect and correct a certain number of errors without requiring
retransmission. The receiver has the intelligence and redundant
bits to fix the errors itself.
○ Mechanism:
■ Encoding: At the sender, data bits are passed through an
encoder that adds redundant bits based on a specific
algorithm (e.g., Hamming codes, Reed-Solomon codes,
convolutional codes). The original data is transformed into
a longer codeword.
■ Decoding: At the receiver, the entire received codeword is
passed through a decoder. If errors occur within the
correction capabilities of the code, the decoder can identify
the corrupted bits and reverse them to reconstruct the
original data.
○ Advantages:
■ No retransmission needed, which means no additional
latency or bandwidth consumption due to errors.
■ Ideal for real-time applications (streaming, voice over IP)
and one-way communication links (e.g., satellite
communication, deep space probes) where retransmission
is impossible or impractical.
○ Disadvantages:
■ Adds significant overhead (more redundant bits must be
sent), reducing the effective data rate.
■ More complex to design and implement.
■ Can only correct errors up to a certain limit; if too many
errors occur, it may fail, requiring a higher-layer
retransmission or data loss.
○ Usage: Used in mobile communications (cellular networks),
satellite communications, deep-space communication, digital
broadcasting (DVB), and storage systems (e.g., RAID, memory
error correction in RAM - ECC RAM).
Example of an FEC code: Hamming Code
Hamming codes are a family of
linear error-correcting codes capable of detecting up to two simultaneous bit
errors and correcting single-bit errors. They work by strategically placing
parity bits within the data block, allowing the receiver to not only detect an
error but also identify its exact position.
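A sketch of the smallest member, Hamming(7,4), follows. The layout (even-parity bits at positions 1, 2, and 4 of the codeword) is one common convention, chosen here for illustration:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] as the 7-bit codeword
    [p1,p2,d1,p3,d2,d3,d4] using even parity."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the 3-bit syndrome is the 1-based
    position of a single flipped bit (0 means no error detected)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1        # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]   # extract d1..d4
```

Each codeword position is covered by the parity bits whose indices sum to it, so the syndrome spells out the error position in binary — this is how the receiver locates, not just detects, the flipped bit.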
In summary, error detection focuses on knowing if an error occurred, while
error correction aims to fix the error. ARQ (BEC) is a common error control
strategy that relies on error detection and retransmission, whereas FEC
embeds enough redundancy to allow direct correction at the receiver. The
choice between these methods depends on the characteristics of the
communication channel and the requirements of the application (e.g., latency
tolerance, bandwidth availability).
Let's explore Carrier Sense Multiple Access (CSMA) protocols, focusing
on CSMA/CD, and then two important Data Link Layer protocols: HDLC
and PPP. These are crucial for understanding how devices share a common
medium and how data is reliably transported over point-to-point links.
CSMA (Carrier Sense Multiple Access)

CSMA is a Media Access Control (MAC) protocol that defines how devices
share a single transmission medium (like an Ethernet cable in older networks
or a radio frequency in wireless networks). The core principle of CSMA is
"listen before you speak."
Core Idea: Before transmitting data, a station (device) first listens to the
transmission medium to check if it's currently busy (i.e., if another station is
transmitting).
Mechanism:
1. Carrier Sense: A station listens for a "carrier" signal on the shared
medium. If a carrier is detected, it means the medium is busy.
2. Multiple Access: If the medium is idle, the station can then attempt to
transmit its data. Multiple stations can access the medium.
Problem with Pure CSMA: Collisions Even with carrier sensing, collisions
can still occur. This happens if two or more stations sense the medium as idle
simultaneously (due to propagation delay, where a signal hasn't yet reached
all parts of the network) and begin transmitting at roughly the same time.
When two or more signals overlap on the shared medium, the data becomes
corrupted – a collision.
Different CSMA variants exist based on how they react to a busy medium:
● 1-persistent CSMA: If the medium is idle, transmit with probability 1.
If busy, keep listening and transmit immediately when idle. (High
collision probability)
● Non-persistent CSMA: If the medium is idle, transmit. If busy, wait a
random amount of time and then sense again. (Lower utilization, but
fewer collisions)
● p-persistent CSMA: (For slotted channels) If idle, transmit with
probability p. With probability 1-p, defer to the next slot. If busy,
defer to the next slot.
CSMA/CD (Carrier Sense Multiple Access with Collision Detection)

CSMA/CD is a refinement of CSMA specifically used in wired Ethernet
networks (like 10BASE-T, 100BASE-TX, typically with hubs or in
half-duplex switch connections). It adds a crucial mechanism to detect and handle
collisions.
How it Works (The "Collision Detection" Part):
1. Carrier Sense: A station listens to the medium. If idle, it proceeds. If
busy, it waits until the medium becomes idle.
2. Transmit and Monitor: If the medium is idle, the station begins
transmitting its frame while simultaneously monitoring the medium for
collisions.
3. Collision Detection:
○ If a collision is detected (e.g., by sensing an unexpected voltage
level or signal pattern on the line while transmitting), the
transmitting station immediately stops its transmission.
○ It then transmits a brief jam signal (or jam sequence) to ensure
that all other stations on the segment also detect the collision and
cease transmission.
4. Backoff Algorithm:
○ After transmitting the jam signal, each station involved in the
collision calculates a random backoff time. This random delay
is crucial to prevent the stations from colliding again
immediately.
○ The random backoff is typically determined using a truncated
binary exponential backoff algorithm, where the range of
possible backoff times increases with each subsequent collision
attempt for a particular frame.
5. Retransmission: After the backoff time expires, the station returns to
step 1 (carrier sense) and attempts to retransmit the frame.
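The truncated binary exponential backoff in step 4 can be sketched as follows. The cap of 10 doublings matches classic Ethernet (which also gives up after 16 attempts), but treat the constants as illustrative:

```python
import random

def backoff_slots(collisions: int, rng: random.Random = random) -> int:
    """After the n-th successive collision for a frame, wait a uniformly
    random number of slot times drawn from [0, 2**min(n, 10) - 1]."""
    k = min(collisions, 10)     # "truncated": the range stops doubling here
    return rng.randrange(2 ** k)
```

So after one collision a station waits 0 or 1 slot times, after three collisions 0 to 7, and from the tenth collision on 0 to 1023 — widening the range cuts the chance that the same stations collide again.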
Why CSMA/CD for Wired Ethernet (and not wireless):
● Simultaneous Transmission/Reception: In wired networks, it's
generally feasible for a device to transmit and listen for collisions on
the same cable at the same time.
● Collision Domain: CSMA/CD is effective in shared-medium
environments (like those with hubs) where multiple devices share a
single collision domain.
● Obsolete in Modern Switched Ethernet: In modern switched
Ethernet networks, where each device connects to its own port on a
switch and operates in full-duplex mode, CSMA/CD is no longer
necessary. Full-duplex means a device can send and receive
simultaneously on its dedicated link to the switch, effectively
eliminating collisions. However, it's still supported for backward
compatibility and half-duplex connections.
HDLC (High-Level Data Link Control)

HDLC is a bit-oriented, synchronous data link layer protocol developed by
the ISO. It's a foundational protocol that has influenced many other data link
protocols, including PPP. It's designed for reliable data transfer over point-to-
point and multi-point links.
Key Features and Functions:
1. Bit-Oriented Protocol: HDLC treats data as a stream of bits, not
characters. This allows for data transparency, meaning any bit pattern
can be transmitted.
2. Framing (Bit Stuffing): HDLC uses a unique flag byte 01111110
(0x7E in hex) to mark the beginning and end of each frame. To ensure
that this flag sequence never appears in the actual data, HDLC employs
bit stuffing. Whenever five consecutive 1s occur in the data, a 0 bit is
automatically inserted by the sender. The receiver removes this stuffed
0 bit.
3. Error Control (ARQ): HDLC incorporates robust error detection
using CRC (Cyclic Redundancy Check) in its Frame Check Sequence
(FCS) field, typically 16 or 32 bits. It uses sliding window ARQ
mechanisms (like Go-Back-N or Selective Repeat) for reliable delivery
and retransmission of lost or corrupted frames.
4. Flow Control: Utilizes a sliding window mechanism to prevent a fast
sender from overwhelming a slow receiver.
5. Supports Full-Duplex and Half-Duplex: Can operate over both full-
duplex (simultaneous two-way communication) and half-duplex (one-
way at a time) links.
6. Configurable Modes of Operation:
○ Normal Response Mode (NRM): Unbalanced configuration
(one primary station, one or more secondary stations). Secondary
stations can only transmit after receiving permission (a poll) from
the primary.
○ Asynchronous Response Mode (ARM): Unbalanced
configuration. Secondary stations can transmit without explicit
permission from the primary, but the primary is still responsible
for error recovery and link management.
○ Asynchronous Balanced Mode (ABM): Balanced configuration
(two combined stations). Both stations can initiate transmissions
and error recovery independently. This is the most common mode
for point-to-point links.
7. Three Frame Types:
○ Information frames (I-frames): Carry user data and can also
"piggyback" acknowledgment and flow control information.
○ Supervisory frames (S-frames): Used for flow control and error
control (e.g., acknowledgments, ready/not ready signals,
rejections) when there's no data to send.
○ Unnumbered frames (U-frames): Used for link management,
setup, and disconnection. They do not carry sequence numbers.
Usage: HDLC itself is primarily used for synchronous serial connections,
particularly in wide area networks (WANs) for connecting routers. Cisco's
proprietary default encapsulation for serial interfaces is often a slight
modification of HDLC (cHDLC) that adds a protocol field for multi-protocol
support.
PPP (Point-to-Point Protocol)

PPP is another data link layer protocol primarily used for establishing a
direct connection between two nodes (e.g., a home computer connecting to an
ISP via dial-up, DSL, or cable modem, or two routers connecting over a
WAN link). It's a byte-oriented protocol, meaning it operates on bytes rather
than individual bits.
Key Features and Components:
PPP is more than just a framing protocol; it's a suite of protocols that
provides three main components:
1. HDLC-like Framing:
○ PPP uses a framing method very similar to HDLC, with a flag
byte 01111110 (0x7E) to delimit frames.
○ It uses byte stuffing (or character stuffing) to ensure data
transparency. If the flag byte (0x7E) appears in the data, it's
replaced by a two-byte sequence (0x7D 0x5E). If the escape
byte (0x7D) appears, it's replaced by (0x7D 0x5D).
○ Includes a Protocol field (unique to PPP) that indicates the
network layer protocol being encapsulated (e.g., 0x0021 for IP,
0x8021 for IPCP, 0xC021 for LCP). This is a significant
improvement over standard HDLC, which typically assumed a
single higher-layer protocol.
○ Includes an FCS (Frame Check Sequence) for error detection
(CRC).
2. Link Control Protocol (LCP):
○ Responsible for establishing, configuring, testing, maintaining,
and terminating the data link connection.
○ LCP negotiates various link options between the two
communicating devices, such as:
■ Maximum Receive Unit (MRU): Maximum payload size
allowed in a frame.
■ Authentication Protocol: Negotiates which authentication
method (PAP, CHAP, EAP) to use.
■ Compression: Negotiates data compression algorithms.
■ Error Detection: Negotiates the use of magic numbers for
loop detection.
■ Link quality monitoring.
3. Network Control Protocols (NCPs):
○ A family of protocols, one for each network layer protocol that
PPP supports (e.g., IP, IPX, AppleTalk).
○ NCPs are responsible for configuring the network layer
parameters for the specific protocol once the LCP link is
established.
○ IP Control Protocol (IPCP): The most common NCP. It's used
to configure TCP/IP parameters for the connection, such as
assigning IP addresses (often dynamically), DNS server
addresses, etc.
○ IPv6 Control Protocol (IPv6CP): For configuring IPv6
parameters.
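The byte-stuffing rule in component 1 (0x7E becomes 0x7D 0x5E, 0x7D becomes 0x7D 0x5D) amounts to XOR-ing the escaped byte with 0x20. A minimal sketch (real PPP can additionally escape control characters negotiated via LCP, which is omitted here):

```python
FLAG, ESC = 0x7E, 0x7D

def ppp_stuff(payload: bytes) -> bytes:
    """Escape any flag or escape byte in the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # 0x7E -> 7D 5E, 0x7D -> 7D 5D
        else:
            out.append(b)
    return bytes(out)

def ppp_unstuff(stuffed: bytes) -> bytes:
    """Reverse the stuffing: 0x7D means 'XOR the next byte with 0x20'."""
    out = bytearray()
    it = iter(stuffed)
    for b in it:
        out.append(next(it) ^ 0x20 if b == ESC else b)
    return bytes(out)
```

After stuffing, the flag byte 0x7E can only ever appear as a frame delimiter, never inside the payload.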
Advantages of PPP over HDLC (for dial-up/WAN links):
● Multi-protocol Support: PPP's Protocol field allows it to encapsulate
and multiplex different network layer protocols simultaneously over the
same link (e.g., IP and IPX). Standard HDLC typically supports only
one.
● Authentication: PPP supports robust authentication mechanisms
(PAP, CHAP, EAP) to verify the identity of the connecting peer, which
is crucial for ISP connections. HDLC does not have built-in
authentication.
● Compression: PPP can negotiate and use data compression algorithms,
improving throughput over slower links.
● Link Quality Monitoring: LCP provides mechanisms to monitor the
quality of the link.
● Flexibility: PPP is a very flexible and extensible protocol, allowing
new features to be added through LCP and NCP negotiations.
Usage:
● Dial-up Internet Access: Historically, the most common use for
connecting modems to ISPs.
● DSL and Cable Modems: PPP over Ethernet (PPPoE) is widely used
by DSL and some cable modem providers to establish connections and
authenticate users.
● VPNs: Often used as the underlying data link layer protocol for some
VPN solutions (e.g., PPTP - Point-to-Point Tunneling Protocol).
● WAN Links: Used to encapsulate IP packets over synchronous and
asynchronous serial WAN links.
In summary, CSMA and CSMA/CD are MAC protocols for shared media.
HDLC and PPP are data link layer protocols primarily for point-to-point
serial links, with PPP offering more advanced features like multi-protocol
support, authentication, and negotiation capabilities, making it more suitable
for modern WAN and Internet access scenarios.
