Chapter 3

This document provides an outline and overview of key topics in data link layer design, including:
1. Data link layer design issues such as framing data, providing services to the network layer, and error control. Common services include unacknowledged connectionless, acknowledged connectionless, and connection-oriented.
2. Methods for framing data, including character counting, flag bytes with stuffing, starting/ending flags with bit stuffing, and physical layer coding violations.
3. Error control techniques such as timers, sequence numbers, error-detecting codes, and error-correcting codes to ensure reliable delivery of frames.
Copyright
© Attribution Non-Commercial (BY-NC)

Chapter 3 - Outline

Data Link Layer Design Issues (Covered in this presentation)

The Channel Allocation Problem (Covered in this presentation)


Local Area Networks and Multiple Access (Assignment 2)

Network Driver Specifications (provided as a separate Word file and discussed separately)
Introduction to Queuing Theory (Covered in this presentation)

1. Data Link Layer Design Issues

Data Link Layer Design Issues


The data link layer has a number of specific functions it can carry out, including:
Providing a well-defined service interface to the network layer.
Determining how the bits of the physical layer are grouped into frames.
Dealing with transmission errors.
Regulating the flow of frames so that slow receivers are not swamped by fast senders.

Please Note
Although this chapter is explicitly about the data link layer and the data link protocols, many of the principles we will study here, such as error control and flow control, are found in transport and other protocols as well. In fact, in many networks, these functions are found only in the upper layers and not in the data link layer. However, no matter where they are found, the principles are pretty much the same, so it does not really matter where we study them. In the data link layer they often show up in their simplest and purest forms (only concerned with issues at the frame level; not at the data level), making this a good place to examine them in detail.

Services Provided to the Network Layer


The principal service is transferring data from the network layer on the source machine to the network layer on the destination machine. The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the network layer there.

The data link layer can be designed to offer various services, and the actual services offered vary from system to system. Three reasonable possibilities that are commonly provided are:
Unacknowledged connectionless service.
Acknowledged connectionless service.
Acknowledged connection-oriented service.

Services Provided to the Network Layer (Contd)


Unacknowledged connectionless service consists of having the source machine send independent frames to the destination machine without having the destination machine acknowledge them. No logical connection is established beforehand or released afterward. If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover from it in the data link layer. This class of service is appropriate when the error rate is very low, so that recovery is left to higher layers. It is also appropriate for real-time traffic, such as voice, in which late data are worse than bad data. Most LANs use unacknowledged connectionless service in the data link layer.

When acknowledged connectionless service is offered, there are still no logical connections used, but each frame sent is individually acknowledged. In this way, the sender knows whether a frame has arrived correctly. If it has not arrived within a specified time interval, it can be sent again. This service is useful over unreliable channels, such as wireless systems.

Services Provided to the Network Layer (Contd)


It is perhaps worth emphasizing that providing acknowledgements in the data link layer is just an optimization, never a requirement. The higher layer can always send a packet and wait for it to be acknowledged. If the acknowledgement is not forthcoming before the timer expires, the sender can just send the entire message again. The most sophisticated service the data link layer can provide to the network layer is connection-oriented service. With this service, the source and destination machines establish a connection before any data are transferred. Each frame sent over the connection is numbered, and the data link layer guarantees that each frame sent is indeed received.

Framing
The whole idea of framing is to achieve synchronized communication by agreeing on a common definition of a frame. To provide service to the network layer, the data link layer must use the service provided to it by the physical layer. What the physical layer does is accept a raw bit stream and attempt to deliver it to the destination. This bit stream is not guaranteed to be error free. The number of bits received may be less than, equal to, or more than the number of bits transmitted, and they may have different values. It is up to the data link layer to detect and, if necessary, correct errors.

Breaking the bit stream up into frames is more difficult than it at first appears. One way to achieve this framing is to insert time gaps between frames, much like the spaces between words in ordinary text. However, networks rarely make any guarantees about timing, so it is possible these gaps might be squeezed out or other gaps might be inserted during transmission.

Framing (Contd)
Since it is too risky to count on timing to mark the start and end of each frame, other methods have been devised. Four methods can be discussed:
Character count.
Flag bytes with byte stuffing.
Starting and ending flags, with bit stuffing.
Physical layer coding violations.

Framing (Contd)
Character count: This method uses a field in the header to specify the number of characters in the frame. When the data link layer at the destination sees the character count, it knows how many characters follow, and hence where the end of the frame is. The disadvantage is that if the count is garbled by a transmission error, the destination will lose synchronization and will be unable to locate the start of the next frame. So, this method is rarely used.

Character stuffing: In the second method, each frame starts with the ASCII character sequence DLE STX and ends with the sequence DLE ETX (where DLE is Data Link Escape, STX is Start of TeXt, and ETX is End of TeXt). This method overcomes the drawbacks of the character count method: if the destination ever loses synchronization, it only has to look for DLE STX and DLE ETX characters. If, however, binary data is being transmitted, then there exists a possibility of the sequences DLE STX and DLE ETX occurring in the data. Since this can interfere with the framing, a technique called character stuffing is used: the sender's data link layer inserts an ASCII DLE character just before each DLE character in the data, and the receiver's data link layer removes this DLE before the data is given to the network layer. However, character stuffing is closely tied to 8-bit characters, and this is a major hurdle in transmitting characters of arbitrary size.
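The character-stuffing rule above can be sketched as follows, assuming for illustration the one-byte ASCII codes DLE = 0x10, STX = 0x02, ETX = 0x03: the sender doubles every DLE inside the payload, and the receiver collapses the doubled DLEs back.

```python
# Illustrative control bytes (ASCII DLE, STX, ETX)
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def stuff(payload: bytes) -> bytes:
    """Frame a payload as DLE STX <data, with each DLE doubled> DLE ETX."""
    body = payload.replace(DLE, DLE + DLE)  # insert a DLE before each data DLE
    return DLE + STX + body + DLE + ETX

def unstuff(frame: bytes) -> bytes:
    """Strip the framing and collapse doubled DLEs back to single ones."""
    assert frame.startswith(DLE + STX) and frame.endswith(DLE + ETX)
    return frame[2:-2].replace(DLE + DLE, DLE)
```

Round-tripping any payload, including one that happens to contain DLE itself, returns the original data, which is exactly why the stuffed DLE does not interfere with framing.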

Framing (Contd)
Bit stuffing: The third method allows data frames to contain an arbitrary number of bits and allows character codes with an arbitrary number of bits per character. At the start and end of each frame is a flag byte consisting of the special bit pattern 01111110. Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This technique is called bit stuffing. When the receiver sees five consecutive 1s in the incoming data stream, followed by a 0 bit, it automatically destuffs the 0 bit. The boundary between two frames can be determined by locating the flag pattern.

Physical layer coding violations: The final framing method is applicable to networks in which the encoding on the physical medium contains some redundancy. In such encodings, normally a 1 bit is a high-low pair and a 0 bit is a low-high pair. The combinations low-low and high-high, which are not used for data, may be used for marking frame boundaries.

As a final note on framing, many data link protocols use a combination of a character count with one of the other methods for extra safety. When a frame arrives, the count field is used to locate the end of the frame. Only if the appropriate delimiter is present at that position and the checksum is correct is the frame accepted as valid. Otherwise, the input stream is scanned for the next delimiter.
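The bit-stuffing rule can be sketched on strings of "0"/"1" characters (a simplification for readability; a real implementation would operate on raw bits):

```python
FLAG = "01111110"  # frame delimiter; stuffing guarantees it never appears in the body

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False     # this is the stuffed 0; discard it
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip, run = True, 0
    return "".join(out)
```

Because no stuffed body can ever contain six consecutive 1s, the receiver can scan for the flag pattern to find frame boundaries unambiguously.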

Error Control
Error control is concerned with making sure all frames are eventually delivered to the network layer at the destination, and in the proper order. The usual way to ensure reliable delivery is to provide the sender with some feedback about what is happening at the other end of the line. An additional complication comes from the possibility that hardware trouble may cause a frame to vanish completely (e.g., in a noise burst). In this case, the receiver will not react at all, since it has no reason to react. This possibility is dealt with by introducing timers into the data link layer. When the sender transmits a frame, it generally also starts a timer. The timer is set to expire after an interval long enough for the frame to reach the destination, be processed there, and have the acknowledgement propagate back to the sender. The whole issue of managing the timers and sequence numbers so as to ensure that each frame is ultimately passed to the network layer at the destination exactly once, no more and no less, is an important part of the data link layer's duties.

Error Control (Contd)


Network designers have developed two basic strategies for dealing with errors. One way is to include enough redundant information along with each block of data sent, to enable the receiver to deduce what the transmitted data must have been. The other way is to include only enough redundancy to allow the receiver to deduce that an error occurred, but not which error, and have it request a retransmission. The former strategy uses error-correcting codes and the latter uses error-detecting codes. The use of error-correcting codes is often referred to as forward error correction. Each of these techniques occupies a different ecological niche. On channels that are highly reliable, such as fiber, it is cheaper to use an error detecting code and just retransmit the occasional block found to be faulty. However, on channels such as wireless links that make many errors, it is better to add enough redundancy to each block for the receiver to be able to figure out what the original block was, rather than relying on a retransmission, which itself may be in error.

Flow Control
Another important design issue that arises in the data link layer (and higher layers as well) is what to do with a sender that systematically wants to transmit frames faster than the receiver can accept them. This situation can easily occur when the sender is running on a fast (or lightly loaded) computer and the receiver is running on a slow (or heavily loaded) machine. Two approaches are commonly used. In the first one, feedback-based flow control, the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing. In the second one, rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver. Rate-based schemes are never used in the data link layer.

Automatic Repeat reQuest (ARQ)


ARQ can be considered a method of acknowledgement management. Automatic Repeat reQuest (ARQ), also known as Automatic Repeat Query, is an error-control method for data transmission that uses acknowledgements (messages sent by the receiver indicating that it has correctly received a data frame or packet) and timeouts (specified periods of time allowed to elapse before an acknowledgement is expected to have been received) to achieve reliable data transmission over an unreliable service. These protocols reside in the data link or transport layers of the OSI model; in most networks, this functionality is provided by the transport layer. The types of ARQ protocols include:
Stop-and-wait ARQ
Go-Back-N ARQ
Selective Repeat ARQ

Automatic Repeat reQuest (ARQ) (Contd)


Stop-and-wait ARQ is a method used in telecommunications to send information between two connected devices. It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest kind of automatic repeat-request (ARQ) method. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with both transmit and receive window sizes equal to 1. After sending each frame, the sender doesn't send any further frames until it receives an acknowledgement (ACK) signal. After receiving a good frame, the receiver sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again.

Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in which the sending process continues to send a number of frames specified by a window size even without receiving an acknowledgement (ACK) packet from the receiver. It is a special case of the general sliding window protocol with a transmit window size of N and a receive window size of 1.

Automatic Repeat reQuest (ARQ) (Contd)


Selective Repeat ARQ may be used as a protocol for the delivery and acknowledgement of message units, or it may be used as a protocol for the delivery of subdivided message sub-units. When used as the protocol for the delivery of messages, the sending process continues to send a number of frames specified by a window size even after a frame loss. Unlike Go-Back-N ARQ, the receiving process will continue to accept and acknowledge frames sent after an initial error; this is the general case of the sliding window protocol with both transmit and receive window sizes greater than 1. E.g., TCP (transport layer).
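The receiver-side difference between the two protocols can be sketched as follows, with hypothetical (sequence number, data) arrival lists standing in for frames on the wire:

```python
def go_back_n_receiver(arrivals, expected=0):
    """Go-Back-N: accept only the in-order frame; discard everything else
    (the sender will eventually retransmit the discarded frames)."""
    accepted = []
    for seq, data in arrivals:
        if seq == expected:
            accepted.append(data)
            expected += 1
        # out-of-order frames are silently discarded
    return accepted

def selective_repeat_receiver(arrivals, expected=0, wsize=4):
    """Selective Repeat: buffer out-of-order frames within the window,
    delivering each in-order run to the network layer as gaps are filled."""
    buffer, accepted = {}, []
    for seq, data in arrivals:
        if expected <= seq < expected + wsize:
            buffer[seq] = data
            while expected in buffer:
                accepted.append(buffer.pop(expected))
                expected += 1
    return accepted
```

With arrivals [(0, 'a'), (2, 'c'), (1, 'b')], Go-Back-N discards frame 2 because frame 1 has not yet arrived, while Selective Repeat buffers it and delivers all three in order.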

2. The Channel Allocation Problem

The Channel Allocation Problem


The central theme of this discussion is how to allocate a single contested channel among competing users, i.e., media access rules/techniques. There are two broad classes of channel allocation:
Static Channel Allocation (= multiplexing)
Dynamic Channel Allocation (= media access rules)

** Your assignment on Ethernet should cover one dynamic channel allocation method: CSMA/CD.

Static Channel Allocation

None of the traditional static channel allocation methods works well with bursty network traffic, which is a common feature of most LANs.

Dynamic Channel Allocation


Common to all dynamic channel allocation schemes are five key assumptions:
1. Station Model. The model consists of N independent stations (e.g., computers, telephones, or personal communicators), each with a program or user that generates frames for transmission.
2. Single Channel Assumption. A single channel is available for all communication. All stations can transmit on it and all can receive from it.
3. Collision Assumption. If two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision. All stations can detect collisions.
4a. Continuous Time. Frame transmission can begin at any instant.
4b. Slotted Time. Time is divided into discrete intervals (slots). Frame transmissions always begin at the start of a slot.
5a. Carrier Sense. Stations can tell if the channel is in use before trying to use it. If the channel is sensed as busy, no station will attempt to use it until it goes idle.
5b. No Carrier Sense. Stations cannot sense the channel before trying to use it. They just go ahead and transmit.
Examples of dynamic channel allocation schemes include ALOHA, CSMA/CD (Ethernet), CSMA/CA, and token passing. In your assignment on Ethernet, you will discuss the contention-based channel allocation scheme CSMA/CD; wireless channel allocation schemes will be discussed in Chapter 7.

Probabilistic versus Deterministic


The two main classes of access methods are:
Probabilistic (CSMA/CD and CSMA/CA)
Deterministic (Token Passing and Polling)

With a probabilistic media access method, a node checks the line when it wants to transmit. If the line is busy, or if the node's transmission collides with another transmission, the transmission is cancelled. The node then waits a random amount of time before trying again. The most widely used access method of this type is CSMA/CD. With a deterministic media access method, nodes get access to the network in a predetermined sequence. Either a server or the arrangement of the nodes themselves determines the sequence. The two most widely used deterministic access methods are token passing (used in Token Ring) and polling (used in mainframe environments).
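The "random amount of time" in CSMA/CD is typically chosen by binary exponential backoff: after the n-th collision, classic Ethernet waits a random number of slot times drawn from 0 to 2^min(n,10) − 1. A sketch:

```python
import random

def backoff_slots(collisions, rng=random):
    """Binary exponential backoff: after the n-th collision, wait a random
    number of slot times drawn uniformly from 0 .. 2**min(n, 10) - 1
    (the exponent cap of 10 follows classic Ethernet)."""
    k = min(collisions, 10)
    return rng.randrange(2 ** k)
```

Doubling the contention window on each collision spreads competing stations out over time, so repeated collisions between the same pair of stations become increasingly unlikely.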

3. Introduction to Queuing Theory

Introduction to Queuing Theory


Each one of us has spent a great deal of time waiting in lines; one example is the cafeteria. Other examples of queues:
Printer queue
Packets arriving at a buffer (switching/routing device)
Calls waiting to be answered by technical support
Applications waiting for the service of a microprocessor
etc.

What makes up a queue?


The System: a collection of objects under study; it is important to define the system boundaries.
The Entities: the people, packets, or objects that enter the system requiring some kind of service.
The Servers: the people, resources, or servers that perform the service required.
The Queue: an accumulation of entities that have entered the system but have not yet been served.

Queue Discipline
First Come, First Served (FCFS): most customer queues
Last Come, First Served (LCFS): packages, elevator
Served in Random Order (SIRO): entering buses
Priority Service: multiprocessing on a computer, emergency room
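The first three disciplines map naturally onto a queue, a stack, and a shuffle; a sketch (function names and the packet labels are illustrative):

```python
import random
from collections import deque

def serve_fcfs(arrivals):
    """First Come, First Served: the oldest entity is served first."""
    q, order = deque(arrivals), []
    while q:
        order.append(q.popleft())
    return order

def serve_lcfs(arrivals):
    """Last Come, First Served: the newest entity is served first (a stack)."""
    stack, order = list(arrivals), []
    while stack:
        order.append(stack.pop())
    return order

def serve_siro(arrivals, seed=0):
    """Served In Random Order: service order is a random permutation."""
    pool = list(arrivals)
    random.Random(seed).shuffle(pool)
    return pool
```

The choice of discipline does not change how many entities are served, only who waits longest, which is why it shows up later as one of the factors affecting system performance.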

Queuing System Structure: Single Server

Single Queue, Multiple Servers

Multiple Single-server Queues

What factors affect system performance?


The Arrivals Process: the time between any two successive arrivals. Does this depend on the number of packets in the system? (finite populations)
The Service Process: the time taken to perform the service. Does this depend on the number of packets in the system?
The number of servers operating in the system
The Service Discipline
System Capacity: processes waiting + processes being served

Measuring System Performance


The total time an entity spends in the system (denoted by W)
The time an entity spends in the queue (denoted by Wq)
The number of entities in the system (denoted by L)
The number of entities in the queue (denoted by Lq)
The percentage of time the servers are busy (utilization)
These quantities vary over time.

What is Queuing Theory?


The primary methodological framework for analyzing network delay
Often requires simplifying assumptions, since realistic assumptions make meaningful analysis extremely difficult
Provides a basis for adequate delay approximation


Packet Delay
Packet delay is the sum of the delays on each subnet link traversed by the packet. Link delay consists of:
Processing delay
Queuing delay
Transmission delay
Propagation delay

Link Delay Components (1)


Processing delay: the delay between the time the packet is correctly received at the head node of the link and the time the packet is assigned to an outgoing link queue for transmission.

Link Delay Components (2)


Queuing delay: the delay between the time the packet is assigned to a queue for transmission and the time it starts being transmitted.

Link Delay Components (3)


Transmission delay: the delay between the times that the first and last bits of the packet are transmitted.

Link Delay Components (4)


Propagation delay: the delay between the time the last bit is transmitted at the head node of the link and the time the last bit is received at the tail node.

Queuing System (1)


Customers (= packets) arrive at random times to obtain service. Service time (= transmission delay) is L/C, where:
L: packet length in bits
C: link transmission capacity in bits/sec

Queuing System (2)


Assume that we already know:
Customer arrival rate
Customer service rate
We want to know:
Average number of customers in the system
Average delay per customer

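Under the common M/M/1 model (Poisson arrivals, exponential service times, a single server — assumptions this presentation does not derive), both questions have standard closed-form answers; the relation L = λW in the last test is Little's law.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form M/M/1 results: average number of customers in the
    system (L) and average time in the system (W).
    Requires arrival_rate < service_rate, else the queue grows without bound."""
    assert arrival_rate < service_rate, "queue is unstable otherwise"
    rho = arrival_rate / service_rate        # server utilization
    L = rho / (1 - rho)                      # average customers in system
    W = 1 / (service_rate - arrival_rate)    # average delay per customer
    return L, W
```

With packets arriving at 8 per second and a link that can serve 10 per second, the utilization is 0.8, so on average 4 packets are in the system and each spends 0.5 s there.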

Queuing Theory (Contd)
Answering these questions gave rise to different theorems and notations. One universal need, though, is the appropriate modeling of the arrival and service processes. We avoid the detailed mathematical discussion here (it is included in Probability and Statistics and likely covered in your Operations Research courses). Interested students can pursue further readings on probability, statistics, and system modeling, but this is not in the scope of the course.
Case Study: Assess the amount of round-trip delay you may experience in pinging a particular server at different times of the day.

In Summary
Queuing models provide qualitative insights on the performance of computer networks, and quantitative predictions of average packet delay. To obtain tractable queuing models for computer networks, it is frequently necessary to make simplifying assumptions. A more accurate alternative is simulation, which, however, can be slow and expensive.
