Chapter 3
Network Driver Specifications (provided as a separate word file and discussed separately)
Introduction to Queuing Theory (Covered in this presentation)
Please Note
Although this chapter is explicitly about the data link layer and the data link protocols, many of the principles we will study here, such as error control and flow control, are found in transport and other protocols as well. In fact, in many networks, these functions are found only in the upper layers and not in the data link layer. However, no matter where they are found, the principles are pretty much the same, so it does not really matter where we study them. In the data link layer they often show up in their simplest and purest forms (only concerned with issues at the frame level; not at the data level), making this a good place to examine them in detail.
The data link layer can be designed to offer various services. The actual services offered can vary from system to system. Three reasonable possibilities that are commonly provided are:
Unacknowledged connectionless service
Acknowledged connectionless service
Acknowledged connection-oriented service
Framing
The whole idea of framing is the need for synchronized communication: sender and receiver must agree on a common definition of a frame. To provide service to the network layer, the data link layer must use the service provided to it by the physical layer. What the physical layer does is accept a raw bit stream and attempt to deliver it to the destination. This bit stream is not guaranteed to be error free. The number of bits received may be less than, equal to, or more than the number of bits transmitted, and they may have different values. It is up to the data link layer to detect and, if necessary, correct errors. Breaking the bit stream up into frames is more difficult than it at first appears. One way to achieve this framing is to insert time gaps between frames, much like the spaces between words in ordinary text. However, networks rarely make any guarantees about timing, so it is possible these gaps might be squeezed out or other gaps might be inserted during transmission.
Framing (Contd)
Since it is too risky to count on timing to mark the start and end of each frame, other methods have been devised. Four methods can be discussed:
Character count
Flag bytes with byte stuffing
Starting and ending flags, with bit stuffing
Physical layer coding violations
Framing (Contd)
Character count
This method uses a field in the header to specify the number of characters in the frame. When the data link layer at the destination sees the character count, it knows how many characters follow, and hence where the end of the frame is. The disadvantage is that if the count is garbled by a transmission error, the destination will lose synchronization and will be unable to locate the start of the next frame. For this reason, the method is rarely used on its own.
Character stuffing
In the second method, each frame starts with the ASCII character sequence DLE STX and ends with the sequence DLE ETX (where DLE is Data Link Escape, STX is Start of TeXt and ETX is End of TeXt). This method overcomes the drawback of the character count method: if the destination ever loses synchronization, it only has to look for the DLE STX and DLE ETX sequences. If, however, binary data is being transmitted, there is a possibility of the sequences DLE STX or DLE ETX occurring in the data itself. Since this can interfere with the framing, a technique called character stuffing is used. The sender's data link layer inserts an extra ASCII DLE character just before each DLE character in the data; the receiver's data link layer removes this DLE before the data is given to the network layer. Character stuffing is, however, closely tied to 8-bit characters, which is a major hurdle when transmitting characters of arbitrary size.
Framing (Contd)
Bit stuffing
The third method allows data frames to contain an arbitrary number of bits and allows character codes with an arbitrary number of bits per character. At the start and end of each frame is a flag byte consisting of the special bit pattern 01111110. Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This technique is called bit stuffing. When the receiver sees five consecutive 1s in the incoming data stream, followed by a 0 bit, it automatically destuffs the 0 bit. The boundary between two frames can then be determined by locating the flag pattern.
Physical layer coding violations
The final framing method, physical layer coding violations, is applicable to networks in which the encoding on the physical medium contains some redundancy. In such encodings, normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The combinations low-low and high-high, which are not used for data, may be used for marking frame boundaries. As a final note on framing, many data link protocols use a combination of a character count with one of the other methods for extra safety. When a frame arrives, the count field is used to locate the end of the frame. Only if the appropriate delimiter is present at that position and the checksum is correct is the frame accepted as valid. Otherwise, the input stream is scanned for the next delimiter.
Error Control
Error control is concerned with making sure that all frames are eventually delivered to the network layer at the destination, and in the proper order. The usual way to ensure reliable delivery is to provide the sender with some feedback about what is happening at the other end of the line. Additional complication comes from the possibility that hardware troubles may cause a frame to vanish completely (e.g., in a noise burst). In this case, the receiver will not react at all, since it has no reason to react. This possibility is dealt with by introducing timers into the data link layer. When the sender transmits a frame, it generally also starts a timer. The timer is set to expire after an interval long enough for the frame to reach the destination, be processed there, and have the acknowledgement propagate back to the sender. The whole issue of managing the timers and sequence numbers so as to ensure that each frame is ultimately passed to the network layer at the destination exactly once, no more and no less, is an important part of the data link layer's duties.
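The timer-and-retransmission idea can be illustrated with a small simulation (all names here are ours, not from a real protocol). A "lost" transmission stands in for an expired timer, and a 1-bit alternating sequence number lets the receiver discard duplicates.

```python
def stop_and_wait(frames, loss_pattern):
    """Simulate stop-and-wait: resend a frame whenever its acknowledgement is lost.

    loss_pattern is a sequence of booleans; True means this transmission's
    acknowledgement never comes back, so the sender's timer fires and it resends.
    Returns (delivered (seq, data) pairs, total number of transmissions).
    """
    delivered, transmissions, seq = [], 0, 0
    losses = iter(loss_pattern)
    for data in frames:
        while True:
            transmissions += 1
            lost = next(losses, False)
            if not lost:
                delivered.append((seq, data))  # receiver got it; ack returned
                break
            # otherwise: timer expires and the loop resends the same frame
        seq ^= 1  # alternate the 1-bit sequence number for the next frame
    return delivered, transmissions
```

With two consecutive losses on the first frame, the sender transmits four times in total but the receiver still gets each frame exactly once and in order.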
Flow Control
Another important design issue that may occur in the data link layer (and higher layers as well) is what to do with a sender that systematically wants to transmit frames faster than the receiver can accept them. This situation can easily occur when the sender is running on a fast (or lightly loaded) computer and the receiver is running on a slow (or heavily loaded) machine. Two approaches are commonly used. In the first one, feedback-based flow control, the receiver sends back information to the sender giving it permission to send more data or at least telling the sender how the receiver is doing. In the second one, rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver. Rate-based schemes are never used in the data link layer.
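A feedback-based scheme can be illustrated with a simple credit mechanism (a sketch under our own naming; real protocols carry the permission inside acknowledgement frames). The sender stops when its credit runs out and resumes only when the receiver grants more.

```python
def send_with_credit(frames, credit):
    """Feedback-based flow control sketch: the sender may have at most `credit`
    outstanding frames; each time it blocks, the receiver's feedback grants a
    fresh batch of permits. Returns the bursts sent between feedback events."""
    bursts, burst = [], []
    avail = credit
    for f in frames:
        if avail == 0:
            bursts.append(burst)  # sender blocked: wait for receiver feedback
            burst = []
            avail = credit        # feedback arrives; new permits granted
        burst.append(f)
        avail -= 1
    if burst:
        bursts.append(burst)
    return bursts
```

The effect is that a fast sender is throttled to the pace at which the slow receiver hands out permission, which is exactly the goal of feedback-based flow control.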
** Your assignment on Ethernet should cover one dynamic channel allocation method: CSMA/CD.
None of the traditional static channel allocation methods, however, works well with bursty network traffic, which is a common feature of most LANs.
With a probabilistic media access method, a node checks the line when it wants to transmit. If the line is busy, or if the node's transmission collides with another transmission, the transmission is cancelled. The node then waits a random amount of time before trying again. The most widely used access method of this type is CSMA/CD. With a deterministic media access method, nodes get access to the network in a predetermined sequence. Either a server or the arrangement of the nodes themselves determines the sequence. The two most widely used deterministic access methods are token passing (used in Token Ring) and polling (used in mainframe environments).
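The "waits a random amount of time" step in CSMA/CD is classically implemented as truncated binary exponential backoff. A sketch (the function name is ours; the 51.2 µs slot time is the classic 10 Mbps Ethernet value, and real stations give up after 16 attempts):

```python
import random

def backoff_delay(collision_count, slot_time=51.2e-6, max_exp=10):
    """Truncated binary exponential backoff: after the n-th collision on the
    same frame, wait a random whole number of slots in [0, 2**min(n, 10) - 1]."""
    k = min(collision_count, max_exp)       # the exponent is capped at 10
    slots = random.randrange(2 ** k)        # uniform over the allowed slot counts
    return slots * slot_time
```

Doubling the range after each collision spreads competing stations out quickly, so repeated collisions between the same pair of stations become increasingly unlikely.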
Queue Discipline
First Come First Served (FCFS) - most customer queues
Last Come First Served (LCFS) - packages, elevator
Served in Random Order (SIRO) - entering buses
Priority Service - multiprocessing on a computer, emergency room
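The disciplines can be contrasted with a toy scheduler (illustrative only, and entirely our own naming; it ignores arrival times and simply orders a batch of customers already waiting in the queue):

```python
import random
from collections import deque

def serve(arrivals, discipline):
    """Return the order in which a batch of waiting customers is served."""
    if discipline == "FCFS":
        q = deque(arrivals)                    # take from the front of the line
        return [q.popleft() for _ in arrivals]
    if discipline == "LCFS":
        stack = list(arrivals)                 # take the most recent arrival
        return [stack.pop() for _ in arrivals]
    if discipline == "SIRO":
        pool = list(arrivals)
        random.shuffle(pool)                   # pick waiting customers at random
        return pool
    raise ValueError(f"unknown discipline: {discipline}")
```

All three disciplines serve the same set of customers; what changes is only the order, and therefore the distribution of individual waiting times.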
Packet Delay
Packet delay is the sum of the delays on each subnet link traversed by the packet. Link delay consists of:
Processing delay
Queuing delay
Transmission delay
Propagation delay
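Summing the four components gives the delay on one link; the total packet delay is this sum taken over every link on the path. A sketch (parameter names and the 2×10^8 m/s propagation speed, a typical value for copper and fiber, are our assumptions):

```python
def link_delay(packet_bits, rate_bps, distance_m, queue_wait_s,
               processing_s=0.0, prop_speed=2e8):
    """Per-link delay = processing + queuing + transmission + propagation."""
    transmission = packet_bits / rate_bps  # time to push all bits onto the link
    propagation = distance_m / prop_speed  # time for one bit to cross the link
    return processing_s + queue_wait_s + transmission + propagation
```

For example, a 1000-byte packet on a 1 Mbps, 200 km link with an empty queue takes 8 ms to transmit plus 1 ms to propagate, for a 9 ms link delay. Note that only the queuing term depends on how loaded the network is, which is why queuing theory is the tool used to predict it.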
[Figure: packets queued at successive nodes along the path; each node's queue has a head and a tail, and the average number of customers in the queue is the quantity of interest]
Queuing Theory
Answering such questions gave rise to different theorems and notations. One universal need, though, is the appropriate modeling of the arrival and service processes. We avoid the detailed mathematical discussion here (it is included in Probability and Statistics and likely covered in your Operations Research courses). Interested students can pursue further readings on probability, statistics, and system modeling, but this is beyond the scope of the course. Case study: Assess the amount of round-trip delay you may experience in pinging a particular server at different times of the day.
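For the simplest arrival and service model, the M/M/1 queue (Poisson arrivals, exponential service, one server), the standard results can be computed directly. A sketch (the function name is ours): lambda is the arrival rate, mu the service rate, and Little's law L = lambda × W ties the average number in the system to the average delay.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 results: utilization rho, mean number in system L,
    and mean time in system W (the average packet delay)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate
    L = rho / (1 - rho)                     # average customers in the system
    W = 1 / (service_rate - arrival_rate)   # average time in the system
    return rho, L, W
```

For example, at 50 packets/s arriving into a server that can handle 100 packets/s, utilization is 0.5, one packet is in the system on average, and the average delay is 20 ms. As the arrival rate approaches the service rate, W grows without bound, which matches the intuition that heavily loaded links exhibit long queuing delays.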
In Summary
Queuing models provide qualitative insights on the performance of computer networks, and quantitative predictions of average packet delay. To obtain tractable queuing models for computer networks, it is frequently necessary to make simplifying assumptions. A more accurate alternative is simulation, which, however, can be slow and expensive.