Networking Part 2
The window size shrinks to such an extent that the data being transmitted is smaller than the TCP header.
The sender should send only the first byte on receiving one byte of data from the application.
The sender should buffer all remaining bytes until the outstanding byte is acknowledged.
After receiving the acknowledgement, the sender should send the buffered data in one TCP segment, and then buffer data again until the previously sent data is acknowledged.
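The buffering rule described above (essentially Nagle's algorithm) can be sketched as a toy sender model. The class and method names are illustrative, not a real TCP implementation:

```python
class NagleSender:
    """Toy model of the buffering rule above (a sketch, not real TCP)."""

    def __init__(self):
        self.buffer = bytearray()      # bytes waiting for the outstanding ACK
        self.outstanding = False       # is un-ACKed data in flight?
        self.sent_segments = []        # segments handed to the network

    def write(self, data: bytes):
        if not self.outstanding:
            # Nothing in flight: send immediately (e.g. the first byte).
            self.sent_segments.append(bytes(data))
            self.outstanding = True
        else:
            # Data in flight: buffer until it is acknowledged.
            self.buffer.extend(data)

    def on_ack(self):
        # ACK received: flush everything buffered as ONE segment.
        self.outstanding = False
        if self.buffer:
            self.sent_segments.append(bytes(self.buffer))
            self.buffer.clear()
            self.outstanding = True
```

Writing one byte at a time (`b"h"`, then `b"e"`, `b"l"`, `b"l"`, `b"o"`) sends the first byte alone; after the ACK, the remaining four bytes go out as a single segment.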
In this technique, each frame is sent with a sequence number. The sequence numbers are used to find missing data at the receiver end. The sliding window technique also uses the sequence numbers to avoid delivering duplicate data.
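A minimal sketch of how a receiver can use sequence numbers to drop duplicates and spot missing frames (function and variable names are illustrative):

```python
def receive(frames, expected=0):
    """Sketch: sequence numbers let the receiver drop duplicates
    and report gaps (missing frames)."""
    delivered, missing = [], []
    for seq, payload in sorted(frames):
        if seq < expected:
            continue                   # duplicate: already delivered
        while expected < seq:
            missing.append(expected)   # gap in the sequence numbers
            expected += 1
        delivered.append(payload)
        expected += 1
    return delivered, missing
```

For frames `[(0, "a"), (1, "b"), (1, "b"), (3, "d")]`, the duplicate of frame 1 is dropped and frame 2 is reported missing.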
Go-Back-N ARQ
Selective Repeat ARQ
Go-Back-N ARQ
The Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is a data link layer protocol that uses the sliding window method. In this protocol, if any frame is corrupted or lost, that frame and all subsequent frames have to be sent again.
The size of the sender window is N in this protocol. In Go-Back-8, for example, the size of the sender window is 8. The receiver window size is always 1.
If the receiver receives a corrupted frame, it discards it; the receiver does not accept a corrupted frame. When the sender's timer expires, the sender sends the frame again. The design of the Go-Back-N ARQ protocol is shown below.
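The retransmission behaviour can be illustrated with a toy trace. For simplicity, this sketch assumes a frame listed as lost fails only on its first transmission, after which the timer fires and the sender goes back:

```python
def go_back_n(num_frames, lost_first_attempt, N=4):
    """Toy Go-Back-N trace (a sketch): frames in `lost_first_attempt`
    are dropped on their first transmission; the sender then resends
    that frame AND every later frame it had already put on the wire."""
    sent = []                # order of frames transmitted
    failed_once = set()      # frames whose first copy was dropped
    base = 0                 # first un-ACKed frame
    while base < num_frames:
        lost_at = None
        # Send the whole window; the sender does not yet know about losses.
        for seq in range(base, min(base + N, num_frames)):
            sent.append(seq)
            if seq in lost_first_attempt and seq not in failed_once:
                failed_once.add(seq)
                if lost_at is None:
                    lost_at = seq    # receiver discards every later frame
        if lost_at is None:
            base = min(base + N, num_frames)   # window fully acknowledged
        else:
            base = lost_at                     # timeout: go back and resend
    return sent
```

With 6 frames, window N=4, and frame 2 lost once, frames 3 is transmitted twice even though it arrived intact the first time, which is the hallmark of Go-Back-N.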
Selective Repeat ARQ
If the receiver receives a corrupt frame, it does not simply discard it. It sends a negative acknowledgment to the sender, and the sender resends that frame as soon as it receives the negative acknowledgment. There is no waiting for a time-out to resend that frame. The design of the Selective Repeat ARQ protocol is shown below.
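A comparable toy trace for Selective Repeat, under the same simplifying assumption that a frame is lost only on its first attempt: only the NAK'd frame is resent, and later frames go out once.

```python
def selective_repeat(num_frames, lost_first_attempt):
    """Toy Selective Repeat trace (a sketch): a NAK triggers immediate
    retransmission of ONLY the corrupted/lost frame; the receiver keeps
    the later frames it has already received."""
    sent = []
    for seq in range(num_frames):
        sent.append(seq)
        if seq in lost_first_attempt:
            sent.append(seq)     # resend just this frame on the NAK
    return sent
```

With 6 frames and frame 2 lost once, only frame 2 is transmitted twice, in contrast to Go-Back-N's retransmission of the whole tail of the window.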
Q 3. Block Cipher
A block cipher takes a block of plaintext bits and generates a block of ciphertext bits, generally of the same size. The block size is fixed in a given scheme. The choice of block size does not directly affect the strength of the encryption scheme; the strength of the cipher depends on the key length.
Block Size
Though any block size is acceptable, the following aspects are borne in mind while selecting a block size.
Avoid very small block sizes − Say a block size is m bits. Then the number of possible plaintext blocks is 2^m. If the attacker discovers the plaintext blocks corresponding to some previously sent ciphertext blocks, the attacker can launch a type of 'dictionary attack' by building up a dictionary of plaintext/ciphertext pairs sent using that encryption key. A larger block size makes the attack harder, as the dictionary needs to be larger.
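The arithmetic behind the dictionary attack is simple: an m-bit block admits 2^m distinct plaintext blocks, so a complete codebook for one key needs 2^m entries.

```python
def dictionary_size(block_bits):
    """Number of entries a complete plaintext/ciphertext dictionary
    needs for an m-bit block cipher under one key."""
    return 2 ** block_bits

# An 8-bit block needs only 256 entries -- trivially tabulated.
# A 64-bit block (e.g. DES or IDEA) needs 2**64 entries, ~1.8e19.
```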
Do not have a very large block size − With a very large block size, the cipher becomes inefficient to operate. Plaintexts shorter than the block size need to be padded before being encrypted.
Too much padding makes the system inefficient. Also, padding may render the system insecure at times, if the padding is always done with the same bits.
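One common padding scheme that avoids the "same bits always" problem is PKCS#7-style padding, sketched below: the pad value encodes its own length, so it can always be stripped unambiguously. The block size used here is illustrative.

```python
def pad(data: bytes, block_size: int = 16) -> bytes:
    """PKCS#7-style padding (a sketch): append n copies of the byte n
    so the total length becomes a multiple of block_size. A full block
    of padding is added when the data is already aligned."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def unpad(padded: bytes) -> bytes:
    """Strip the padding by reading its length from the last byte."""
    return padded[:-padded[-1]]
```

Because the pad bytes carry the pad length, even an exact-multiple plaintext round-trips correctly through pad/unpad.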
Data Encryption Standard (DES) − The popular block cipher of the 1990s. It is now considered a 'broken' block cipher, due primarily to its small key size.
Triple DES − It is a variant scheme based on repeated DES applications. It is still a respected block cipher, but inefficient compared to the newer, faster block ciphers available.
Advanced Encryption Standard (AES) − It is a relatively new block cipher based on the
encryption algorithm Rijndael that won the AES design competition.
IDEA − It is a sufficiently strong block cipher with a block size of 64 bits and a key size of 128 bits. A number of applications use IDEA encryption, including early versions of the Pretty Good Privacy (PGP) protocol. The IDEA scheme has seen restricted adoption due to patent issues.
Twofish − This block cipher uses a block size of 128 bits and a key of variable length. It was one of the AES finalists. It is based on the earlier block cipher Blowfish, which has a block size of 64 bits.
Serpent − A block cipher with a block size of 128 bits and key lengths of 128, 192, or 256 bits, which was also an AES competition finalist. It is slower than the other finalists but has a more conservative security design.
Frame Relay is a packet-switching network protocol that is designed to work at the data link layer of the network. It is used to connect Local Area Networks (LANs) and transmit data across Wide Area Networks (WANs). It is a better alternative to a point-to-point network, in which connecting multiple nodes requires a separate dedicated link between each pair of nodes. Frame Relay allows transmission of different-sized packets and dynamic bandwidth allocation, and it provides a congestion control mechanism to reduce the network overhead due to congestion. It does not, however, provide error control or flow control mechanisms.
Working:
Frame relay switches set up virtual circuits to connect multiple LANs into a WAN. Frame relay transfers data between LANs across the WAN by dividing the data into packets known as frames and transmitting these frames across the network. It supports communication with multiple LANs over shared physical links or private lines.
A frame relay network is established between the border devices of the LANs, such as routers, and the service provider network that connects all the LANs. Each LAN has an access link that connects its router to the service provider network, terminated by a frame relay switch. The access link is the private physical link used for communication with other LANs over the WAN. The frame relay switch is responsible for terminating the access link and providing frame relay services.
For data transmission, the LAN's router (or other border device attached to the access link) sends the data packets over the access link. The packet is examined by a frame relay switch to obtain the Data Link Connection Identifier (DLCI), which indicates the packet's destination. The frame relay switch already has information about the addresses of the LANs connected to the network, so it identifies the destination LAN by looking at the DLCI of the data packet. The DLCI identifies the virtual circuit (a logical path between nodes rather than a dedicated physical one) between the source and destination networks. The switch forwards the packet to the frame relay switch of the destination LAN, which in turn transfers the data packet to the destination LAN over its access link. In this way, a LAN is connected with multiple other LANs while sharing a single physical link for data transmission.
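The DLCI lookup step can be sketched as a simple table lookup. The DLCI numbers and link names below are hypothetical, chosen only to illustrate the mapping from DLCI to outgoing access link:

```python
# Hypothetical forwarding table of a frame relay switch:
# DLCI (virtual circuit id) -> outgoing access link of the destination LAN.
forwarding_table = {
    100: "access-link-to-LAN-B",   # virtual circuit toward LAN B
    200: "access-link-to-LAN-C",   # virtual circuit toward LAN C
}

def switch_frame(dlci: int) -> str:
    """Sketch of the lookup step: map the frame's DLCI to the access
    link over which the frame should leave the switch."""
    try:
        return forwarding_table[dlci]
    except KeyError:
        raise ValueError(f"unknown DLCI {dlci}: no virtual circuit configured")
```

A frame arriving with DLCI 100 is forwarded toward LAN B; an unknown DLCI means no virtual circuit was configured for it.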
ATM is a technology that emerged from the development of broadband ISDN in the 1970s and 1980s, and it can be considered an evolution of packet switching. Each cell is 53 bytes long − a 5-byte header and a 48-byte payload. Making an ATM call requires first sending a message to set up a connection.
Subsequently, all cells follow the same path to the destination. ATM can handle both constant-rate and variable-rate traffic, so it can carry multiple types of traffic with end-to-end quality of service. ATM is independent of the transmission medium: cells may be sent on a wire or fiber by themselves, or they may be packaged inside the payload of other carrier systems. ATM networks use packet (or "cell") switching with virtual circuits. Its design helps in the implementation of high-performance multimedia networking.
ATM Cell Format –
Information is transmitted in ATM in the form of fixed-size units called cells. As noted above, each cell is 53 bytes long, consisting of a 5-byte header and a 48-byte payload.
ATM cells come in two header format types: UNI (User-Network Interface) and NNI (Network-Network Interface).
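The fixed 53-byte cell structure can be sketched as follows. The header contents are left opaque here; only the sizes (5-byte header, 48-byte payload) are modelled:

```python
HEADER_BYTES = 5
PAYLOAD_BYTES = 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES   # 53, fixed for every cell

def make_cell(header: bytes, payload: bytes) -> bytes:
    """Sketch: assemble one fixed-size ATM cell. Short payloads are
    zero-padded to 48 bytes, since every cell is exactly 53 bytes."""
    if len(header) != HEADER_BYTES:
        raise ValueError("ATM header is always 5 bytes")
    if len(payload) > PAYLOAD_BYTES:
        raise ValueError("payload over 48 bytes must be split into more cells")
    return header + payload.ljust(PAYLOAD_BYTES, b"\x00")
```

Whatever the payload length, the cell that goes on the wire is always 53 bytes, which is what lets ATM switches process cells at a fixed rate.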
OSI stands for Open Systems Interconnection. It was developed by ISO, the International Organization for Standardization.
At the sender’s side: The transport layer receives the formatted data from the upper layers,
performs Segmentation, and also implements Flow & Error control to ensure proper data
transmission. It also adds Source and Destination port numbers in its header and forwards the
segmented data to the Network Layer.
Note: The sender needs to know the port number associated with the receiver's application. Generally, this destination port number is configured, either by default or manually. For example, when a client requests a page from a web server, it typically uses destination port 80, because this is the default port assigned to web servers. Many applications have default ports assigned.
At the receiver's side: The transport layer reads the port number from the header and forwards the received data to the respective application. It also performs sequencing and reassembly of the segmented data.
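Port-based demultiplexing at the receiver can be sketched as a table lookup from destination port to application. The port-to-application table below is illustrative:

```python
# Hypothetical table of applications bound to ports on the receiving host.
applications = {
    80: "web server",      # default HTTP port, as mentioned above
    25: "mail server",     # SMTP default, shown for illustration
}

def demultiplex(segment):
    """Sketch: read the destination port from the segment and hand the
    payload to the application bound to that port (None if unbound)."""
    dest_port, payload = segment
    app = applications.get(dest_port)
    if app is None:
        return None        # no application listening on this port
    return (app, payload)
```

A segment addressed to port 80 reaches the web server; a segment for an unbound port has no application to receive it.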