
Module 05

University questions and answers


Q. Explain the open loop congestion control and closed loop congestion control
policies in detail.
Q. Compare open loop congestion control and closed loop congestion control.

Congestion Control:
Too many packets present in (a part of) the network cause packet delay and loss, which degrade
performance. This situation is called congestion. In other words, congestion in a network may
occur if the load on the network (the number of packets sent to the network) is greater than the
capacity of the network (the number of packets a network can handle).
The network and transport layers share the responsibility for handling congestion. Since
congestion occurs within the network, it is the network layer that directly experiences it and
must ultimately determine what to do with the excess packets. However, the most effective way
to control congestion is to reduce the load that the transport layer is placing on the network.
Congestion control refers to the mechanisms and techniques to control the congestion and keep
the load below the capacity.
Effects of Congestion:
As delay increases, performance decreases.
If delay increases, retransmission occurs, making the situation even worse.

Congestion control refers to techniques and mechanisms that can either prevent congestion,
before it happens, or remove congestion, after it has happened.
The general principles of congestion control are as follows:
Open Loop Principle:
attempt to prevent congestion from happening
after the system is running, no corrections are made
Closed Loop Principle:
monitor the system to detect congestion
pass the information to places where action can be taken
adjust system operation to correct the problem

Open Loop Congestion Control:

In open-loop congestion control, policies are applied to prevent congestion before it happens.
In these mechanisms, congestion control is handled by either the source or the destination.
The policies that can prevent congestion are:
Retransmission Policy:
This policy governs how the retransmission of packets is handled. If the sender feels that a sent
packet is lost or corrupted, the packet needs to be retransmitted. Retransmission, however, may
increase the congestion in the network.
To prevent congestion, the retransmission policy and retransmission timers must be designed so
that they do not aggravate congestion while still optimizing efficiency.
Window Policy
The type of window used at the sender side may also affect congestion. With a Go-Back-N
window, several packets are resent even though some of them may have been received
successfully at the receiver side. This duplication may increase congestion in the network and
make it worse. Therefore, the Selective Repeat window should be adopted, as it resends only
the specific packet that may have been lost.
Acknowledgement Policy
Since acknowledgements are also part of the load on the network, the acknowledgement policy
imposed by the receiver may also affect congestion. Several approaches can be used to prevent
congestion related to acknowledgements:
The receiver may send one acknowledgement for N packets rather than an acknowledgement
for every single packet. The receiver may also send an acknowledgement only if it has a packet
to send or a timer expires.
Discarding Policy
A good discarding policy adopted by routers can prevent congestion and, at the same time,
partially discard corrupted or less sensitive packets while maintaining the quality of the
message.
For example, in audio transmission, routers can discard less sensitive packets to prevent
congestion while maintaining the quality of the audio.
Admission Policy
In the admission policy, a mechanism is used to prevent congestion: switches in a flow first
check the resource requirements of a flow before admitting it further into the network. If there
is a chance of congestion, or if the network is already congested, the router should deny
establishing a virtual-circuit connection to prevent further congestion.
Closed Loop Congestion Control
Closed-loop congestion control mechanisms try to reduce the effects of congestion after it
happens.
Back Pressure:
Backpressure is a technique in which a congested node stops receiving packets from its
upstream node. This may cause the upstream node or nodes to become congested in turn, and
they then reject data from the nodes above them. Backpressure is a node-to-node congestion
control technique that propagates in the direction opposite to the flow of data. The backpressure
technique can be applied only to virtual-circuit networks, where each node knows its upstream
node.
For example, if the third node is congested, it stops receiving packets; as a result, the second
node may become congested because its outgoing data flow slows down. Similarly, the first
node may become congested and inform the source to slow down.
Choke Packet:
The choke packet technique is applicable to both virtual-circuit networks and datagram subnets.
A choke packet is a packet sent by a node to the source to inform it of congestion. Each router
monitors its resources and the utilization of each of its output lines. Whenever the resource
utilization exceeds a threshold value set by the administrator, the router sends a choke packet
directly to the source, giving it feedback to reduce the traffic. The intermediate nodes through
which the packets have traveled are not warned about the congestion.
Implicit Signaling:
In this method, there is no communication between the congested node or nodes and the source.
The source guesses that there is congestion somewhere in the network from other symptoms.
For example, when a source sends several packets and there is no acknowledgment for a while,
one assumption is that the network is congested and the source should slow down.
Explicit Signaling:
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the
source or the destination to inform it about the congestion. The difference between the choke
packet technique and explicit signaling is that in explicit signaling the signal is included in the
packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either the forward or the backward direction.
Forward Signaling: In forward signaling, the signal is sent in the direction of the congestion.
The destination is warned about the congestion, and the receiver adopts policies to prevent
further congestion.
Backward Signaling: In backward signaling, the signal is sent in the direction opposite to the
congestion. The source is warned about the congestion and needs to slow down.

Q. Explain the TCP connection establishment and Connection release.


Q. Explain the three-way handshake technique in TCP.

TCP Connection
TCP is connection-oriented. A connection-oriented transport protocol establishes a logical path
between the source and the destination.
In TCP, connection-oriented transmission requires three phases: connection establishment, data
transfer, and connection termination.
Connection Establishment
TCP transmits data in full-duplex mode. When two TCPs in two machines are connected, they
are able to send segments to each other simultaneously.
Three-Way Handshaking: The connection establishment in TCP is called three-way
handshaking. The process starts with the server. The server program tells its TCP that it is
ready to accept a connection. This request is called a passive open. Although the server TCP is
ready to accept a connection from any machine in the world, it cannot make the connection
itself.
The client program issues a request for an active open. A client that wishes to connect to an
open server tells its TCP to connect to a particular server. TCP can now start the three-way
handshaking process.
1. The client sends the first segment, a SYN segment, in which only the SYN flag is set. This
segment is for synchronization of sequence numbers. The client in our example chooses a
random number as the first sequence number and sends this number to the server. This
sequence number is called the initial sequence number (ISN).
A SYN segment cannot carry data, but it consumes one sequence number.
2. The server sends the second segment, a SYN + ACK segment, with two flag bits set: SYN
and ACK. This segment has a dual purpose: it is a SYN segment for communication in the
other direction, and it serves as the acknowledgment of the SYN segment from the client.
A SYN + ACK segment cannot carry data, but it does consume one sequence number.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the
receipt of the second segment with the ACK flag and acknowledgment number field. An
ACK segment, if carrying no data, consumes no sequence number.
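As an illustration (with hypothetical initial sequence numbers chosen here only for the
example), the exchange could look like this:
Client to Server: SYN, seq = 8000
Server to Client: SYN + ACK, seq = 15000, ack = 8001
Client to Server: ACK, seq = 8001, ack = 15001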

Connection Termination:
Either of the two parties involved in exchanging data (client or server) can close the connection,
although it is usually initiated by the client.
Using Three-Way Handshaking: The client, after receiving a close command from its process,
sends a FIN segment. The server, after receiving the FIN segment, sends a FIN + ACK segment
to confirm the receipt of the FIN segment and at the same time announce the closing of the
connection in the other direction. The client then sends an ACK segment to confirm the receipt
of the FIN segment from the server.
Half Close: In TCP, one end can stop sending data while still receiving data; this is called a half
close. It normally occurs when the client has finished sending its data but still needs to receive
the result processed by the server.

Q. Differentiate between TCP and UDP.

Type of Service:
TCP: TCP is a connection-oriented protocol. Connection orientation means that the
communicating devices should establish a connection before transmitting data and should close
the connection after transmitting the data.
UDP: UDP is a datagram-oriented protocol. There is no overhead for opening a connection,
maintaining a connection, or terminating a connection. UDP is efficient for broadcast and
multicast types of network transmission.

Reliability:
TCP: TCP is reliable, as it guarantees the delivery of data to the destination.
UDP: The delivery of data to the destination cannot be guaranteed in UDP.

Error checking mechanism:
TCP: TCP provides extensive error-checking mechanisms, because it provides flow control and
acknowledgment of data.
UDP: UDP has only the basic error-checking mechanism using checksums.

Acknowledgment:
TCP: An acknowledgment segment is present.
UDP: No acknowledgment segment.

Sequence:
TCP: Sequencing of data is a feature of TCP; this means that packets arrive in order at the
receiver.
UDP: There is no sequencing of data in UDP. If ordering is required, it has to be managed by
the application layer.

Speed:
TCP: TCP is comparatively slower than UDP.
UDP: UDP is faster, simpler, and more efficient than TCP.

Retransmission:
TCP: Retransmission of lost packets is possible in TCP.
UDP: There is no retransmission of lost packets in UDP.

Header Length:
TCP: TCP has a variable-length header of 20 to 60 bytes.
UDP: UDP has a fixed-length header of 8 bytes.

Weight:
TCP: TCP is heavy-weight.
UDP: UDP is lightweight.

Handshaking Techniques:
TCP: Uses handshakes such as SYN, SYN-ACK, ACK.
UDP: It is a connectionless protocol, i.e., no handshake.

Broadcasting:
TCP: TCP doesn't support broadcasting.
UDP: UDP supports broadcasting.

Protocols:
TCP: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet.
UDP: UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.

Stream Type:
TCP: The TCP connection is a byte stream.
UDP: The UDP connection is a message stream.

Overhead:
TCP: Low, but higher than UDP.
UDP: Very low.

Applications:
TCP: TCP is primarily used in situations where a safe and reliable communication procedure is
necessary, such as email, web browsing, and military services.
UDP: UDP is used in situations where quick communication is necessary but reliability is not a
concern, such as VoIP, game streaming, and video and music streaming.
Q. Explain the Slow-Start algorithm for TCP's congestion handling policy.

TCP Congestion Control:


The sender's window size is determined not only by the receiver but also by congestion in the
network. The sender has two pieces of information: the receiver-advertised window size and
the congestion window size. The actual size of the window is the minimum of these two.
Actual window size = minimum (rwnd, cwnd)
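For example (with hypothetical values), if the receiver advertises rwnd = 3000 bytes while the
sender's congestion window is cwnd = 5000 bytes, the actual window size is
min(3000, 5000) = 3000 bytes.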
Congestion Policy
TCP's general policy for handling congestion is based on three phases: slow start, congestion
avoidance, and congestion detection. In the slow-start phase, the sender starts with a very low
rate of transmission but increases the rate rapidly until it reaches a threshold. When the
threshold is reached, the rate of increase is reduced to avoid congestion. Finally, if congestion is
detected, the sender goes back to the slow-start or congestion-avoidance phase, depending on
how the congestion was detected.
Slow Start: Exponential Increase:

The slow-start algorithm is based on the idea that the size of the congestion window (cwnd)
starts with one maximum segment size (MSS), but it increases by one MSS each time an
acknowledgment arrives.
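As a hypothetical illustration, if cwnd starts at 1 MSS and the whole window is acknowledged
in each round-trip time, cwnd grows as 1, 2, 4, 8, ... MSS; the increase per RTT is exponential,
which is why this phase must be bounded by a threshold.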

Congestion Avoidance: Additive Increase


To avoid congestion before it happens, we must slow down this exponential growth. TCP
defines another algorithm called congestion avoidance, which increases the cwnd additively
instead of exponentially. When the size of the congestion window reaches the slow-start
threshold (in the case where cwnd = i), the slow-start phase stops and the additive phase begins.
In this algorithm, each time the whole "window" of segments is acknowledged, the size of the
congestion window is increased by one MSS. A window here is the number of segments
transmitted during one RTT.
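Continuing the hypothetical illustration, if the slow-start threshold is 8 MSS, then once cwnd
reaches 8 MSS it grows as 8, 9, 10, 11, ... MSS, one MSS per RTT, i.e., additively (linearly)
rather than exponentially.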
Congestion Detection: Multiplicative Decrease:
If congestion occurs, the congestion window size must be decreased. The only way the sender
can guess that congestion has occurred is by the need to retransmit a segment. However,
retransmission can occur in one of two cases: when a timer times out or when three duplicate
ACKs are received. In both cases, the size of the threshold is dropped to one-half, a
multiplicative decrease.
TCP implementations have two reactions:
1. If a time-out occurs, there is a stronger possibility of congestion; a segment has probably
been dropped in the network, and there is no news about the following sent segments.
In this case TCP reacts strongly:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the size of one segment.
c. It starts the slow-start phase again.
2. If three duplicate ACKs are received, there is a weaker possibility of congestion; a segment
may have been dropped, but some segments after it have probably arrived safely, since
three duplicate ACKs were received. This is called fast retransmission and fast recovery.
In this case, TCP has a weaker reaction:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the value of the threshold.
c. It starts the congestion avoidance phase.
To summarize, an implementation reacts to congestion detection in one of the following ways:
If detection is by time-out, a new slow-start phase starts.
If detection is by three duplicate ACKs, a new congestion avoidance phase starts.
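The overall policy can be summarized in a minimal C sketch (the variable names cwnd and
ssthresh, the use of MSS units, and the per-ACK additive increase of 1/cwnd are assumptions
made here for illustration; a real TCP implementation is considerably more involved):

/* Hypothetical congestion-control state, measured in MSS units. */
static double cwnd = 1;       /* congestion window                     */
static double ssthresh = 64;  /* slow-start threshold (assumed start)  */

/* Called for each new acknowledgment that arrives. */
void on_ack(void)
{
    if (cwnd < ssthresh)
        cwnd += 1;            /* slow start: exponential growth per RTT      */
    else
        cwnd += 1.0 / cwnd;   /* congestion avoidance: about +1 MSS per RTT  */
}

/* Called when the retransmission timer expires (strong reaction). */
void on_timeout(void)
{
    ssthresh = cwnd / 2;      /* multiplicative decrease       */
    cwnd = 1;                 /* restart the slow-start phase  */
}

/* Called when three duplicate ACKs are received (weaker reaction). */
void on_three_duplicate_acks(void)
{
    ssthresh = cwnd / 2;      /* multiplicative decrease             */
    cwnd = ssthresh;          /* continue with congestion avoidance  */
}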
Q. Write a program for a client-server application using socket programming.

Socket Programming:
A typical network application consists of a pair of programs—a client program and a server
program— residing in two different end systems. When these two programs are executed, a
client process and a server process are created, and these processes communicate with each
other by reading from, and writing to, sockets. When creating a network application, the
developer’s main task is therefore to write the code for both the client and server programs,
called socket programming.
There are two types of network applications. One type is an implementation whose operation
is specified in a protocol standard, such as an RFC or some other standards document; such an
application is sometimes referred to as “open,” since the rules specifying its operation are
known to all. For such an implementation, the client and server programs must conform to the
rules dictated by the RFC.
The other type of network application is a proprietary network application. In this case the
client and server programs employ an application-layer protocol that has not been openly
published in an RFC or elsewhere. A single developer (or development team) creates both the
client and server programs, and the developer has complete control over what goes in the code.
But because the code does not implement an open protocol, other independent developers will
not be able to develop code that interoperates with the application.
During the development phase, one of the first decisions the developer must make is whether
the application is to run over TCP or over UDP.
When a web page is opened, a socket is automatically created to send and receive data for that
process. The socket program at the source communicates with the socket program at the
destination machine using the associated source and destination port numbers. When the web
page is closed, the sockets are automatically terminated.
1. server.c
/* TCP server: receives a line from the client and sends back its reverse. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define MAXLINE 20
#define SERV_PORT 5777

int main(void)
{
    int i, j;
    ssize_t n;
    char line[MAXLINE], revline[MAXLINE];
    int listenfd, connfd;
    socklen_t clilen;
    struct sockaddr_in servaddr, cliaddr;

    /* Create a TCP (stream) socket. */
    listenfd = socket(AF_INET, SOCK_STREAM, 0);

    /* Fill in the server address: any local interface, well-known port. */
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(SERV_PORT);

    /* Bind the address to the socket and convert it to a listening socket. */
    bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr));
    listen(listenfd, 1);

    for (;;) {
        /* Wait for a client connection (completes the three-way handshake). */
        clilen = sizeof(cliaddr);
        connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &clilen);
        printf("CONNECTED TO CLIENT\n");

        /* Read lines from the client until it closes the connection. */
        while ((n = read(connfd, line, MAXLINE)) > 0) {
            line[n - 1] = '\0';           /* replace the trailing newline */
            j = 0;
            for (i = n - 2; i >= 0; i--)  /* reverse the received line    */
                revline[j++] = line[i];
            revline[j] = '\0';
            write(connfd, revline, n);    /* send the reversed line back  */
        }
        close(connfd);
    }
}
2. client.c
/* TCP client: sends a line to the server and prints the reversed reply. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define MAXLINE 20
#define SERV_PORT 5777

int main(void)
{
    char sendline[MAXLINE], revline[MAXLINE];
    ssize_t n;
    int sockfd;
    struct sockaddr_in servaddr;

    /* Create a TCP socket and fill in the server's address. */
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons(SERV_PORT);
    /* Loopback address assumed here; replace with the server's IP address. */
    inet_pton(AF_INET, "127.0.0.1", &servaddr.sin_addr);

    /* Connect to the server (performs the three-way handshake). */
    connect(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr));

    printf("Enter the data to be sent\n");
    while (fgets(sendline, MAXLINE, stdin) != NULL) {
        write(sockfd, sendline, strlen(sendline));
        printf("\nLine sent");
        n = read(sockfd, revline, MAXLINE - 1);
        if (n <= 0)
            break;
        revline[n] = '\0';
        printf("\nReverse of the given sentence is %s\n", revline);
    }
    close(sockfd);
    exit(0);
}
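On a typical Linux system, the two programs above might be compiled and run as follows (the
file names and port are the ones assumed in the listings; run the server first in one terminal and
the client in another):

gcc server.c -o server
gcc client.c -o client
./server
./client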

Q. What are transport service primitives?


Q. Write a short note on Berkeley Sockets.

Q. Draw and explain the TCP and UDP headers and also write their functions.

USER DATAGRAM PROTOCOL


The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol. If a
process wants to send a small message and does not care much about reliability, it can use UDP.

User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes made of four fields,
each of 2 bytes (16 bits). The first two fields define the source and destination port numbers.
The third field defines the total length of the user datagram, header plus data. The 16 bits can
define a total length of 0 to 65,535 bytes. However, the total length needs to be less because a
UDP user datagram is stored in an IP datagram with the total length of 65,535 bytes. The last
field can carry the optional checksum.
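As a sketch, the fixed 8-byte header can be described by a C structure such as the one below
(the structure and field names are illustrative only; on the wire, all fields are carried in network
byte order):

#include <stdint.h>

/* Illustrative layout of the 8-byte UDP header; each field is 16 bits. */
struct udp_header {
    uint16_t source_port;       /* port number of the sending process   */
    uint16_t destination_port;  /* port number of the receiving process */
    uint16_t total_length;      /* header plus data, in bytes           */
    uint16_t checksum;          /* optional checksum                    */
};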
UDP Services:
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a combination of IP
addresses and port numbers.
Connectionless Services
UDP provides a connectionless service. This means that each user datagram sent by UDP is an
independent datagram. There is no relationship between the different user datagrams even if
they are coming from the same source process and going to the same destination program. The
user datagrams are not numbered. Also, unlike TCP, there is no connection establishment and
no connection termination. This means that each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol. There is no flow control, and hence no window mechanism.
The receiver may overflow with incoming messages. The lack of flow control means that the
process using UDP should provide for this service, if needed.
Error Control
There is no error control mechanism in UDP except for the checksum. This means that the
sender does not know if a message has been lost or duplicated. When the receiver detects an
error through the checksum, the user datagram is silently discarded. The lack of error control
means that the process using UDP should provide for this service, if needed.
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control. UDP assumes
that the packets sent are small and sporadic and cannot create congestion in the network.
Encapsulation and Decapsulation
To send a message from one process to another, the UDP protocol encapsulates and
decapsulates messages.
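A minimal sketch of a UDP sender in C illustrates this connectionless behavior: each call to
sendto hands one independent user datagram to UDP, with no connection establishment or
termination (the address 127.0.0.1 and port 9999 are assumed purely for illustration):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* connectionless UDP socket */
    struct sockaddr_in dest;

    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);                      /* assumed port    */
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);  /* assumed address */

    const char *msg = "hello";
    /* Each sendto() produces one self-contained user datagram. */
    sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&dest, sizeof(dest));
    return 0;
}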

TRANSMISSION CONTROL PROTOCOL


Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol. TCP
explicitly
defines connection establishment, data transfer, and connection teardown phases to provide a
connection-oriented service.
Source port address. This is a 16-bit field that defines the port number of the application
program
in the host that is sending the segment.
Destination port address. This is a 16-bit field that defines the port number of the application
program in the host that is receiving the segment.
Sequence number. This 32-bit field defines the number assigned to the first byte of data
contained in this segment. TCP is a stream transport protocol. To ensure connectivity, each byte
to be transmitted is numbered. The sequence number tells the destination which byte in this
sequence is the first byte in the segment. During connection establishment (discussed later)
each party uses a random number generator to create an initial sequence number (ISN), which
is usually different in each direction.
Acknowledgment number. This 32-bit field defines the byte number that the receiver of the
segment is expecting to receive from the other party. If the receiver of the segment has
successfully received byte number x from the other party, it returns x+1 as the acknowledgment
number. Acknowledgment and data can be piggybacked together.
Header length. This 4-bit field indicates the number of 4-byte words in the TCP header. The
length of the header can be between 20 and 60 bytes. Therefore, the value of this field is always
between 5 (5x4=20) and 15 (15x4=60).
Control. This field defines 6 different control bits or flags: URG (urgent pointer is valid), ACK
(acknowledgment number is valid), PSH (push the data), RST (reset the connection), SYN
(synchronize sequence numbers), and FIN (terminate the connection). One or more of these bits
can be set at a time.

Window size. This field defines the window size of the sending TCP in bytes. Note that the
length
of this field is 16 bits, which means that the maximum size of the window is 65,535 bytes. This
value is normally referred to as the receiving window (rwnd) and is determined by the receiver.
The sender must obey the dictation of the receiver in this case.
Checksum. This 16-bit field contains the checksum. The calculation of the checksum for TCP
follows the same procedure as the one described for UDP
Urgent pointer. This 16-bit field, which is valid only if the urgent flag is set, is used when the
segment contains urgent data.
Options. There can be up to 40 bytes of optional information in the TCP header.
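For illustration, the fixed 20-byte part of the TCP header can be sketched as a C structure (this
is a simplified view; the 4-bit header length and the flag bits actually share bytes with reserved
bits, and all fields are carried in network byte order):

#include <stdint.h>

/* Simplified sketch of the fixed part of the TCP header. */
struct tcp_header {
    uint16_t source_port;       /* 16-bit source port number                    */
    uint16_t destination_port;  /* 16-bit destination port number               */
    uint32_t sequence_number;   /* number of the first data byte in the segment */
    uint32_t ack_number;        /* next byte expected from the other party      */
    uint8_t  data_offset;       /* upper 4 bits: header length in 4-byte words  */
    uint8_t  flags;             /* URG, ACK, PSH, RST, SYN, FIN control bits    */
    uint16_t window_size;       /* receive window (rwnd) in bytes               */
    uint16_t checksum;          /* checksum over header and data                */
    uint16_t urgent_pointer;    /* valid only when the URG flag is set          */
    /* up to 40 bytes of options may follow */
};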
