Chapter 5 Yearwise Marking
Ans.: The differences between the leaky bucket and the token bucket are given below:
The leaky bucket algorithm sends packets out at a constant rate regardless of how bursty the input traffic is, discarding packets when the bucket (queue) overflows. The token bucket, by contrast, accumulates tokens at a fixed rate and lets a packet be transmitted only if enough tokens are available, so traffic that has been idle can later be sent as a bounded burst.
The token bucket is an algorithm used in packet-switched computer networks and telecommunications networks. It can be used to check that data transmissions, in the form of packets, conform to defined limits on bandwidth and burstiness (a measure of the unevenness or variation in the traffic flow).
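A minimal token-bucket sketch is given below; the field names and units (ratePerSec, capacity, tokens counted in bytes) are illustrative assumptions, not a standard API. Tokens accumulate at a fixed rate up to the bucket size; a packet conforms, and may be sent, only if enough tokens are available, which is what permits bounded bursts.

// Minimal token-bucket sketch (illustrative only; names and units are assumptions).
public class TokenBucket {
    private final double ratePerSec;  // token refill rate, in tokens (bytes) per second
    private final double capacity;    // maximum number of tokens the bucket can hold
    private double tokens;            // tokens currently available
    private long lastRefillNanos;     // time of the last refill

    public TokenBucket(double ratePerSec, double capacity) {
        this.ratePerSec = ratePerSec;
        this.capacity = capacity;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    // Returns true if a packet of the given size conforms and may be transmitted now.
    public synchronized boolean tryConsume(int packetBytes) {
        long now = System.nanoTime();
        double elapsedSec = (now - lastRefillNanos) / 1e9;
        tokens = Math.min(capacity, tokens + elapsedSec * ratePerSec); // refill, capped at capacity
        lastRefillNanos = now;
        if (tokens >= packetBytes) {
            tokens -= packetBytes;  // conforming packet: spend the tokens and send it
            return true;
        }
        return false;               // non-conforming packet: queue, delay or drop it
    }
}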
TCP Header[5,4]
The Transmission Control Protocol (TCP) is one of the most important protocols of the Internet protocol suite. It is the most widely used protocol for data transmission in communication networks such as the Internet. The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.
Explain the UDP segment structure. Illustrate your answer with appropriate figures. [8]
Ans: The User Datagram Protocol (UDP) is the simplest transport-layer communication protocol in the TCP/IP protocol suite. It involves a minimum amount of communication mechanism. UDP is said to be an unreliable transport protocol, but it uses IP services, which provide a best-effort delivery mechanism. In UDP, the receiver does not generate an acknowledgement of the packet received and, in turn, the sender does not wait for any acknowledgement of the packet sent. This shortcoming makes the protocol unreliable but easier to process.
Features
• UDP is used when acknowledgement of data does not hold any significance.
• UDP is a good protocol for data flowing in one direction.
• UDP is simple and suitable for query-based communication.
• UDP is not connection oriented.
• UDP does not provide a congestion control mechanism.
• UDP does not guarantee ordered delivery of data.
• UDP is stateless.
• UDP is a suitable protocol for streaming applications such as VoIP and multimedia streaming.
UDP Header
The UDP header is as simple as its function. It contains four main fields:
• Source Port - This 16-bit field identifies the source port of the packet.
• Destination Port - This 16-bit field identifies the application-level service on the destination machine.
• Length - This 16-bit field specifies the entire length of the UDP packet (including the header). Its minimum value is 8 bytes, the size of the UDP header itself.
• Checksum - This field stores the checksum value generated by the sender before sending. In IPv4 the checksum is optional; when it is not used, the field is set to all zeros.
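As an illustration of how little the application has to supply, a minimal UDP send sketch in Java is shown below; the destination address 127.0.0.1 and port 9876 are arbitrary example values. The header fields listed above (source port, destination port, length, checksum) are filled in by the operating system.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal UDP sender sketch: the application supplies only the payload and the
// destination address/port; the UDP header fields described above are filled in
// by the operating system. Destination 127.0.0.1:9876 is an arbitrary example.
public class UdpSendExample {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        DatagramSocket socket = new DatagramSocket();          // OS assigns an ephemeral source port
        DatagramPacket packet = new DatagramPacket(
                payload, payload.length,
                InetAddress.getByName("127.0.0.1"), 9876);     // destination address and port
        socket.send(packet);   // no connection setup; no acknowledgement is awaited
        socket.close();
    }
}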
UDP Applications
Here are a few applications where UDP is used to transmit data:
• Domain Name Services
• Simple Network Management Protocol
• Trivial File Transfer Protocol
• Routing Information Protocol
Why is a port number used in networking? What are the services of the transport layer? [1+2]
A port number is a way to identify a specific process to which an Internet or other network message is to be forwarded when it arrives at a server. For the Transmission Control Protocol and the User Datagram Protocol, a port number is a 16-bit integer placed in the header appended to a message unit. The port number is passed logically between the client and server transport layers, and physically between the transport layer and the Internet Protocol layer, and forwarded on.
Port numbers thus let the transport layer deliver incoming data to the correct application; for example, a web server conventionally listens on TCP port 80, so segments carrying destination port 80 are handed to the web server process.
2nd part
Transport layer services are conveyed to an application via a programming interface to the
transport layer protocols. The services may include the following features:
• Connection-oriented communication:
It is normally easier for an application to interpret a connection as a data stream than to deal with the underlying connectionless models, such as the datagram model of the User Datagram Protocol (UDP) and of the Internet Protocol (IP).
• Same-order delivery:
The network layer does not generally guarantee that packets of data will arrive in the same order they were sent, but this is often a desirable feature. It is usually achieved through segment numbering, with the receiver passing segments to the application in order. This can cause head-of-line blocking.
• Reliability:
Packets may be lost during transport due to network congestion and errors. By means of
an error detection code, such as a checksum, the transport protocol may check that the data
is not corrupted, and verify correct receipt by sending an ACK or NACK message to the
sender. Automatic repeat request schemes may be used to retransmit lost or corrupted data.
• Flow control:
The rate of data transmission between two nodes must sometimes be managed to prevent a fast
sender from transmitting more data than can be supported by the receiving data buffer, causing
a buffer overrun. This can also be used to improve efficiency by reducing buffer underrun.
• Congestion avoidance:
Congestion control regulates traffic entry into a telecommunications network so as to avoid congestive collapse, by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks, and by taking resource-reducing steps such as reducing the rate at which packets are sent. For example, automatic repeat requests may keep the network in a congested state; this situation can be avoided by adding congestion avoidance to the flow control, including slow start, which keeps bandwidth consumption low at the beginning of a transmission or after a packet retransmission (a sketch of slow start follows this list).
• Multiplexing:
Ports provide multiple endpoints on a single node. For example, the name on a postal address is a kind of multiplexing: it distinguishes between different recipients at the same location. Computer applications each listen for information on their own ports, which enables the use of more than one network service at the same time. Multiplexing is part of the transport layer in the TCP/IP model, but of the session layer in the OSI model.
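As referenced in the congestion-avoidance item above, the following is a rough, illustrative sketch of slow start: the congestion window starts small and doubles each round trip until it reaches a threshold, after which growth becomes additive. The threshold and the number of round trips are assumed example values.

// Rough, illustrative sketch of slow start followed by additive increase.
// The threshold (ssthresh) and the number of round trips are assumed values.
public class SlowStartSketch {
    public static void main(String[] args) {
        int cwnd = 1;        // congestion window, in segments
        int ssthresh = 16;   // assumed slow-start threshold, in segments
        for (int rtt = 1; rtt <= 10; rtt++) {
            System.out.println("RTT " + rtt + ": cwnd = " + cwnd + " segments");
            if (cwnd < ssthresh) {
                cwnd *= 2;   // slow start: exponential growth per round trip
            } else {
                cwnd += 1;   // congestion avoidance: additive increase
            }
        }
    }
}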
For a client-server application over TCP, why must the server program be executed before the client program? TCP is known as a reliable protocol; describe how reliability is provided by TCP.
The server must be running first so that it is listening on its port and ready to accept the client's connection request; if the server is not running, the client's attempt to establish the connection fails.
TCP provides for the recovery of segments that get lost, are damaged, duplicated or received out of their correct order. TCP is described as a 'reliable' protocol because it attempts to recover from these errors. ... TCP also requires that an acknowledgement be returned after transmitting data.
What is a TCP connection? Explain how a TCP connection can be gracefully terminated.
Solution: The Transmission Control Protocol (TCP) is one of the most important protocols of the Internet protocol suite. It is the most widely used protocol for data transmission in communication networks such as the Internet.
There is no physical connection from the client to the server: a TCP connection is the logical association of the client's socket with the new socket created by the server after the three-way handshake. Once the "connection" is set up, the sockets on either end of the connection know where to send their packets.
For a client-server application over TCP, why must the server program be executed before the client program? [3,3]
In a client-server application over TCP, the server program is executed first because the server must be ready to accept the request from the client and serve it. If the server is not ready (not running), the client fails to establish the connection with the server.
TCP is described as a 'reliable' protocol because it attempts to recover from these errors. Sequencing is handled by labelling every segment with a sequence number. These sequence numbers permit TCP to detect dropped segments. TCP also requires that an acknowledgement be returned after transmitting data.
TCP is known as a reliable protocol; describe how reliability is provided by TCP.
TCP provides for the recovery of segments that get lost, are damaged, duplicated or received out of their correct order. TCP is described as a 'reliable' protocol because it attempts to recover from these errors. Sequencing is handled by labelling every segment with a sequence number. These sequence numbers permit TCP to detect dropped segments. TCP also requires that an acknowledgement be returned after transmitting data. To verify that segments are not damaged, a 16-bit checksum is computed over every segment that is sent and checked on every segment that is received. This check is end-to-end between sender and receiver; it is the IP header checksum, which covers the time-to-live field decremented during each forwarding cycle, that must be recalculated at each hop, not the TCP checksum. Segments that fail the checksum are discarded.
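For illustration, the 16-bit one's-complement (Internet) checksum that TCP and UDP use to detect corrupted segments can be sketched as follows; this is a generic sketch, not the code of any particular TCP stack, and the example bytes are arbitrary.

// Generic sketch of the 16-bit one's-complement (Internet) checksum used by
// TCP and UDP to detect corrupted segments.
public class InternetChecksum {
    public static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int high = data[i] & 0xFF;
            int low = (i + 1 < data.length) ? (data[i + 1] & 0xFF) : 0; // pad an odd length with zero
            sum += (high << 8) | low;                // add the next 16-bit word
            sum = (sum & 0xFFFF) + (sum >>> 16);     // fold any carry back into the low 16 bits
        }
        return (int) (~sum & 0xFFFF);                // one's complement of the folded sum
    }

    public static void main(String[] args) {
        byte[] segment = {0x45, 0x00, 0x00, 0x1C};   // arbitrary example bytes
        System.out.printf("checksum = 0x%04X%n", checksum(segment));
    }
}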
Discuss network congestion. Explain how different parameters affect congestion. [2+2]
Network congestion in data networking and queueing theory is the reduced quality of service
that occurs when a network node or link is carrying more data than it can handle. Typical effects
include queueing delay, packet loss or the blocking of new connections. A consequence of
congestion is that an incremental increase in offered load leads either only to a small increase or
even a decrease in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss due to
congestion can increase congestion, even after the initial load has been reduced to a level that
would not normally have induced network congestion. Such networks exhibit two stable states
under the same level of load. The stable state with low throughput is known as congestive
collapse.
The different parameters affecting congestion are:
• Queuing - Buffers on network devices are managed with various queuing techniques. Properly managed queues can minimize dropped packets and network congestion, as well as improve network performance.
• Congestion control and avoidance in TCP - designed to prevent overflowing the receiver's buffers, not the buffers of network nodes. Slow-start congestion control is a technique that requires a host to start its transmissions slowly and then build up to the point where congestion starts to occur.
• Fast retransmit and fast recovery - algorithms designed to minimize the effect that dropping packets has on network throughput.
• Active queue management (AQM) - a technique in which routers actively drop packets from queues as a signal to senders that they should slow down.
• RED (Random Early Discard) - an AQM scheme that uses statistical methods to drop packets in a "probabilistic" way before queues overflow (see the sketch below).
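A rough sketch of the RED drop decision mentioned in the last item is shown below; the thresholds and maximum drop probability are assumed example values, and the computation of the averaged queue length is left out.

import java.util.Random;

// Rough sketch of the RED drop decision. MIN_TH, MAX_TH and MAX_P are assumed
// example values; computing the averaged queue length is omitted here.
public class RedQueueSketch {
    private static final double MIN_TH = 20;   // below this average queue length, never drop
    private static final double MAX_TH = 60;   // at or above this, always drop
    private static final double MAX_P  = 0.1;  // drop probability reached at MAX_TH
    private final Random rng = new Random();

    // Decide whether an arriving packet should be dropped early,
    // given the (exponentially weighted) average queue length.
    public boolean shouldDrop(double avgQueueLen) {
        if (avgQueueLen < MIN_TH) return false;                          // light load: accept
        if (avgQueueLen >= MAX_TH) return true;                          // heavy load: drop
        double p = MAX_P * (avgQueueLen - MIN_TH) / (MAX_TH - MIN_TH);   // linear ramp between thresholds
        return rng.nextDouble() < p;                                     // probabilistic early drop
    }
}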
Connection establishment:
1. SYN: The active open is performed by the client sending a SYN to the server. The client/host A (see figure below) sets the segment’s sequence number to a random value X.
2. SYN-ACK: In response, the server/host B replies with a SYN-ACK. The
acknowledgment number is set to one more than the received sequence number
(X + 1), and the sequence number that the server/host B chooses for the packet is
another random number, Y.
3. ACK: Finally, the client/host A sends an ACK back to the server/host B. The
sequence number is set to the received acknowledgement value i.e. X + 1, and the
acknowledgement number is set to one more than the received sequence number
i.e. Y + 1.
At this point, both the client and server have received an acknowledgment of the
connection. The steps 1, 2 establish the connection parameter (sequence number) for
one direction and it is acknowledged. The steps 2, 3 establish the connection parameter
(sequence number) for the other direction and it is acknowledged. With these, a full-
duplex communication is established.
Figure: TCP connection establishment
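For example, with assumed initial sequence numbers X = 100 and Y = 300, the exchange is: SYN (seq = 100) from A to B, SYN-ACK (seq = 300, ack = 101) from B to A, and ACK (seq = 101, ack = 301) from A to B.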
Connection termination:
The connection termination phase uses a four-way handshake, with each side of the
connection terminating independently. When an endpoint wishes to stop its half of the
connection, it transmits a FIN packet, which the other end acknowledges with an ACK.
Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP
endpoint. After both FIN/ACK exchanges are concluded, the side which sent the first
FIN before receiving one waits for a timeout before finally closing the connection,
during which time the local port is unavailable for new connections; this prevents
confusion due to delayed packets being delivered during subsequent connections.
A connection can be “half-open”, in which case one side has terminated its end, but the
other has not. The side that has terminated can no longer send any data into the
connection, but the other side can. The terminating side should continue reading the data
until the other side terminates as well.
It is also possible to terminate the connection by a 3-way handshake, when host A sends
a FIN and host B replies with a FIN & ACK (merely combines 2 steps into one) and host
A replies with an ACK. This is perhaps the most common method.
It is possible for both hosts to send FINs simultaneously then both just have to ACK.
This could possibly be considered a 2-way handshake since the FIN/ACK sequence is
done in parallel for both directions.
Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-
UX do. If such a host actively closes a connection but still has not read all the incoming
data the stack already received from the link, this host sends a RST instead of a FIN.
This allows a TCP application to be sure the remote application has read all the data the former sent, waiting for the FIN from the remote side when it actively closes the connection. However, the remote TCP stack cannot distinguish between a Connection Aborting RST and this Data Loss RST. Both cause the remote stack to throw away all the data it received that the application has not yet read.
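From the application's point of view, the graceful (FIN/ACK) close described above can be driven with a half-close. A minimal Java sketch is shown below, assuming socket is an already connected java.net.Socket; shutdownOutput() causes the local TCP to send its FIN, and reading until end-of-stream waits for the peer's FIN.

import java.io.InputStream;
import java.net.Socket;

// Minimal sketch of a graceful close, assuming 'socket' is an already connected
// java.net.Socket. shutdownOutput() makes the local TCP send its FIN; reading
// until end-of-stream waits for the peer's FIN before the socket is closed.
public class GracefulClose {
    static void closeGracefully(Socket socket) throws Exception {
        socket.shutdownOutput();                 // half-close: no more data will be sent (FIN goes out)
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[1024];
        while (in.read(buf) != -1) {
            // drain any remaining data from the peer until it closes its side
        }
        socket.close();                          // both directions are now shut down
    }
}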
Define socket programming. How are web server communication and file server communication possible in a network? Explain with the protocols used.
Ans: Socket programming is a way of connecting two nodes on a network so that they can communicate with each other. One socket (node) listens on a particular port at an IP address, while the other socket reaches out to it to form a connection. The server forms the listener socket while the client reaches out to the server.
Sockets provide the communication mechanism between two computers using TCP. A client
program creates a socket on its end of the communication and attempts to connect that socket to
a server. When the connection is made, the server creates a socket object on its end of the
communication. The client and the server can now communicate by writing to and reading from
the socket. The java.net.Socket class represents a socket, and the java.net.ServerSocket class
provides a mechanism for the server program to listen for clients and establish connections with
them.
The following steps occur when establishing a TCP connection between two computers using
sockets –
• The server instantiates a ServerSocket object, denoting which port number
communication is to occur on.
• The server invokes the accept() method of the ServerSocket class. This method waits until a client connects to the server on the given port.
• While the server is waiting, a client instantiates a Socket object, specifying the server name and the port number to connect to.
• The constructor of the Socket class attempts to connect the client to the specified server
and the port number. If communication is established, the client now has a Socket object capable
of communicating with the server.
• On the server side, the accept() method returns a reference to a new socket on the server
that is connected to the client's socket.
After the connections are established, communication can occur using I/O streams. Each socket
has both an Output Stream and an Input Stream. The client's Output Stream is connected to the
server's Input Stream, and the client's Input Stream is connected to the server's Output Stream.
TCP is a two-way communication protocol, hence data can be sent across both streams at the same time. The java.net.Socket and java.net.ServerSocket classes mentioned above provide the methods needed to implement such sockets.
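A minimal sketch of the steps listed above, using the java.net.ServerSocket and java.net.Socket classes, is shown below; the port number 6000 and the echo behaviour are arbitrary examples.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal echo server: binds to a known port, accepts one client, reads a line
// and writes a reply. The port number 6000 is an arbitrary example.
public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket serverSocket = new ServerSocket(6000)) {       // step 1: bind to the port
            Socket client = serverSocket.accept();                       // step 2: block until a client connects
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
            String line = in.readLine();                                 // read one line from the client
            out.println("echo: " + line);                                // write the reply back
            client.close();
        }
    }
}

// Corresponding client: it must be run after the server, otherwise the connection is refused.
class EchoClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 6000)) {            // steps 3-4: connect to the server
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("hello");
            System.out.println(in.readLine());                           // prints "echo: hello"
        }
    }
}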
FTP
The File Transfer Protocol (FTP) is a standard network protocol used to transfer computer files between a client and a server on a computer network. FTP is built on a client-server architecture and uses separate control and data connections between the client and the server.
File Transfer Protocol (FTP) is a standard Internet protocol for transmitting files between
computers on the Internet over TCP/IP connections.
FTP is a client-server protocol that relies on two communications channels between client and
server: a command channel for controlling the conversation and a data channel for transmitting
file content. Clients initiate conversations with servers by requesting to download a file. Using FTP, a client can upload, download, delete, rename, move and copy files on a server. A user typically needs to log on to the FTP server, although some servers make some or all of their content available without a login; this is known as anonymous FTP.
FTP sessions work in passive or active modes. In active mode, after a client initiates a session via
a command channel request, the server initiates a data connection back to the client and begins
transferring data. In passive mode, the server instead uses the command channel to send the
client the information it needs to open a data channel. Because passive mode has the client
initiating all connections, it works well across firewalls and Network Address Translation (NAT)
gateways.
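As an illustration of passive mode, the sketch below uses the third-party Apache Commons Net library (assumed to be on the classpath); the host name, credentials and file name are placeholder examples.

import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTPClient;

// Sketch of a passive-mode FTP download using the third-party Apache Commons Net
// library (assumed to be available). Host, credentials and file name are placeholders.
public class FtpDownloadSketch {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");            // command (control) channel
        ftp.login("anonymous", "guest@example.com");
        ftp.enterLocalPassiveMode();               // client opens the data connection (firewall/NAT friendly)
        try (OutputStream out = new FileOutputStream("readme.txt")) {
            ftp.retrieveFile("readme.txt", out);   // file content flows over the data channel
        }
        ftp.logout();
        ftp.disconnect();
    }
}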
For the client-server application over TCP, why must the server program be executed before the client program? TCP is known as a reliable protocol; describe how reliability is provided by TCP.
Ans: In a client-server application over TCP, the server program must be executed before the client program because the server must already be listening on its port, ready to accept the client's connection request; if the server is not running, the client's attempt to establish a connection fails.
TCP is known as a reliable protocol. TCP provides for the recovery of segments that get lost, are damaged, duplicated or received out of their correct order. TCP is described as a 'reliable' protocol because it attempts to recover from these errors. Sequencing is handled by labelling every segment with a sequence number. These sequence numbers permit TCP to detect dropped segments. TCP also requires that an acknowledgement be returned after transmitting data.
To verify that segments are not damaged, a 16-bit checksum is computed over every segment that is sent and checked on every segment that is received. This check is end-to-end: it is the IP header checksum, which covers the time-to-live field decremented during each forwarding cycle, that is recalculated at each hop, not the TCP checksum. Segments that fail the checksum are discarded. Hence, TCP can be described as a reliable protocol.
6) DHCP decline –
If the DHCP client determines that the offered configuration parameters are different or invalid, it sends a DHCP decline message to the server. When any host replies to the client's gratuitous ARP, the client sends a DHCP decline message to the server indicating that the offered IP address is already in use.
7) DHCP release –
A DHCP client sends a DHCP release packet to the server to release its IP address and cancel any remaining lease time.
8) DHCP inform –
If a client has obtained an IP address manually, it uses a DHCP inform message to obtain other local configuration parameters, such as the domain name. In reply to the DHCP inform message, the DHCP server generates a DHCP ack message with local configuration suitable for the client, without allocating a new IP address. This DHCP ack message is unicast to the client.
Note - All of these messages can also be unicast by the DHCP relay agent if the server is present in a different network.
Advantages –
Disadvantages –
Discuss network congestion. Explain how different network parameters affect congestion.
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. Congestion, in the context of networks, refers to a network state where a node or link carries so much data that it may deteriorate network service quality, resulting in queuing delay, frame or data packet loss and the blocking of new connections. In a congested network, response time slows with reduced network throughput.
Congestion occurs when bandwidth is insufficient and network data traffic exceeds capacity.
Data packet loss from congestion is partially countered by aggressive retransmission at the protocol level, which can keep the network in a congested state even after the initial data load has been reduced. This can create two stable states under the same traffic load: one carrying the offered load normally and the other stuck at reduced network throughput (congestive collapse).
Effects of Congestion
• As delay increases, performance decreases.
• If delay increases, retransmissions occur, making the situation worse.
Causes of Congestion
• Over-subscription
• Poor network design/misconfiguration
• Over-utilized devices
• Faulty devices
• Security attacks
as 30 minutes. If a link changes state, the device that detects the change generates and propagates an update message regarding that link to all routers. Each router then takes a copy of the update message, updates its routing table and forwards the message to all neighbouring routers.
This flooding of the update message is needed to ensure that all routers update their database before creating an updated routing table that reflects the new topology. The OSPF protocol is an example of link-state routing.