Chapter 5 Yearwise Marking


Compare between leaky bucket and token bucket.

Ans.: The differences between leaky bucket and token bucket are given below:

Leaky Bucket | Token Bucket
Its only parameter is the rate. | Its parameters are the rate and the burst size (bucket depth).
It smooths traffic by sending packets at a constant rate and does not permit burstiness. | It smooths traffic but permits bursts equivalent to the number of tokens accumulated in the bucket.
If the bucket (queue) is full, arriving packets are discarded. | If the bucket is full, arriving tokens are discarded, but packets are never discarded.
Applications: traffic shaping or traffic policing. | Applications: network traffic shaping or rate limiting.

Explain the token bucket algorithm with its operation. [4,4,3,5,5]

The token bucket is an algorithm used in packet switched computer networks and
telecommunications networks. It can be used to check that data transmissions, in the form
of packets, conform to defined limits on bandwidth and burstiness (a measure of the
unevenness or variations in the traffic flow).

The token bucket algorithm can be conceptually understood as follows:

• A token is added to the bucket every 1/r seconds, where r is the token rate.

• The bucket can hold at most b tokens (the bucket size). If a token arrives when the bucket is full, it is discarded.
• When a packet (network layer PDU) of n bytes arrives:
o If at least n tokens are in the bucket, n tokens are removed from the bucket, and the packet is sent to the network.
o If fewer than n tokens are available, no tokens are removed from the bucket, and the packet is considered to be non-conformant.
The token bucket algorithm allows idle hosts to accumulate credit for the future in the form of
tokens i.e. for each tick of the clock, the system sends n tokens to the bucket and the system
removes one token for every cell (or byte) of data sent. In other words, the host can send bursty
data as long as the bucket is not empty. The token bucket can easily be implemented with a counter.
The counter is initialized to zero; each time a token is added, the counter is incremented by 1.
Again, each time a unit of data is sent, the counter is decremented by 1, and when the counter is
zero, the host cannot send data.
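
The counter description above translates directly into a small class. The sketch below is illustrative only: the names TokenBucket, capacity (b) and refillRate (r) are my own, tokens are refilled lazily from the elapsed time rather than by an explicit clock tick, and a packet of n bytes is treated as needing n tokens.

public class TokenBucket {
    private final long capacity;      // b: the most tokens the bucket can hold
    private final double refillRate;  // r: tokens added per second (one token every 1/r seconds)
    private double tokens;            // the counter described above
    private long lastRefillNanos;

    public TokenBucket(long capacity, double refillRate) {
        this.capacity = capacity;
        this.refillRate = refillRate;
        this.tokens = 0;              // counter initialized to zero
        this.lastRefillNanos = System.nanoTime();
    }

    // Add the tokens that have accumulated since the last call, never exceeding capacity.
    private void refill() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(capacity, tokens + elapsedSeconds * refillRate);
        lastRefillNanos = now;
    }

    // Returns true (conformant) and removes n tokens if they are available;
    // otherwise returns false (non-conformant) and removes nothing.
    public synchronized boolean tryConsume(long n) {
        refill();
        if (tokens >= n) {
            tokens -= n;
            return true;
        }
        return false;
    }
}

A sender would call tryConsume(packetLength) before transmitting and treat a false result as a non-conformant packet, to be dropped, queued or marked depending on policy.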

TCP Header[5,4]
The Transmission Control Protocol (TCP) is one of the most important protocols of the Internet
protocol suite. It is the most widely used protocol for data transmission in communication
networks such as the Internet. The TCP header is a minimum of 20 bytes and a maximum of
60 bytes long.

Its different components are described below:


- Source Port (16-bits) - It identifies the source port of the application process on the
sending device.
- Destination Port (16-bits) - It identifies the destination port of the application process
on the receiving device.
- Sequence Number (32-bits) - Sequence number of the data bytes of a segment in a
session.
- Acknowledgement Number (32-bits) - When the ACK flag is set, this number contains
the next sequence number of the data byte expected and works as an acknowledgement
of the previous data received.
- Data Offset (4-bits) - This field specifies the size of the TCP header in 32-bit words,
and hence the offset at which the data begins in the current TCP segment.
- Reserved (3-bits) - Reserved for future use; all bits are set to zero by default.
- Flags (1-bit each)
o NS - Nonce Sum bit is used by Explicit Congestion Notification signaling
process.
o CWR - When a host receives packet with ECE bit set, it sets Congestion
Windows Reduced to acknowledge that ECE received.
o ECE - It has two meanings:
▪ If the SYN bit is cleared to 0, ECE means that the IP packet has its CE
(congestion experienced) bit set.
▪ If the SYN bit is set to 1, ECE means that the device is ECT (ECN-capable transport) capable.
o URG - It indicates that Urgent Pointer field has significant data and should
be processed.
o ACK - It indicates that Acknowledgement field has significance. If ACK is
cleared to 0, it indicates that packet does not contain any acknowledgement.
o PSH - When set, it is a request to the receiving station to PUSH data (as soon
as it comes) to the receiving application without buffering it.
o RST - Reset flag has the following features:
 It is used to refuse an incoming connection.
 It is used to reject a segment.
 It is used to restart a connection.
o SYN - This flag is used to set up a connection between hosts.
o FIN - This flag is used to release a connection and no more data is exchanged
thereafter. Because packets with SYN and FIN flags have sequence numbers,
they are processed in correct order.
- Window Size - This field is used for flow control between two stations and indicates
the amount of buffer space (in bytes) the receiver has allocated for a segment, i.e. how
much data the receiver is expecting.
- Checksum - This field contains the checksum of Header, Data and Pseudo Headers.
- Urgent Pointer - It points to the urgent data byte if URG flag is set to 1.
- Options - It facilitates additional options which are not covered by the regular
header. Option field is always described in 32-bit words. If this field contains data
less than 32-bit, padding is used to cover the remaining bits to reach 32-bit boundary.
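
To make these field offsets concrete, here is an illustrative sketch that extracts the fixed 20-byte header fields from a raw byte array using java.nio.ByteBuffer; the sample values in main are invented, and options parsing is omitted.

import java.nio.ByteBuffer;

public class TcpHeaderSketch {
    public static void parse(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw);               // network byte order by default
        int sourcePort      = buf.getShort(0) & 0xFFFF;      // bytes 0-1
        int destinationPort = buf.getShort(2) & 0xFFFF;      // bytes 2-3
        long sequenceNumber = buf.getInt(4) & 0xFFFFFFFFL;   // bytes 4-7
        long ackNumber      = buf.getInt(8) & 0xFFFFFFFFL;   // bytes 8-11
        int dataOffsetWords = (buf.get(12) >> 4) & 0x0F;     // upper 4 bits of byte 12
        int flags = ((buf.get(12) & 0x01) << 8) | (buf.get(13) & 0xFF); // NS bit + 8 flag bits
        int windowSize      = buf.getShort(14) & 0xFFFF;     // bytes 14-15
        int checksum        = buf.getShort(16) & 0xFFFF;     // bytes 16-17
        int urgentPointer   = buf.getShort(18) & 0xFFFF;     // bytes 18-19
        System.out.printf("src=%d dst=%d seq=%d ack=%d headerLen=%d bytes flags=0x%03x win=%d checksum=0x%04x urg=%d%n",
                sourcePort, destinationPort, sequenceNumber, ackNumber,
                dataOffsetWords * 4, flags, windowSize, checksum, urgentPointer);
    }

    public static void main(String[] args) {
        ByteBuffer sample = ByteBuffer.allocate(20);          // a made-up header with no options
        sample.putShort(0, (short) 49152);                    // source port
        sample.putShort(2, (short) 80);                       // destination port
        sample.putInt(4, 100);                                // sequence number
        sample.putInt(8, 301);                                // acknowledgement number
        sample.put(12, (byte) (5 << 4));                      // data offset = 5 words = 20 bytes
        sample.put(13, (byte) 0x18);                          // flags: PSH + ACK
        sample.putShort(14, (short) 65535);                   // window size
        parse(sample.array());
    }
}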

Explain UDP segment structures. Illustrate your answer with appropriate figures.[8]

Ans: The User Datagram Protocol (UDP) is the simplest transport layer communication protocol
in the TCP/IP protocol suite. It involves a minimal amount of communication mechanism. UDP
is said to be an unreliable transport protocol, but it uses IP services, which provide a best-effort
delivery mechanism. In UDP, the receiver does not generate an acknowledgement for a received
packet and, in turn, the sender does not wait for any acknowledgement of a packet sent. This
shortcoming makes the protocol unreliable but also lighter on processing.
Features
• UDP is used when acknowledgement of data does not hold any significance.
• UDP is good protocol for data flowing in one direction.
• UDP is simple and suitable for query based communications.
• UDP is not connection oriented.
• UDP does not provide congestion control mechanism.
• UDP does not guarantee ordered delivery of data.
• UDP is stateless.
• UDP is suitable protocol for streaming applications such as VoIP, multimedia streaming.
UDP Header
The UDP header is as simple as its function. It contains four main fields:
• Source Port - This 16-bit field is used to identify the source port of the packet.
• Destination Port - This 16-bit field is used to identify the application-level service on the
destination machine.
• Length - The length field specifies the entire length of the UDP packet (including the header).
It is a 16-bit field, and its minimum value is 8 bytes, i.e. the size of the UDP header itself.

• Checksum - This field stores the checksum value generated by the sender before sending. It is
optional in IPv4; when it is not used, the field is set to all zeros.
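
Because there is no connection setup and no acknowledgement, sending a UDP datagram is a single operation. The following is a minimal sketch using Java's DatagramSocket; the destination "localhost" and port 9876 are placeholders, not values taken from the text.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSendSketch {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {          // no handshake, no state
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length, InetAddress.getByName("localhost"), 9876);
            socket.send(packet);   // fire-and-forget: no ACK is expected from the receiver
        }
    }
}

A receiver would simply bind a DatagramSocket to port 9876 and call receive(); whether the datagram arrives, and in what order relative to others, is not guaranteed.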
UDP application
Here are a few applications where UDP is used to transmit data:
• Domain Name Services
• Simple Network Management Protocol
• Trivial File Transfer Protocol
• Routing Information Protocol

Why is a port number used in networking? What are the services of the transport layer? [1+2]

A port number is a way to identify a specific process to which an Internet or other network message
is to be forwarded when it arrives at a server. For the Transmission Control Protocol and the User
Datagram Protocol, a port number is a 16-bit integer that is put in the header appended to a message
unit. This port number is passed logically between the client and server transport layers and physically
between the transport layer and the Internet Protocol layer, and is forwarded on. In this way, port
numbers let arriving data be delivered to the correct application process, which is why they are used in networking.
2nd part
Transport layer services are conveyed to an application via a programming interface to the
transport layer protocols. The services may include the following features:
• Connection-oriented communication:
It is normally easier for an application to interpret a connection as a Data Stream rather
than having to deal with the underlying connection-less models, such as the datagram
model of the User Datagram Protocol (UDP) and of the Internet Protocol (IP).
• Same order delivery:

The network layer doesn't generally guarantee that packets of data will arrive in the same
order that they were sent, but often this is a desirable feature. This is usually done through
the use of segment numbering, with the receiver passing them to the application in order.
This can cause head-of-line blocking.

• Reliability:

Packets may be lost during transport due to network congestion and errors. By means of
an error detection code, such as a checksum, the transport protocol may check that the data
is not corrupted, and verify correct receipt by sending an ACK or NACK message to the
sender. Automatic repeat request schemes may be used to retransmit lost or corrupted data.

• Flow control:

The rate of data transmission between two nodes must sometimes be managed to prevent a fast
sender from transmitting more data than can be supported by the receiving data buffer, causing
a buffer overrun. This can also be used to improve efficiency by reducing buffer underrun.

• Congestion avoidance:

Congestion control can control traffic entry into a telecommunications network, so as to avoid
congestive collapse by attempting to avoid oversubscription of any of the processing or link
capabilities of the intermediate nodes and networks and taking resource reducing steps, such
as reducing the rate of sending packets. For example, automatic repeat requests may keep the
network in a congested state; this situation can be avoided by adding congestion avoidance to
the flow control, including slow-start. This keeps the bandwidth consumption at a low level in
the beginning of the transmission, or after packet retransmission.

• Multiplexing:

Ports can provide multiple endpoints on a single node. For example, the name on a postal address
is a kind of multiplexing, and distinguishes between different recipients of the same location.
Computer applications will each listen for information on their own ports, which enables the use
of more than one network service at the same time. It is part of the transport layer in the TCP/IP
model, but of the session layer in the OSI model.
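
As a small illustration of port-based multiplexing, the sketch below runs two independent listeners in one process on two hypothetical ports (8080 and 2121); the operating system delivers each incoming connection to the listener whose port matches the destination port in the segment header.

import java.net.ServerSocket;
import java.net.Socket;

public class MultiplexSketch {
    public static void main(String[] args) throws Exception {
        startService(8080, "web");    // hypothetical "web" service
        startService(2121, "file");   // hypothetical "file" service
    }

    private static void startService(int port, String name) throws Exception {
        ServerSocket listener = new ServerSocket(port);   // bind this service to its own port
        new Thread(() -> {
            while (true) {
                try (Socket client = listener.accept()) { // demultiplexed by destination port
                    System.out.println(name + " service accepted a connection on port " + port);
                } catch (Exception e) {
                    break;
                }
            }
        }).start();
    }
}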

For a client-server application over TCP, why must the server program be executed
before the client program? TCP is known as a reliable protocol; describe how reliability
is provided by TCP.

In a client-server application over TCP (Transmission Control Protocol), the server
program must be executed before the client program for the following reason: as TCP is a
connection-oriented protocol, a connection must be established between the client and the
server before they can communicate with each other, and only a running server can accept
that connection.

TCP provides for the recovery of segments that get lost, are damaged, duplicated or
received out of their correct order. TCP is described as a 'reliable' protocol because it
attempts to recover from these errors. ... TCP also requires that an acknowledgement message
be returned after transmitting data.

What is a TCP connection? Explain how a TCP connection can be gracefully terminated.
Solution: Transmission Control Protocol (TCP) is one of the most important protocols of the Internet
protocol suite. It is the most widely used protocol for data transmission in communication networks
such as the Internet.
There isn't a physical connection from the client to the server; the connection is a logical association
between the client's socket and the new socket created by the server during the three-way handshake.
Once the connection is set up, the sockets on either end of the connection know where to
send their packets.

Connection termination in TCP using three-way handshaking:


1. In a normal situation, the client TCP, after receiving a close command from the client
process, sends the first segment, a FIN segment in which the FIN flag is set. Note that a
FIN segment can include the last chunk of data sent by the client, or it can be just a control
segment. If it is only a control segment, it consumes only one sequence number.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and
sends the second segment, a FIN +ACK segment, to confirm the receipt of the FIN segment
from the client and at the same time to announce the closing of the connection in the other
direction. This segment can also contain the last chunk of data from the server. If it does
not carry data, it consumes only one sequence number.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN
segment from the TCP server. This segment contains the acknowledgment number, which
is 1 plus the sequence number received in the FIN segment from the server. This segment
cannot carry data and consumes no sequence numbers.
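
At the application level, the FIN in step 1 is triggered when the client closes the socket or shuts down its sending direction. The sketch below is illustrative only ("example.com" and port 80 are placeholder values): Socket.shutdownOutput() sends our FIN while the input side stays open, so the program keeps reading until the peer closes its half of the connection.

import java.io.InputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class GracefulCloseSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            socket.getOutputStream().write(
                    "GET / HTTP/1.0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
            socket.shutdownOutput();                 // our FIN: "no more data from this side"
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[4096];
            while (in.read(buffer) != -1) {          // keep reading until the peer's FIN arrives
                // consume the remaining response
            }
        }                                            // close() releases the local resources
    }
}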

Explain the difference between TCP and UDP[3,4,5]

Difference between TCP and UDP is listed below:


S.N. | TCP | UDP
1. | TCP is a connection-oriented protocol. Connection-orientation means that the communicating devices should establish a connection before transmitting data and should close the connection after transmitting the data. | UDP is a datagram-oriented protocol. There is no overhead for opening a connection, maintaining a connection, and terminating a connection. UDP is efficient for broadcast and multicast types of network transmission.
2. | TCP is reliable as it guarantees delivery of data to the destination. | The delivery of data to the destination cannot be guaranteed in UDP.
3. | TCP provides extensive error checking mechanisms, because it provides flow control and acknowledgment of data. | UDP has only a basic error checking mechanism using checksums.
4. | Sequencing of data is a feature of TCP; this means that packets arrive in order at the receiver. | There is no sequencing of data in UDP. If ordering is required, it has to be managed by the application layer.
5. | TCP is slower than UDP. | UDP is faster, simpler and more efficient than TCP.
6. | TCP has a variable-length header of 20-60 bytes. | UDP has an 8-byte fixed-length header.
7. | TCP doesn't support broadcasting. | UDP supports broadcasting.
8. | TCP is heavy-weight. | UDP is lightweight.

Congestion handling using token bucket: Congestion in a network may occur if the load on the
network (the number of packets sent) is greater than the capacity of the network (the number of
packets it can handle). Congestion control refers to the mechanisms and techniques used to
control congestion and keep the load below the capacity.

For a client-server application over TCP, why must the server program be executed before
the client program? [3,3]

When a client-server application runs over TCP, the server program is executed first, because the
server must be ready to accept the request from the client before the client's program can be served.
If the server is not ready (not running), then the client fails to establish the connection with the server.
TCP is described as a 'reliable' protocol because it attempts to recover from transmission errors. The
sequencing is handled by labeling every segment with a sequence number. These sequence numbers
permit TCP to detect dropped segments. TCP also requires that an acknowledgement message be
returned after transmitting data.
TCP is known as a reliable protocol; describe how reliability is provided by TCP.

TCP provides for the recovery of segments that get lost, are damaged, duplicated or received out
of their correct order. TCP is described as a 'reliable' protocol because it attempts to recover from
these errors. The sequencing is handled by labeling every segment with a sequence number. These
sequence numbers permit TCP to detect dropped segments. TCP also requires that an acknowledgement
message be returned after transmitting data. To verify that segments are not damaged, a checksum is
computed over every segment that is sent and checked on every segment that is received; segments
that fail the check are discarded. (Strictly, it is the IP header checksum, not the TCP checksum, that
must be recalculated at each hop, because the time-to-live field in the IP header is decremented during
each forwarding cycle; the TCP checksum is end-to-end.)

Discuss network congestion. Explain how different parameters affect congestion. [2+2]
Network congestion in data networking and queueing theory is the reduced quality of service
that occurs when a network node or link is carrying more data than it can handle. Typical effects
include queueing delay, packet loss or the blocking of new connections. A consequence of
congestion is that an incremental increase in offered load leads either only to a small increase or
even a decrease in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss due to
congestion can increase congestion, even after the initial load has been reduced to a level that
would not normally have induced network congestion. Such networks exhibit two stable states
under the same level of load. The stable state with low throughput is known as congestive
collapse.
The different parameters effecting the congestion are:

• Queuing -- Buffers on network devices are managed with various queuing techniques. And,
properly managed queues can minimize dropped packets and network congestion, as well as
improve network performance.

• Congestion control in frame relay — implements two congestion avoidance mechanisms:
BECN (backward explicit congestion notification) and FECN (forward explicit congestion
notification).

• Congestion Control and Avoidance in TCP — designed to prevent overflowing the receiver's
buffers, not the buffers of network nodes. Slow start congestion control — a technique that
requires a host to start its transmissions slowly and then build up to the point where congestion
starts to occur.

• Fast retransmit and fast recovery — algorithms designed to minimize the effect that dropping
packets has on network throughput.
• Active queue management (AQM) -- a technique in which routers actively drop packets from
queues as a signal to senders that they should slow down.

• RED (Random Early Discard) – an active queue management (AQM) scheme that uses
statistical methods to drop packets in a "probabilistic" way before queues overflow (a minimal
code sketch of this idea follows this list).

• ECN (explicit congestion notification) — a technique applicable in congestion avoidance
mechanisms.
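
The sketch below illustrates the RED idea from the list above with assumed, untuned thresholds: no drops below a minimum average queue length, certain drops above a maximum, and a linearly increasing drop probability in between. Real RED additionally smooths the instantaneous queue length with an exponentially weighted moving average.

import java.util.Random;

public class RedSketch {
    static final int MIN_THRESHOLD = 20;     // packets (assumed value)
    static final int MAX_THRESHOLD = 80;     // packets (assumed value)
    static final double MAX_DROP_P = 0.1;    // drop probability at MAX_THRESHOLD (assumed)
    static final Random RNG = new Random();

    // Decide whether an arriving packet should be dropped, given the average queue length.
    static boolean shouldDrop(double avgQueueLength) {
        if (avgQueueLength < MIN_THRESHOLD) return false;   // queue is short: never drop
        if (avgQueueLength >= MAX_THRESHOLD) return true;   // queue is long: always drop
        double p = MAX_DROP_P * (avgQueueLength - MIN_THRESHOLD)
                              / (MAX_THRESHOLD - MIN_THRESHOLD);
        return RNG.nextDouble() < p;                        // drop probabilistically in between
    }

    public static void main(String[] args) {
        for (int q = 0; q <= 100; q += 20) {
            System.out.println("avg queue " + q + " -> drop? " + shouldDrop(q));
        }
    }
}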

6. How is a connection established and released in TCP? [5,4]

Connection establishment:

To establish a connection, TCP uses a three-way handshake. Before a client attempts to
connect with a server, the server must first bind to and listen at a port to open it up for
connections: this is called a passive open. Once the passive open is established, a client
may initiate an active open. To establish a connection, the three-way (or 3-step)
handshake occurs:

1. SYN: The active open is performed by the client sending a SYN to the server. The
client/host A ( see figure below) sets the segment’s sequence number to a random
value X.
2. SYN-ACK: In response, the server/host B replies with a SYN-ACK. The
acknowledgment number is set to one more than the received sequence number
(X + 1), and the sequence number that the server/host B chooses for the packet is
another random number, Y.
3. ACK: Finally, the client/host A sends an ACK back to the server/host B. The
sequence number is set to the received acknowledgement value i.e. X + 1, and the
acknowledgement number is set to one more than the received sequence number
i.e. Y + 1.

At this point, both the client and server have received an acknowledgment of the
connection. The steps 1, 2 establish the connection parameter (sequence number) for
one direction and it is acknowledged. The steps 2, 3 establish the connection parameter
(sequence number) for the other direction and it is acknowledged. With these, a full-
duplex communication is established.
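
For example (with made-up numbers): if the client picks X = 100 and the server picks Y = 300, the SYN carries seq = 100, the SYN-ACK carries seq = 300 and ack = 101, and the final ACK carries seq = 101 and ack = 301.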
Figure: TCP connection establishment

Connection termination:

The connection termination phase uses a four-way handshake, with each side of the
connection terminating independently. When an endpoint wishes to stop its half of the
connection, it transmits a FIN packet, which the other end acknowledges with an ACK.
Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP
endpoint. After both FIN/ACK exchanges are concluded, the side which sent the first
FIN before receiving one waits for a timeout before finally closing the connection,
during which time the local port is unavailable for new connections; this prevents
confusion due to delayed packets being delivered during subsequent connections.

A connection can be “half-open”, in which case one side has terminated its end, but the
other has not. The side that has terminated can no longer send any data into the
connection, but the other side can. The terminating side should continue reading the data
until the other side terminates as well.

It is also possible to terminate the connection by a 3-way handshake, when host A sends
a FIN and host B replies with a FIN & ACK (merely combines 2 steps into one) and host
A replies with an ACK. This is perhaps the most common method.
It is possible for both hosts to send FINs simultaneously then both just have to ACK.
This could possibly be considered a 2-way handshake since the FIN/ACK sequence is
done in parallel for both directions.
Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-
UX do. If such a host actively closes a connection but still has not read all the incoming
data the stack already received from the link, this host sends a RST instead of a FIN.
This allows a TCP application to be sure the remote application has read all the data the
former sent, waiting for the FIN from the remote side when it actively closes the
connection. However, the remote TCP stack cannot distinguish between a Connection
Aborting RST and this Data Loss RST. Both cause the remote stack to throw away all
the data it has received but that the application has not yet read.

Figure: TCP connection termination


Explain the TCP datagram format in detail. [5]


Ans:

The TCP datagram format can be explained as below:


Header
The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.
• Source Port (16-bits) - It identifies source port of the application process on the sending device.
• Destination Port (16-bits) - It identifies destination port of the application process on the
receiving device.
• Sequence Number (32-bits) - Sequence number of data bytes of a segment in a session.
• Acknowledgement Number (32-bits) - When ACK flag is set, this number contains the next
sequence number of the data byte expected and works as acknowledgement of the previous data
received.
 Data Offset (4-bits) - This field implies both, the size of TCP header (32-bit words) and the
offset of data in current packet in the whole TCP segment.
 Reserved (3-bits) - Reserved for future use and all are set zero by default
 Flags (1-bit each)
o NS - Nonce Sum bit is used by Explicit Congestion Notification signaling process.
o CWR - When a host receives packet with ECE bit set, it sets Congestion Windows
Reduced to acknowledge that ECE received.
o ECE -It has two meanings:
▪ If the SYN bit is cleared to 0, ECE means that the IP packet has its CE (congestion
experienced) bit set.
▪ If the SYN bit is set to 1, ECE means that the device is ECT (ECN-capable transport) capable.
o URG - It indicates that the Urgent Pointer field has significant data and should be processed.
o ACK - It indicates that Acknowledgement field has significance. If ACK is cleared to 0, it
indicates that packet does not contain any acknowledgement.
o PSH - When set, it is a request to the receiving station to PUSH data (as soon as it
comes) to the receiving application without buffering it.
o RST - Reset flag has the following features:
▪ It is used to refuse an incoming connection.
▪ It is used to reject a segment.
▪ It is used to restart a connection.
o SYN - This flag is used to set up a connection between hosts.
o FIN - This flag is used to release a connection and no more data is exchanged
thereafter. Because packets with SYN and FIN flags have sequence numbers, they are processed
in correct order.
• Window Size - This field is used for flow control between two stations and indicates the
amount of buffer space (in bytes) the receiver has allocated for a segment, i.e. how much data
the receiver is expecting.
• Checksum - This field contains the checksum of Header, Data and Pseudo Headers.
• Urgent Pointer - It points to the urgent data byte if URG flag is set to 1.
• Options - It facilitates additional options which are not covered by the regular header. Option
field is always described in 32-bit words. If this field contains data less than 32- bit, padding is
used to cover the remaining bits to reach 32-bit boundary.

Define socket programming. How are web server communication and file server
communication possible in a network? Explain with the protocols used.

Ans: Socket programming is a way of connecting two nodes on a network so that they can
communicate with each other. One socket (node) listens on a particular port at an IP address,
while the other socket reaches out to it to form a connection. The server forms the listener
socket while the client reaches out to the server.
Sockets provide the communication mechanism between two computers using TCP. A client
program creates a socket on its end of the communication and attempts to connect that socket to
a server. When the connection is made, the server creates a socket object on its end of the
communication. The client and the server can now communicate by writing to and reading from
the socket. The java.net.Socket class represents a socket, and the java.net.ServerSocket class
provides a mechanism for the server program to listen for clients and establish connections with
them.
The following steps occur when establishing a TCP connection between two computers using
sockets –
• The server instantiates a ServerSocket object, denoting which port number
communication is to occur on.
• The server invokes the accept() method of the ServerSocket class. This method waits
until a client connects to the server on the given port.
• After the server is waiting, a client instantiates a Socket object, specifying the server
name and the port number to connect to.
• The constructor of the Socket class attempts to connect the client to the specified server
and the port number. If communication is established, the client now has a Socket object capable
of communicating with the server.
• On the server side, the accept() method returns a reference to a new socket on the server
that is connected to the client's socket.
After the connections are established, communication can occur using I/O streams. Each socket
has both an Output Stream and an Input Stream. The client's Output Stream is connected to the
server's Input Stream, and the client's Input Stream is connected to the server's Output Stream.
TCP is a two-way communication protocol, hence data can be sent across both streams at the
same time. The java.net.Socket and java.net.ServerSocket classes mentioned above provide a
complete set of methods to implement sockets.
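
The steps above can be condensed into a minimal echo pair. The sketch below is illustrative only: port 6789 is arbitrary, both programs are assumed to run on the same machine, and error handling is omitted. Run EchoServer first, then EchoClient.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(6789);      // bind and listen (passive open)
             Socket connection = listener.accept()) {             // wait for a client to connect
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            PrintWriter out = new PrintWriter(connection.getOutputStream(), true);
            out.println("echo: " + in.readLine());                // read the request, write a reply
        }
    }
}

class EchoClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 6789)) {     // active open: connect to server
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("hello");                                 // write to the output stream
            System.out.println(in.readLine());                    // read the server's reply
        }
    }
}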
FTP The File Transfer Protocol (FTP) is a standard network protocol used to transfer computer
files between a client and server on a computer network. FTP is built on client-server model
architecture and uses separate control and data connections between the client and the server.
File Transfer Protocol (FTP) is a standard Internet protocol for transmitting files between
computers on the Internet over TCP/IP connections.
FTP is a client-server protocol that relies on two communications channels between client and
server: a command channel for controlling the conversation and a data channel for transmitting
file content. Clients initiate conversations with servers by requesting to download a file. Using
FTP, a client can upload, download, delete, rename, move and copy files on a server. A user
typically needs to log on to the FTP server, although some servers make some or all of their
content available without login, also known as anonymous FTP.
FTP sessions work in passive or active modes. In active mode, after a client initiates a session via
a command channel request, the server initiates a data connection back to the client and begins
transferring data. In passive mode, the server instead uses the command channel to send the
client the information it needs to open a data channel. Because passive mode has the client
initiating all connections, it works well across firewalls and Network Address Translation (NAT)
gateways.
For the client-server application over TCP, why must the server program be executed
before the client program? TCP is known as a reliable protocol; describe how reliability is
provided by TCP.
Ans: In a client-server application over TCP, the server program must be executed before the
client program due to the following reasons:

 As TCP is a connection-oriented protocol, a connection must be established between the
server and the client before they can communicate with each other.
 When a client-server application runs over TCP, the server program is executed first, because
the server must accept the request from the client and be ready to serve the client's program.
 If the server is not ready (not running), then the client fails to establish the connection
with the server.

TCP is known as a reliable protocol. TCP provides for the recovery of segments that get lost, are
damaged, duplicated or received out of their correct order. TCP is described as 'reliable' because it
attempts to recover from these errors. The sequencing is handled by labeling every segment with a
sequence number. These sequence numbers permit TCP to detect dropped segments. TCP also
requires that an acknowledgement message be returned after transmitting data.

To verify that segments are not damaged, a checksum is computed over every segment that is sent
and checked on every segment that is received; segments that fail the check are discarded. (Strictly,
it is the IP header checksum, not the TCP checksum, that must be recalculated at each hop, because
the time-to-live field in the IP header is decremented during each forwarding cycle; the TCP
checksum is end-to-end.) Hence, TCP can be described as a reliable protocol.

Write short notes on:


DHCP [4+4]
Dynamic Host Configuration Protocol (DHCP) is an application layer protocol which is
used to provide:
 Subnet Mask (Option 1 – e.g., 255.255.255.0)
 Router Address (Option 3 – e.g., 192.168.1.1)
 DNS Address (Option 6 – e.g., 8.8.8.8)
 Vendor Class Identifier (Option 43 – e.g., ‘unifi’ = 192.168.1.9 ##where unifi
= controller)
DHCP is based on a client-server model and on the exchange of discover, offer, request, and ACK
messages. The DHCP port number for the server is 67 and for the client is 68. It is a client-server
protocol which uses UDP services. The IP address is assigned from a pool of addresses. In DHCP,
the client and the server exchange mainly 4 DHCP messages in order to make a connection, also
called the DORA process, although there are 8 DHCP message types in total. These messages
are given below:
1) DHCP discover message –
This is the first message generated in the communication process between the server and the
client. This message is generated by the client host in order to discover whether any DHCP
server is present in the network. The message is broadcast to all devices present in the network
to find the DHCP server. This message is 342 or 576 bytes long.

2) DHCP offer message –


The server responds to the host with this message, specifying an unleased IP address and
other TCP/IP configuration information. This message is broadcast by the server. The size of
the message is 342 bytes. If more than one DHCP server is present in the network, the client
host accepts the first DHCP OFFER message it receives. A server ID is also included in the
packet in order to identify the server.

3) DHCP request message –


When a client receives an offer message, it responds by broadcasting a DHCP request
message. The client also issues a gratuitous ARP to find out whether any other host in the
network is using the same IP address. If no other host replies, then no host with the same
configuration exists in the network, and the message is broadcast to the server indicating
acceptance of the IP address. A client ID is also added to this message.

4) DHCP acknowledgement message –


In response to the request message received, the server makes an entry with the
specified client ID and binds the offered IP address with a lease time. The client
will now have the IP address provided by the server.

5) DHCP negative acknowledgement message –


Whenever a DHCP server receives a request for an IP address that is invalid according to the
scopes it is configured with, it sends a DHCP NAK message to the client, e.g. when the server
has no unused IP address or the pool is empty.

6) DHCP decline –
If the DHCP client determines that the offered configuration parameters are different or
invalid, it sends a DHCP decline message to the server. When any host replies to the client's
gratuitous ARP, the client sends a DHCP decline message to the server, indicating that the
offered IP address is already in use.

7) DHCP release –
A DHCP client sends DHCP release packet to server to release IP address and cancel
any remaining lease time.

8) DHCP inform –
If a client has obtained an IP address manually, the client uses a DHCP inform message
to obtain other local configuration parameters, such as the domain name. In reply to the
DHCP inform message, the DHCP server generates a DHCP ack message with the local
configuration suitable for the client without allocating a new IP address. This DHCP
ack message is unicast to the client.

Note – All of these messages can also be unicast by a DHCP relay agent if the server is
present in a different network.
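
As a toy illustration of the DORA order only (no real DHCP packet format, ports 67/68, broadcasting or lease handling), the sketch below simply walks through the four mandatory message types in sequence; the enum and class names are invented for this example.

public class DoraSketch {
    enum DhcpMessage { DISCOVER, OFFER, REQUEST, ACK }

    public static void main(String[] args) {
        DhcpMessage[] exchange = {
                DhcpMessage.DISCOVER,  // client broadcast: is any DHCP server present?
                DhcpMessage.OFFER,     // server: a proposed (unleased) IP address and parameters
                DhcpMessage.REQUEST,   // client broadcast: accepts the offered address
                DhcpMessage.ACK        // server: binds the lease; the client may now use the IP
        };
        for (DhcpMessage message : exchange) {
            System.out.println(message);
        }
    }
}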

Advantages –

The advantages of using DHCP include:

• centralized management of IP addresses


• ease of adding new clients to a network
• reuse of IP addresses reducing the total number of IP addresses that are
required
• simple reconfiguration of the IP address space on the DHCP server without
needing to reconfigure each client
The DHCP protocol gives the network administrator a method to configure the
network from a centralised area. With the help of DHCP, easy handling of new users
and reuse of IP addresses can be achieved.

Disadvantages –

Disadvantage of using DHCP is:

• IP conflict can occur

Discuss network congestion. Explain how different network parameters affect
congestion.
Network congestion in data networking and queueing theory is the reduced quality of service that
occurs when a network node or link is carrying more data than it can handle. Typical effects include
queueing delay, frame or data packet loss and the blocking of new connections. In a congested
network, response time slows with reduced network throughput. Congestion occurs when bandwidth
is insufficient and network data traffic exceeds capacity.

Data packet loss from congestion is partially countered by aggressive network protocol
retransmission, which maintains a network congestion state after reducing the initial data load.
This can create two stable states under the same data traffic load - one dealing with the initial load
and the other maintaining reduced network throughput.

Effects of Congestion
 As delay increases, performance decreases.
 If delay increases, retransmission occurs, making the situation worse.

Causes of Congestion
 Over-subscription
 Poor network design/mis-configuration
 Over-utilized devices
 Faulty devices
 Security attack

as 30 minutes. If a link alters state, the device that detected the alteration generates and propagates an
update message regarding that link to all routers. Each router then takes a copy of the update
message, updates its routing table, and forwards the message to all neighbouring routers.

This flooding of the update message is needed to ensure that all routers update their database before
creating an updated routing table that reflects the new topology. The OSPF protocol is an example
of link state routing.
