UNIT-4 Multimedia Communications
Multimedia data has the following characteristics:
1. Voluminous. They demand very high data rates, possibly dozens or hundreds of Mbps.
2. Real-time and interactive. They demand low delay and synchronization between audio and video for "lip sync". In addition, applications such as video conferencing and interactive multimedia also require two-way traffic.
3. Sometimes bursty. Data rates fluctuate drastically; e.g., there may be no traffic most of the time, but sudden bursts to high volume in video-on-demand.
Quality of Service (QoS) for multimedia data transmission depends on many parameters. Some
of the most important are:
1. Data Rate. A measure of transmission speed, often in kilobits per second (kbps) or megabits per second (Mbps).
2. Latency (maximum frame/packet delay). Maximum time needed from transmission to reception, often measured in milliseconds (msec). In voice communication, for example, when the round-trip delay exceeds 50 msec, echo becomes a noticeable problem; when the one-way delay is longer than 250 msec, talker overlap will occur, since each caller will talk without knowing the other is also talking.
3. Packet loss or error. A measure (in percentage) of the error rate of packetized data transmission. Packets get lost or garbled, such as over the Internet. They may also be delivered late or in the wrong order. Since retransmission is often undesirable, a simple error-recovery method for real-time multimedia is to replay the last packet, hoping the error is not noticeable.
FIGURE 4.1: Jitters in frame playback: (a) high jitter; (b) low jitter
In general, for uncompressed audio/video, the desirable packet loss is < 10^-2 (losing, on average, no more than every hundredth packet). When it approaches 10%, it becomes intolerable. For compressed multimedia and ordinary data, the desirable packet loss is less than 10^-7 to 10^-8.
4. Jitter (or delay jitter). A measure of the smoothness of audio/video playback. Technically, jitter is related to the variance of frame/packet delays. A large buffer (jitter buffer) can be used to hold enough frames to allow the frame with the longest delay to arrive, so as to reduce playback jitter. However, this increases the latency and may not be desirable in real-time and interactive applications. Figure 4.1 illustrates examples of high and low jitter in frame playback.
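As a sketch of how jitter can be tracked in practice, the helper below follows the style of RTCP's running interarrival-jitter estimator (smoothing the absolute variation in packet transit times with a gain of 1/16); the function and parameter names are illustrative:

```python
def update_jitter(jitter, prev_transit, transit):
    """Update a running jitter estimate from two consecutive packet
    transit times (receive time minus send timestamp), RTCP-style."""
    d = abs(transit - prev_transit)      # delay variation between packets
    return jitter + (d - jitter) / 16.0  # smoothed (1/16 gain) update
```

For example, if two consecutive packets have transit times of 100 and 116 msec, a previous estimate of 0 is updated to 1.0 msec, and the estimate grows only gradually as variation persists.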
Multimedia Service Classes
Based on the above measures, multimedia applications can be classified into the following types:
1. Real-Time (also Conversational). Two-way traffic, low latency and jitter, possibly with prioritized delivery, such as voice telephony and video telephony.
2. Priority Data. Two-way traffic, low loss and low latency, with prioritized delivery, such as e-commerce applications.
3. Silver. Moderate latency and jitter, strict ordering and sync. One-way traffic, such as streaming video; or two-way traffic (also Interactive), such as web surfing and Internet games.
4. Best Effort (also Background). No real-time requirement, such as downloading or transferring large files (movies).
Although QoS is commonly measured by the above technical parameters, QoS itself is a
"collective effect of service performances that determine the degree of satisfaction of the user of
that service," as defined by the International Telecommunications Union. In other words, it has
everything to do with how the user perceives it.
In real-time multimedia, regularity is more important than latency (i.e., jitter and quality fluctuation are more annoying than slightly longer waiting); temporal correctness is more important than sound and picture quality (i.e., ordering and synchronization of audio and video are of primary importance); and humans tend to focus on one subject at a time.
User focus is usually at the center of the screen, and it takes time to refocus, especially after a
scene change. Together with the perceptual nonuniformity we have studied in previous chapters,
many issues of perception can be exploited in achieving the best perceived QoS in networked
multimedia.
Table 4.2 Tolerance of latency and jitter in digital audio and video
2. Differentiated Service (DiffServ) uses the DiffServ field [the former Type of Service (TOS) octet in the IPv4 packet and the Traffic Class octet in the IPv6 packet] to classify packets and enable their differentiated treatment.
• Widely deployed in intra-domain networks and enterprise networks, as it is simpler and scales well; it is also applicable to end-to-end networks.
• DiffServ, in conjunction with other QoS techniques, is emerging as the de facto QoS technology.
3. Multiple Protocol Label Switching (MPLS) facilitates the marriage of IP to OSI layer-2 technologies, such as ATM, by overlaying a protocol on top of IP. It introduces a 32-bit label and inserts one or more shim labels into the header of an IP packet in a backbone IP network. It thus creates tunnels, called Label Switched Paths (LSPs). By doing so, the backbone IP network becomes connection-oriented.
• Main advantages of MPLS:
a. Supports Traffic Engineering (TE), which is used essentially to control traffic flow.
b. Supports VPNs (Virtual Private Networks).
c. Both TE and VPN help the delivery of QoS for multimedia data.
4. DiffServ and MPLS can be used together to allow better control of both QoS performance per class and the provisioning of bandwidth, retaining the advantages of both MPLS and DiffServ. These mechanisms also help alleviate the perceived deterioration (high packet loss or error rate) under network congestion.
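To make the 32-bit shim label mentioned above concrete, the sketch below packs the standard MPLS shim-header layout (a 20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, and 8-bit TTL); the helper name and default values are our own:

```python
def pack_mpls_label(label, tc=0, s=1, ttl=64):
    """Pack a 32-bit MPLS shim header: 20-bit label | 3-bit traffic
    class | 1-bit bottom-of-stack flag | 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return word.to_bytes(4, "big")  # network byte order
```

One or more such 4-byte shim headers are stacked between the layer-2 header and the IP packet; the bottom-of-stack bit marks the last label in the stack.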
A broadcast message is sent to all nodes in the domain, a unicast message is sent to only one
node, and a multicast message is sent to a set of specified nodes.
• IP Multicast
- Anonymous membership: the source host multicasts to one of the IP-multicast addresses; it does not know who will receive the packets.
- Potential problem: too many packets may be traveling and staying alive in the network. Fortunately, IP packets have a time-to-live (TTL) field that limits the packet's lifetime: each router decrements the TTL of a passing packet by at least one, and the packet is discarded when its TTL reaches zero.
- One of the first trials of IP multicast was in March 1992, when the Internet Engineering Task Force (IETF) meeting in San Diego was broadcast (audio only) on the Internet.
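The TTL safeguard can be sketched as a toy model (function and variable names are illustrative):

```python
def forward(packet_ttl, hops):
    """Forward a packet across a chain of routers; each router
    decrements TTL by one and drops the packet at TTL zero.
    Returns the number of hops the packet actually traveled."""
    for hop in range(hops):
        packet_ttl -= 1        # router decrements TTL by at least one
        if packet_ttl <= 0:
            return hop + 1     # packet discarded here
    return hops                # packet survived the whole path
```

A packet sent with TTL 3 toward a 10-hop destination is thus dropped after 3 hops, which is how stray multicast packets are prevented from circulating forever.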
The IP-multicast method described above is based on UDP (not TCP), so as to avoid excessive acknowledgments from multiple receivers for every message. As a result, packets are delivered by "best effort", so reliability is limited.
MBone maintains a flat virtual topology and does not provide good route aggregation (at peak time, MBone had approximately 10,000 routes). Hence, it is not scalable. Moreover, the original design is highly distributed (and simplistic). It assumes no central management, which results in ineffective tunnel management; that is, tunnels connecting islands are not optimally allocated. Sometimes multiple tunnels are created over a single physical link, causing congestion.
The original Internet design provided "best-effort" service and was adequate for applications such as e-mail and FTP. However, it is not suitable for real-time multimedia applications.
Real-Time Transport Protocol (RTP)
1. Designed for the transport of real-time data such as audio and video streams:
- Primarily intended for multicast.
- Used in nv (network video) for MBone, Netscape LiveMedia, Microsoft NetMeeting, and Intel Videophone.
2. Usually runs on top of UDP, which provides efficient (but less reliable) connectionless datagram service.
Since UDP does not guarantee that data packets arrive in the original order (not to mention synchronization of multiple sources), RTP must create its own timestamping and sequencing mechanisms to ensure the ordering.
RTP introduces the following additional parameters in the header of each packet:
1. Payload type indicates the media data type as well as its encoding scheme (e.g., PCM, H.261/H.263, MPEG-1, 2, and 4 audio/video, etc.), so the receiver knows how to decode it.
2. Timestamp is the most important mechanism of RTP. The timestamp records the instant when the first octet of the packet is sampled; it is set by the sender. With the timestamps, the receiver can play the audio/video in proper timing order and synchronize multiple streams (e.g., audio and video) when necessary.
3. Sequence number complements the function of the timestamp. It is incremented by one for each RTP data packet sent, so that the packets can be reconstructed in order by the receiver. This becomes necessary when, for example, all packets of a video frame receive the same timestamp, and timestamping alone becomes insufficient.
4. Synchronization source (SSRC) ID identifies sources of multimedia data (e.g., audio,
video). If the data come from the same source (translator, mixer), they will be given the
same SSRC ID, so as to be synchronized.
5. Contributing Source (CSRC) ID identifies the source of contributors, such as all speakers
in an audio conference.
The following figure 4.3 shows the RTP header format. The first 12 octets are of fixed format, followed by optional (0 or more) 32-bit Contributing Source (CSRC) IDs.
Bits 0 and 1 are for the version of RTP, bit 2 (P) signals a padded payload, bit 3 (X) signals an extension to the header, and bits 4 through 7 form a 4-bit CSRC count that indicates the number of CSRC IDs following the fixed part of the header.
Bit 8 (M) signals the first packet in an audio frame or the last packet in a video frame, since an audio frame can be played out as soon as the first packet is received, whereas a video frame can be rendered only after the last packet is received. Bits 9 through 15 describe the payload type, and bits 16 through 31 carry the sequence number, followed by a 32-bit timestamp and a 32-bit Synchronization Source (SSRC) ID.
FIGURE 4.3: RTP packet header
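The fixed 12-octet header just described can be assembled with a few bit operations. The sketch below follows the field layout of Figure 4.3; the helper name and the default field values are illustrative:

```python
import struct

def pack_rtp_header(payload_type, seq, timestamp, ssrc,
                    version=2, padding=0, extension=0,
                    csrc_count=0, marker=0):
    """Pack the 12-byte fixed RTP header: V|P|X|CC, M|PT,
    sequence number, timestamp, SSRC (all big-endian)."""
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | payload_type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
```

For a version-2 packet with no padding, extension, or CSRCs, the first octet comes out as 0x80; the payload type occupies the low 7 bits of the second octet.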
Real-Time Control Protocol (RTCP)
RTCP gathers statistics for a media connection, including information such as transmitted octet and packet counts, lost packet counts, jitter, and round-trip delay time. An application may use this information to control quality-of-service parameters, perhaps by limiting flow or using a different codec. The main RTCP packet types are:
1. Receiver report (RR) provides quality feedback (number of the last packet received, number of lost packets, jitter, and timestamps for calculating round-trip delays).
2. Sender report (SR) provides information about the reception of RRs, the number of packets/bytes sent, and so on.
3. Source description (SDES) provides information about the source (e-mail address, phone number, full name of the participant).
4. Bye indicates the end of participation.
5. Application-specific functions (APP) provide for future extension of new features.
RTP and RTCP packets are sent to the same IP address (multicast or unicast) but on different ports.
Resource Reservation Protocol (RSVP)
RSVP was developed to guarantee desirable QoS, mostly for multicast, although it is also applicable to unicast.
The Resource Reservation Protocol (RSVP) is a Transport Layer protocol designed to reserve resources across a network for an integrated services Internet. RSVP operates over an IPv4 or IPv6 Internet Layer and provides receiver-initiated setup of resource reservations for multicast or unicast data flows, with scaling and robustness. It does not transport application data but is similar to a control protocol, like ICMP or IGMP.
RSVP can be used by either hosts or routers to request or deliver specific levels of quality of
service (QoS) for application data streams or flows. RSVP defines how applications place
reservations and how they can relinquish the reserved resources once the need for them has
ended. RSVP operation will generally result in resources being reserved in each node along a
path.
RSVP is not a routing protocol and was designed to interoperate with current and future routing protocols. RSVP by itself is rarely deployed in telecommunication networks today, but the traffic engineering extension of RSVP, RSVP-TE, is becoming more widely accepted in many QoS-oriented networks. Next Steps in Signaling (NSIS) is a replacement for RSVP.
Main Challenges of RSVP
(a) There can be a large number of senders and receivers competing for the limited network bandwidth.
(b) The receivers can be heterogeneous, demanding different contents with different QoS.
FIGURE 4.4: A scenario of network resource reservation with RSVP: (a) senders S1 and S2 send out their PATH messages to receivers R1, R2, and R3; (b) receiver R1 sends out a RESV message to S1; (c) receiver R2 sends out a RESV message to S2; (d) receivers R2 and R3 send out their RESV messages to S1
The most important messages of RSVP are Path and Resv. A Path message is initiated by the
sender and travels towards the multicast (or unicast) destination addresses. It contains
information about the sender and the path (e.g., the previous RSVP hop), so the receiver can find
the reverse path to the sender for resource reservation. A Resv message is sent by a receiver that
wishes to make a reservation.
RSVP is receiver-initiated. A receiver (at a leaf of the multicast spanning tree) initiates the reservation request Resv, and the request travels back toward the sender, but not necessarily all the way. A reservation will be merged with an existing reservation made by other receiver(s) for the same session as soon as they meet at a router. The merged reservation will accommodate the highest bandwidth requirement among all merged requests. The receiver-initiated scheme is highly scalable, and it meets users' heterogeneous needs.
RSVP creates only soft state. The receiver host must maintain the soft state by periodically
sending the same Resv message; otherwise, the state will time out. There is no distinction
between the initial message and any subsequent refresh message. If there is any change in
reservation, the state will automatically be updated according to the new reservation parameters
in the refreshing message. Hence, the RSVP scheme is highly dynamic.
Fig. 4.4 depicts a simple network with two senders (S1, S2), three receivers (R1, R2, and R3), and four routers (A, B, C, D):
1. In (a), Path messages are sent by both S1 and S2 along their paths to R1, R2, and R3.
2. In (b) and (c), R1 and R2 send out Resv messages to S1 and S2, respectively, to make reservations for S1's and S2's resources. Note that from C to A, two separate channels must be reserved, since R1 and R2 requested different data streams.
3. In (d), R2 and R3 send out their Resv messages to S1 to make additional requests. R3's request is merged with R1's previous request at A, and R2's is merged with R1's at C. Any variation of QoS that demands higher bandwidth can be dealt with by modifying the reservation state parameters.
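The merging rule described above, in which the merged reservation accommodates the highest bandwidth requirement among all merged requests, can be sketched as a per-session maximum at a router; the function and session names are illustrative:

```python
def merge_reservations(requests):
    """Merge Resv requests arriving at a router. Each request is a
    (session, bandwidth) pair; requests for the same session are
    merged by keeping the highest bandwidth requirement."""
    merged = {}
    for session, bandwidth in requests:
        merged[session] = max(bandwidth, merged.get(session, 0))
    return merged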
The Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points. Clients of media servers issue VCR-like commands, such as play and pause, to facilitate real-time control of playback of media files from the server.
• RTSP Protocol: for communication between a client and a stored media server (Fig. 4.5).
1. Requesting presentation description: the client issues a DESCRIBE request to the Stored Media Server to obtain the presentation description (media types, frame rate, resolution, codec, etc.).
2. Session setup: the client issues a SETUP to inform the server of the destination IP address, port number, protocols, and TTL (for multicast).
3. Requesting and receiving media: after receiving a PLAY, the server starts to transmit streaming audio/video data using RTP.
4. Session closure: TEARDOWN closes the session.
The transmission of streaming data itself is not a task of the RTSP protocol. Most RTSP servers use the Real-time Transport Protocol (RTP) in conjunction with the Real-time Control Protocol (RTCP) for media stream delivery; however, some vendors implement proprietary transport protocols. The RTSP server from RealNetworks, for example, also features RealNetworks' proprietary Real Data Transport (RDT).
RTSP was developed by the Multiparty Multimedia Session Control Working Group (MMUSIC WG) of the Internet Engineering Task Force (IETF) and published as RFC 2326 in 1998. RTSP used with RTP and RTCP allows for the implementation of rate adaptation.
Streaming Audio and Video. In the early days, multimedia data was transmitted over the network (often with slow links) as a whole large file, which would be saved to disk, then played back. Nowadays, more and more audio and video data is transmitted from a stored media server to the client in a datastream that is almost instantly decoded: streaming audio and streaming video.
Usually, the receiver will set aside buffer space to prefetch the incoming stream. As soon as the buffer is filled to a certain extent, the (usually) compressed data will be uncompressed and played back. Clearly, the buffer space needs to be sufficiently large to deal with possible jitter and to produce continuous, smooth playback. On the other hand, too large a buffer will introduce unnecessary initial delay, which is especially undesirable for interactive applications such as audio- or videoconferencing.
2. Session setup. The client issues a SETUP to inform the server of the destination IP
address, port number, protocols, and TTL (for multicast). The session is set up when the
server returns a session ID.
3. Requesting and receiving media. After receiving a PLAY, the server starts to transmit streaming audio/video data using RTP. This may be followed by a RECORD or PAUSE. Other VCR commands, such as FAST-FORWARD and REWIND, are also supported. During the session, the client periodically sends an RTCP packet to the server, to provide feedback information about the QoS received.
4. Session closure. TEARDOWN closes the session.
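Since RTSP is a text-based protocol with HTTP-like requests, the messages exchanged in the DESCRIBE/SETUP/PLAY/TEARDOWN steps above can be sketched as plain request strings; the helper and the example URL below are hypothetical:

```python
def rtsp_request(method, url, cseq, headers=None):
    """Build a minimal RTSP/1.0 request: request line, CSeq header,
    any extra headers, and the terminating blank line."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for key, value in (headers or {}).items():
        lines.append(f"{key}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"
```

A SETUP might carry a Transport header, e.g. `rtsp_request("SETUP", "rtsp://example.com/movie", 2, {"Transport": "RTP/AVP;unicast;client_port=8000-8001"})`, while PLAY and TEARDOWN typically need only the session header in addition to CSeq.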
The Public Switched Telephone Network (PSTN) relies on copper wires carrying analog voice signals. It provides reliable and low-cost voice and facsimile services. In the eighties and nineties, modems were a popular means of "data over voice networks". In fact, they were predominant before the introduction of ADSL and cable modems.
As PCs and the Internet became readily available and more and more voice and data communications became digital (e.g., in ISDN), "voice over data networks", especially Voice over IP (VoIP), started to attract a great deal of interest in research and user communities. With ever-increasing network bandwidth and the ever-improving quality of multimedia data compression, Internet telephony has become a reality. Increasingly, it is not restricted to voice (VoIP); it is about integrated voice, video, and data services.
The main advantages of Internet telephony over POTS are the following:
• It uses packet switching, not circuit switching; hence, network usage is much more efficient (voice communication is bursty and VBR-encoded).
• With the technologies of multicast or multipoint communication, multiparty calls are not much more difficult than two-party calls.
• With advanced multimedia data-compression techniques, various degrees of QoS can be supported and dynamically adjusted according to the network traffic, an improvement over the "all or none" service in POTS.
• Good graphical user interfaces can be developed to show available features and services, monitor call status and progress, and so on.
As the following figure shows, the transport of real-time audio (and video) in Internet telephony is supported by RTP (whose control protocol is RTCP). Streaming media is handled by RTSP, and Internet resource reservation is taken care of by RSVP.
Internet telephony is not simply a streaming media service over the Internet, because it requires a sophisticated signaling protocol. A streaming media server can be readily identified by a URI (Uniform Resource Identifier), whereas acceptance of a call via Internet telephony depends on the callee's current location, capability, availability, and desire to communicate. The following are brief descriptions of the H.323 standard and one of the most commonly used signaling protocols, the Session Initiation Protocol (SIP).
FIGURE 4.6: Network protocol structure for internet telephony
H.323. H.323 is a standard for packet-based multimedia communication services over networks (LAN, Internet, wireless network, etc.) that do not provide a guaranteed QoS. It specifies signaling protocols and describes terminals, multipoint control units (for conferencing), and gateways for integrating Internet telephony with General Switched Telephone Network (GSTN) data terminals.
• Call setup. The caller sends the gatekeeper (GK) a Registration, Admission and Status
(RAS) Admission Request (ARQ) message, which contains the name and phone number
of the callee. The GK may either grant permission or reject the request, with reasons such
as "security violation" and "insufficient bandwidth".
• Capability exchange. An H.245 control channel will be established, for which the first
step is to exchange capabilities of both the caller and callee, such as whether it is audio,
video, or data; compression and encryption, and so on.
H.323 provides mandatory support for audio and optional support for data and video. It is
associated with a family of related software standards that deal with call control and data
compression for Internet telephony. Following are some of the related standards:
• H.235. Security and encryption for H.323 and other H.245-based multimedia terminals
Audio Codecs
• G.711. Codec for 3.1 kHz audio over 48, 56, or 64 kbps channels. G.711 describes Pulse
Code Modulation for normal telephony
• G.722. Codec for 7 kHz audio over 48, 56, or 64 kbps channels
• G.723.1. Codec for 3.1 kHz audio over 5.3 or 6.3 kbps channels. (The VoIP Forum
adopted G.723.1 as the codec for VoIP.)
• G.728. Codec for 3.1 kHz audio over 16 kbps channels
• G.729, G.729a. Codecs for 3.1 kHz audio over 8 kbps channels. (The Frame Relay Forum adopted G.729a as the codec for voice over frame relay.)
Video Codecs
• H.263. Codec for low-bitrate video (< 64 kbps) over the GSTN
Session Initiation Protocol (SIP)
Similar to HTTP, SIP is a text-based protocol, different from H.323. It is also a client-server protocol. A caller (the client) initiates a request, which a server processes and responds to. There are three types of servers. A proxy server and a redirect server forward call requests. The difference between the two is that the proxy server forwards the requests to the next-hop server, whereas the redirect server returns the address of the next-hop server to the client, so as to redirect the call toward the destination.
The third type is a location server, which finds the current locations of users. Location servers usually communicate with the redirect or proxy servers. They may use finger, rwhois, the Lightweight Directory Access Protocol (LDAP), or other multicast-based protocols to determine a user's address.
SIP can advertise its session using e-mail, newsgroups, web pages or directories, or the Session Announcement Protocol (SAP), a multicast protocol.
The above figure illustrates a possible scenario when a caller initiates a SIP session:
SIP can also use Session Description Protocol (SDP) to gather information about the callee's
media capabilities.
Session Description Protocol (SDP). As its name suggests, SDP describes multimedia sessions.
As in SIP, SDP descriptions are in textual form. They include the number and types of media
streams (audio, video, whiteboard session, etc.), destination address (unicast or multicast) for
each stream, sending and receiving port numbers, and media formats (payload types). When
initiating a call, the caller includes the SDP information in the INVITE message. The called party
responds and sometimes revises the SDP information, according to its capability.
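For illustration, a minimal SDP description of a session with one audio and one video stream might look like the following (the user name, addresses, and ports are hypothetical; payload types 0 and 31 are the standard static assignments for PCM audio and H.261 video):

```
v=0
o=alice 2890844526 2890844526 IN IP4 198.51.100.1
s=Example Session
c=IN IP4 198.51.100.1
t=0 0
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 31
```

The `m=` lines carry the media type, receiving port, transport (RTP with the audio/video profile), and payload type, which is exactly the information the called party revises when its capabilities differ.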
Interactive TV (ITV) is a multimedia system based on the television sets in homes. It can support a growing number of activities, such as video-on-demand, interactive games, e-commerce, and information services.
A new development in Digital Video Broadcasting (DVB) is the Multimedia Home Platform (DVB-MHP), which supports all the activities above, as well as an electronic program guide (EPG) for television.
The fundamental differences between ITV and conventional cable TV are, first, that ITV invites user interactions, hence the need for two-way traffic: downstream (content provider to user) and upstream (user to content provider); and second, that ITV is rich in information and multimedia content.
To perform the above functions, a Set-top Box (STB) is required, which generally has the following components:
1. Network interface and communication unit, including tuner and demodulator (to extract the digital stream from the analog channel), security devices, and a communication channel for basic navigation of the WWW and digital libraries, as well as services and maintenance
2. Processing unit, including CPU, memory, and a special-purpose operating system for the STB
3. Audio/video unit, including audio and video (MPEG-2 and 4) decoders, Digital Signal Processor (DSP), buffers, and D/A converters
4. Graphics unit, supporting real-time 3D graphics for animation and games
5. Peripheral control unit, with controllers for disks, audio and video I/O devices (e.g., digital video cameras), CD/DVD reader and writer, and so on
Among all possible Media-on-Demand (MOD) services, the most popular is likely to be subscription to movies: over high-speed networks, customers can specify the movies they want and the time they want to view them. The statistics of such services suggest that most of the demand is usually concentrated on a few (10 to 20) popular movies (e.g., new releases and top-ten movies of the season). This makes it possible to multicast or broadcast these movies, since a number of clients can be put into the next group following their request.
An important quality measure of such MOD service is the waiting time (latency). Given the potentially extremely high bandwidth of fiber-optic networks, it is conceivable that the entire movie could be fed to the client in a relatively short time if it has access to some high-speed network. The problem with this approach is the need for an unnecessarily large storage space at the client side.
7. The access time for Pyramid broadcasting is determined by the size of S1. By default, we set α = B/(M·K) to yield the shortest access time.
8. The access time drops exponentially with the increase in the total bandwidth B, because α can be increased linearly.
3. As shown in Fig. 4.12, two clients who made requests at time intervals (1, 2) and (16, 17), respectively, have their respective transmission schedules. At any given moment, no more than two segments need to be received.
1. Harmonic broadcasting adopts a different strategy, in which the size of all segments remains constant, whereas the bandwidth of channel i is B_i = b/i, where b is the movie's playback rate.
2. The total bandwidth allocated for delivering the movie is thus
   B = Σ_{i=1..K} b/i = b · H_K,
   where K is the total number of segments and H_K = Σ_{i=1..K} 1/i is the Harmonic number of K.
FIGURE 4.11: Harmonic Broadcasting.
1. As Fig. 4.11 shows, after requesting the movie, the client will be allowed to download and play the first occurrence of segment S1 from Channel 1. Meanwhile, it will download all other segments from their respective channels.
2. The advantage of Harmonic broadcasting is that the Harmonic number grows slowly with K.
3. For example, when K = 30, H_K ≈ 4. Hence, the demand on the total bandwidth (in this case 4·b) is modest.
4. It also yields small segments, only 4 minutes (120/30) each in length. Hence, the access time for Harmonic broadcasting is generally shorter than for Pyramid broadcasting.
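The bandwidth arithmetic above is easy to verify; the sketch below computes the total bandwidth b · H_K for Harmonic broadcasting (the function name is ours):

```python
def harmonic_total_bandwidth(b, K):
    """Total bandwidth for Harmonic broadcasting with K segments:
    channel i uses b/i, so the total is b * H_K."""
    h_k = sum(1.0 / i for i in range(1, K + 1))  # Harmonic number H_K
    return b * h_k
```

With K = 30 this gives roughly 4·b, matching the figure quoted in the text, and doubling K to 60 adds only about 0.7·b more, which is why the Harmonic number is said to grow slowly.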
Pagoda Broadcasting
1. Harmonic broadcasting uses a large number of low-bandwidth streams, while Pyramid broadcasting schemes use a small number of high-bandwidth streams.
2. Harmonic broadcasting generally requires less bandwidth than Pyramid broadcasting. However, it is hard to manage a large number of independent datastreams using Harmonic broadcasting.
3. Paris, Carter, and Long presented Pagoda Broadcasting, a broadcasting scheme that tries to combine the advantages of the Harmonic and Pyramid schemes.
Stream Merging
5. As shown in Fig. 16.16, the "first stream" B starts at time t = 2. The solid line indicates the playback rate, and the dashed line indicates the receiving bandwidth, which is twice the playback rate. The client is allowed to prefetch from an earlier ("second") stream A, which was launched at t = 0. At t = 4, stream B joins A.
6. A variation of stream merging is Piggybacking, in which the playback rates of the streams are slightly and dynamically adjusted so as to enable merging (piggybacking) of the streams.
• To cope with VBR and network load fluctuation, buffers are usually employed at both sender and receiver ends:
- A Prefetch Buffer is introduced at the client side. If the size of frame t is d(t), the buffer size is B, and the number of data bytes received so far (at play time for frame t) is A(t), then for all t ∈ {1, 2, ..., N}, it is required that
  Σ_{i=1..t} d(i) ≤ A(t) ≤ Σ_{i=1..t-1} d(i) + B.
- When A(t) < Σ_{i=1..t} d(i), we have inadequate network throughput and hence buffer underflow (or starvation), whereas when A(t) > Σ_{i=1..t-1} d(i) + B, we have excessive network throughput and buffer overflow.
- Both are harmful to smooth and continuous playback. In buffer underflow there is no available data to play, and in buffer overflow media packets must be dropped.
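The underflow/overflow conditions above can be checked directly; the sketch below evaluates the prefetch-buffer inequality frame by frame (function and variable names are illustrative):

```python
def check_buffer(frame_sizes, received, buffer_size):
    """For each frame t, check sum(d[1..t]) <= A(t) <= sum(d[1..t-1]) + B.
    frame_sizes: d(t) per frame; received: cumulative bytes A(t) at each
    frame's play time; buffer_size: B. Returns a status per frame."""
    status = []
    consumed = 0                   # sum of d(1..t-1)
    for d, a in zip(frame_sizes, received):
        needed = consumed + d      # sum of d(1..t)
        if a < needed:
            status.append("underflow")   # not enough data to play frame t
        elif a > consumed + buffer_size:
            status.append("overflow")    # arriving data exceeds buffer room
        else:
            status.append("ok")
        consumed = needed
    return status
```

For instance, with three 2-byte frames, a 4-byte buffer, and cumulative arrivals of 2, 4, and 6 bytes, playback proceeds smoothly, while an arrival of only 1 byte before the first frame's play time triggers underflow.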