White Paper: QoS Protocols & Architectures
Stardust.com, Inc.
1901 S. Bascom Ave, Suite 333
Campbell, California 95008
Phone: 408-879-8080
Fax: 408-879-8081
www.stardust.com
www.qosforum.com
QoS Protocols & Architectures
Quality of Service protocols use a variety of complementary
mechanisms to enable deterministic end-to-end data delivery
Contents
Introduction
DiffServ – Prioritization
QoS architectures
Policy-enabled QoS
Conclusion
References
Copyright © 1999 Stardust.com, Inc. All Rights Reserved. The text of this publication, or any part thereof, may not be reproduced or transmitted in
any form or by any means, electronic or mechanical, including photocopying, recording, storage in an information retrieval system, or otherwise,
without prior written permission of Stardust.com, Inc.
Stardust is a registered trademark and Stardust.com is a trademark of Stardust Technologies, Inc. iBAND and “more sandals than suits” are service
marks of Stardust Forums, Inc.
Stardust.com, Inc. does not itself distribute, ship or sell nor permit others to distribute, ship or sell its copyrighted materials to individuals or businesses
in countries that are not members of the Berne Convention or the Universal Copyright Convention.
The purpose of this paper is to provide an introduction to and overview of the Quality
of Service (QoS) protocols now available or under development for Internet Protocol
(IP) based networks. After a brief introduction to the topic, we provide a high-level
description of how each QoS protocol operates. We consider the many architectures
in which the protocols work together along with policy management to provide end-
to-end QoS for IP application traffic, and end by briefly describing the state of QoS
support for IP multicast and explicit policy support.
Introduction
Standard Internet Protocol (IP)-based networks provide “best effort” data delivery by
default. Best-effort IP allows the complexity to stay in the end-hosts, so the network
can remain relatively simple [e2e]. This scales well, as evidenced by the ability of the
Internet to support its phenomenal growth. As more hosts are connected, network
service demands eventually exceed capacity, but service is not denied. Instead it
degrades gracefully. Although the resulting variability in delivery delays (jitter) and
packet loss do not adversely affect typical Internet applications (email, file transfer and
Web applications), other applications cannot adapt to inconsistent service levels.
Delivery delays cause problems for applications with real-time requirements, such as
those that deliver multimedia, the most demanding of which are two-way applications
like telephony.
A number of QoS protocols have evolved to satisfy the variety of application needs.
We describe these protocols individually, then describe how they fit together in various
architectures with the end-to-end principle in mind. The challenge of these IP QoS
technologies is to provide differentiated delivery services for individual flows or
aggregates without breaking the Net in the process. Adding “smarts” to the Net and
improving on “best effort” service represents a fundamental change to the design that
made the Internet such a success. The prospect of such a potentially drastic change
makes many of the Internet’s architects very nervous.
To avoid these potential problems as QoS protocols are applied to the Net, the end-to-
end principle is still the primary focus of QoS architects. As a result, the fundamental
principle of “Leave complexity at the ‘edges’ and keep the network ‘core’ simple” is a
central theme among QoS architecture designs. This is less a matter of the individual
QoS protocols themselves than of how they are used together to enable end-to-end
QoS. We explore these architectures later in this paper after we give a brief overview
of each of the key QoS protocols.
There is more than one way to characterize Quality of Service (QoS). Generally
speaking, QoS is the ability of a network element (e.g. an application, a host or a
router) to provide some level of assurance for consistent network data delivery.
Some applications are more stringent about their QoS requirements than others, and
for this reason (among others) we have two basic types of QoS available: resource
reservation (integrated services), in which network resources are allocated according to
an application's explicit request for a given level of service, and prioritization
(differentiated services), in which traffic is classified and network resources are
apportioned according to policy. Applications, network topology and policy dictate
which type of QoS is most appropriate for individual flows or aggregates. To
accommodate the need for these
different types of QoS, there are a number of different QoS protocols and algorithms:
Table 1: The different bandwidth management algorithms and protocols, their relative QoS
levels, and whether they are activated by network elements (Net), by applications (App), or by both.
Table 1 compares the QoS protocols in terms of the level of QoS they provide and
where the service and control are implemented -- in the Application (App) or in the
Network (Net). Notice that this table also refers to router queue management
algorithms such as Fair Queuing (FQ) and Random Early Detection (RED). Queue
management (including the number of queues and their depth, as well as the
algorithms used to manage them) is very important to QoS implementations. We refer
to them here only to illustrate a full spectrum of QoS capabilities, but as they are
largely transparent to applications and not explicitly QoS algorithms, we will not refer
to them again. For more information see [Queuing].
The QoS protocols we are focused on in this paper vary, but they are not mutually
exclusive of one another. On the contrary, they complement each other nicely. There
is a variety of architectures in which these protocols work together to provide end-to-
end QoS across multiple service providers. We now describe each of these protocols in
more detail, covering their essential mechanics and functionality, and then turn to the
various architectures in which they can be used together to provide end-to-end QoS.
• When each RSVP router along the upstream path receives the RESV
message, it uses the admission control process to authenticate the
request and allocate the necessary resources. If the request cannot be
satisfied (due to lack of resources or authorization failure), the router
returns an error back to the receiver. If accepted, the router sends the
RESV upstream to the next router.
• When the last router receives the RESV and accepts the request, it
sends a confirmation message back to the receiver (note: the “last
router” is either closest to the sender or at a reservation merge point
for multicast flows).
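To make the per-hop admission decision described in the list above concrete, here is a minimal sketch in Python. It is an illustration only: the link-capacity model, the user set and the method names are our own assumptions, not part of RSVP or of any router implementation.

    class AdmissionError(Exception):
        pass

    class RsvpHop:
        """Toy model of one RSVP-enabled router's admission control (illustrative only)."""
        def __init__(self, reservable_bps, authorized_users):
            self.reservable_bps = reservable_bps    # bandwidth the router may commit
            self.reserved_bps = 0                   # bandwidth already committed
            self.authorized_users = authorized_users

        def process_resv(self, user, requested_bps):
            # Authenticate the request (a stand-in for real policy control).
            if user not in self.authorized_users:
                raise AdmissionError("authorization failure")    # error returned to receiver
            # Allocate the necessary resources, if available.
            if self.reserved_bps + requested_bps > self.reservable_bps:
                raise AdmissionError("insufficient resources")   # error returned to receiver
            self.reserved_bps += requested_bps
            return "RESV forwarded upstream to the next hop"

    hop = RsvpHop(reservable_bps=10_000_000, authorized_users={"alice"})
    print(hop.process_resv("alice", 1_500_000))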
[Figure 1 ("RSVP End-to-End") shows the reservation setup between sender and receiver. The PATH message follows the downstream data route to the receiver(s) and carries the sender's Traffic Specification (TSpec); each RSVP-enabled router installs PATH state and forwards the message to the next hop on the route to the receiver(s). The RESV message goes upstream, following the route recorded by the PATH messages, and carries the resource reservation request: the TSpec from the sender, an RSpec specifying the QoS level (controlled load or guaranteed), and a Filter Spec (transport and port) that together form the Flow Descriptor. Each RSVP-enabled router either makes the allocation and forwards the RESV upstream, or rejects it and returns an error back to the receiver. PATH and RESV messages are passed through non-RSVP routers transparently, although these routers are weak links in the chain of resource reservations.]
Figure 1: RSVP "PATH" and "RESV" messages are used to establish a resource reservation between
a sender and receiver. There is an explicit tear-down of reservations also (not shown).
RSVP enables Integrated Services, of which there are two fundamentally different
types: Guaranteed service, which provides firm, quantified bounds on end-to-end delay
and assured bandwidth, and Controlled Load service, which provides a quality of
service closely approximating what the flow would receive from an unloaded best-effort
network.
Data flows for an RSVP session are characterized by senders in the TSpec (traffic
specification) contained in PATH messages, and mirrored in the RSpec (reservation
specification) sent by receivers in RESV messages. The token-bucket parameters
(bucket rate, bucket depth, and peak rate) are part of the TSpec and RSpec. A complete
list of the parameter descriptions can be found in [RSVP IntServ, IntServ Parameters,
IntServ Controlled]. For both Guaranteed and Controlled Load service, non-
conforming (out-of-spec) traffic is treated like non-QoS best-effort traffic.
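To make the token-bucket parameters concrete, the following minimal Python sketch checks packets against a bucket rate and bucket depth. The peak-rate check is omitted, and the numeric values are illustrative assumptions, not taken from any specification.

    import time

    class TokenBucket:
        """Illustrative token-bucket check: 'rate' bytes of credit accrue per second,
        up to 'depth' bytes. Packets that exceed the available credit are out-of-spec
        and would be treated as ordinary best-effort traffic."""
        def __init__(self, rate, depth):
            self.rate = rate                  # TSpec bucket rate, bytes per second
            self.depth = depth                # TSpec bucket depth, bytes
            self.tokens = depth
            self.last = time.monotonic()

        def conforms(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True                   # in-spec: eligible for the reserved service
            return False                      # out-of-spec: handled as best effort

    bucket = TokenBucket(rate=125_000, depth=10_000)   # roughly a 1 Mbit/s reservation
    print(bucket.conforms(1_500))                      # a 1500-byte packet conforms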
In our description of the traffic and reservation specifications, we have omitted details
about other RSVP and Integrated Service features such as:
2) Reservation styles, which deal with how one reservation interacts with
others.
As mentioned already, RSVP provides the highest level of IP QoS available. It allows
an application to request QoS with a high level of granularity and with the best
guarantees of service delivery possible. This sounds wonderful and leaves one
wondering why we need anything else. The reason is that it comes at the price of
complexity and overhead, and thus is overkill for many applications and (as we describe
later) for some portions of the network. Simpler, less fine-tuned methods are needed,
and that is what DiffServ provides, as we describe now.
DiffServ – Prioritization
The DiffServ architecture comprises a classifier followed by a traffic conditioner, whose functions include marking and metering:
• Classifiers come in two types: Behavior Aggregate (BA), which uses only the DSCP value, and Multi-Field (MF), which uses other header information (such as source address, protocol, or port numbers). For BA classification, the DSCP is essentially an index into the Per-Hop Behavior (PHB) table; policy dictates how the PHB is configured for each DSCP.
• Markers are used to add a DSCP when none exists, to add a DSCP as mapped from an RSVP reservation, to map between the DSCP and the IP TOS value, or to change the DSCP as local policy dictates.
• Metering simply accumulates statistics, most likely in an SNMP MIB. A DiffServ MIB is not yet defined, and there is some question about the granularity it will provide (for example, whether there will be metrics for every PHB).
• Conditioning essentially involves applying the PHB. Behaviors may include marking or metering, but also queue selection and treatment, and policing (shaping traffic by adding delay or dropping packets in order to conform to the traffic profile described in the SLA with the destination or source, depending on whether this is an egress or ingress point). Conditioning could also authenticate the traffic for admission control.
Figure 2: Differentiated Services architecture, with a breakout of some specifics. This functionality is
enabled in every DiffServ-enabled router, although not all functions are used all the time. Typically,
border routers (at ingress and egress points) apply these functions, but interior routers may also.
DiffServ assumes the existence of a service level agreement (SLA) between networks
that share a border. The SLA establishes the policy criteria, and defines the traffic
profile. It is expected that traffic will be policed and smoothed at egress points
according to the SLA, and any traffic “out of profile” (i.e., above the upper bounds of
bandwidth usage stated in the SLA) at an ingress point has no guarantees (or may
incur extra costs, according to the SLA). The policy criteria used can include time of
day, source and destination addresses, transport, and/or port numbers (i.e., application
IDs). Basically, any context or traffic content (including headers or data) can be used
to apply policy.
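For illustration, a multi-field classification decision can be sketched as a rule lookup. The header fields, prefixes, ports and codepoint values below are invented for the example; in practice they would come from the SLA and local policy.

    # Illustrative MF classification rules: (source prefix, protocol, destination port) -> DSCP.
    # The rules and codepoints are invented for this example.
    RULES = [
        (("10.1.0.", "udp", 5004), 0b101110),   # e.g. interactive media gets a high-priority codepoint
        (("10.1.0.", "tcp", 80),   0b001010),   # e.g. web traffic gets a lower-priority codepoint
    ]
    DEFAULT_DSCP = 0b000000                      # everything else stays best effort

    def classify(src_addr, protocol, dst_port):
        for (prefix, proto, port), dscp in RULES:
            if src_addr.startswith(prefix) and protocol == proto and dst_port == port:
                return dscp
        return DEFAULT_DSCP

    print(bin(classify("10.1.0.7", "udp", 5004)))   # 0b101110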
Figure 3: Differentiated Services Code Points (DSCP) redefine the IPv4 Type of Service byte. IP
Precedence bits are preserved in class selector codepoints & PHBs, but TOS values are not.
When applied, the protocol mechanism the service uses is a bit pattern in the
“DS byte,” which for IPv4 is the Type-of-Service (TOS) octet and for IPv6 is the Traffic
Class octet. As illustrated in Figure 3, although the DS field uses the IPv4 TOS byte
[DiffServ Field], as defined in RFC 791 [IP], it does not preserve the original IPv4
TOS bit values as defined by RFC 1349 [TOS]. The IP Precedence bits (0-2) are
preserved, however. And although it is possible to assign any PHB to the codepoints
in this range, the (required) default PHBs are equivalent to IP Precedence service
descriptions, as described in detail in RFC 1812 [RouterReqs].
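On hosts, the DS byte can often be set per socket through the legacy TOS option. The sketch below assumes a Berkeley-sockets style stack (such as Linux) that exposes and honours IP_TOS; the codepoint shown is only an example, and whether the marking is respected downstream depends on DiffServ policy at each network border.

    import socket

    EF_DSCP = 46                      # example codepoint (Expedited Forwarding)
    ds_byte = EF_DSCP << 2            # the DSCP occupies the upper six bits of the old TOS octet

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Ask the local IP stack to write this value into the TOS/DS field of outgoing datagrams.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ds_byte)
    sock.sendto(b"marked datagram", ("192.0.2.1", 5004))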
The simplicity with which DiffServ prioritizes traffic belies its flexibility and power. When
DiffServ uses RSVP parameters or specific application types to identify and classify
constant-bit-rate (CBR) traffic, it will be possible to establish well-defined aggregate
flows that may be directed to fixed bandwidth pipes. As a result, you could share
resources efficiently and still provide guaranteed service. We will describe this type of
usage later as we describe the various QoS architectures possible.
MPLS is more of a “traffic engineering” protocol than a QoS protocol, per se. MPLS
routing is used to establish “fixed bandwidth pipes” analogous to ATM or Frame
Relay virtual circuits. The difference is arguable since the end-result is service
improvement and increased service diversity with more flexible, policy-based network
management control, all of which the other QoS protocols also provide.
• At the first hop router in the MPLS network, the router makes a
forwarding decision based on the destination address (or any other
information in the header, as determined by local policy) then
determines the appropriate label value -- which identifies the
Forwarding Equivalence Class (FEC) -- attaches the label to the
packet and forwards it to the next hop.
• At the next hop, the router uses the label value as an index into a table
that specifies the next hop and a new label. The label-switching router
(LSR) attaches the new label, then forwards the packet to the next hop.
The route taken by an MPLS-labeled packet is called the Label Switched Path
(LSP). The idea behind MPLS is that by using a label to determine the next hop,
routers have less work to do and can act more like simple switches. The label
represents the route and by using policy to assign the label, network managers have
more control for more precise traffic engineering.
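The per-hop work of a label-switching router can be sketched as a single table lookup. The labels and next-hop names in the table below are invented for illustration.

    # Illustrative label-swap table for one LSR: incoming label -> (next hop, outgoing label).
    LABEL_TABLE = {
        17: ("lsr-b", 42),
        42: ("lsr-c", 99),
    }

    def switch(incoming_label):
        next_hop, outgoing_label = LABEL_TABLE[incoming_label]   # one exact-match lookup
        return next_hop, outgoing_label                          # swap the label and forward

    print(switch(17))    # ('lsr-b', 42)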
[Figure 4: the 32-bit MPLS label stack entry. The first 20 bits carry the label value used by the LSR to look up the next hop, the operation to perform, or the outgoing data-link encapsulation; the next 3 bits are reserved for experimental use; 1 bit is the "bottom of label stack" flag; the final 8 bits carry the TTL, decremented by each LSR.]
Label processing is actually a bit more involved than described above, since labels can
be “stacked” (to allow MPLS “routes within routes”), and labeled packets have a time-
to-live value (TTL), as shown in Figure 4. The TTL works essentially the same way
TTL in an IP header works: each router hop decrements the value by one until it hits
zero. The difference is that when an MPLS TTL reaches zero, the action is label
dependent (so unlike with IP, the datagram may not be discarded and an ICMP “TTL
Exceeded” message may not be generated). Nonetheless, label processing is the
relatively simple aspect of MPLS.
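The 32-bit label entry of Figure 4 can be unpacked with a few shifts and masks. The sketch below handles a single label entry only and ignores label stacking.

    def parse_mpls_entry(entry):
        """Split one 32-bit MPLS label stack entry into its four fields."""
        label = (entry >> 12) & 0xFFFFF    # 20-bit label value
        exp   = (entry >> 9) & 0x7         # 3 bits reserved for experimental use
        s     = (entry >> 8) & 0x1         # 1-bit "bottom of label stack" flag
        ttl   = entry & 0xFF               # 8-bit TTL, decremented by each LSR
        return label, exp, s, ttl

    print(parse_mpls_entry(0x0001F1FE))    # (31, 0, 1, 254)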
A more complex aspect of MPLS involves the distribution and management of labels
among MPLS routers, to ensure they agree on the meaning of various labels. The
Label Distribution Protocol (LDP) [MPLS LDP] is specifically designed for this
purpose, but it is not the only possibility. There are proposals to use RSVP [MPLS
LSPS], BGP [MPLS BGP], and PIM [MPLS PIM] possibly “piggy-backing” label
management information, so the use of more than one protocol for label distribution is
expected.
Although infrastructure details such as label distribution are important to mention, for
most network managers they will be transparent. More relevant to MPLS for most
network managers is the policy management that determines which labels to use
where, and not how the labels are actually distributed.
QoS assurances are only as good as their weakest link. The QoS “chain” is end-to-end
between sender and receiver, which means every router along the route must have
support for the QoS technology in use, as we have described with the previous QoS
protocols. The QoS “chain” from top-to-bottom is also an important consideration,
however, in two aspects:
• Sender and receiver hosts must enable QoS so applications can enable
it explicitly or the system can enable it implicitly on behalf of the
applications. Each OSI layer from the application down must also
support QoS to assure that high-priority send and receive requests
receive high priority treatment from the host’s network system.
The IEEE 802.1p, 802.1Q and 802.1D standards define how Ethernet switches can
classify frames in order to expedite delivery of time-critical traffic. The Internet
Engineering Task Force [IETF] Integrated Services over Specific Link Layers
[ISSLL] Working Group is chartered to define the mapping between upper-layer QoS
protocols and services and those of Layer 2 technologies, such as Ethernet. Among other
things, this has resulted in the development of the “Subnet Bandwidth Manager”
(SBM) for shared or switched 802 LANs such as Ethernet (also FDDI, Token Ring,
etc.). SBM is a signaling protocol [SBM] that allows communication and
coordination between network nodes and switches within the SBM framework [SBM
Framework] and enables mapping to higher-layer QoS protocols [SBM Mapping].
A fundamental requirement in the SBM framework is that all traffic must pass through
at least one SBM-enabled switch. As shown in Figure 5, aside from the QoS-enabled
application (QApp) and Layer 2 (e.g., Ethernet), the primary (logical) components of
the SBM system are the Requestor Module (RM) and the Bandwidth Allocator (BA).
One BA acts as the Designated SBM (DSBM) for each managed segment (there can be
more than one segment per subnet). The DSBM may be statically configured or
“elected” among the other BAs [SBM].
[Figure 5 shows the QoS application (QApp), Requestor Module (RM) and Bandwidth Allocator (BA) in a centralized BA architecture.]
1 DSBM Client (any RSVP-capable host or router) looks for the DSBM
on the segment attached to each interface (done by monitoring the
“AllSBMAddress,” the reserved IP Multicast address 224.0.0.17).
4 When sending an RSVP RESV message, a host sends it to the first hop
(as always), which would be the DSBM(s) in this case (taken from the
PATH message).
This sketch looks very much like standard RSVP processing in a router; however, we
have omitted some significant details for the sake of simplicity. We will not attempt more
detail here, but we do want to mention the TCLASS object, which either a sender or any
DSBM can add to an RSVP PATH or RESV message. It contains a preferred 802.1p priority
setting and allows overriding a default setting, although any DSBM may change the
value after receiving it. Routers must save the TCLASS in the PATH or RESV state,
and remove it from the message to avoid forwarding it on the outgoing interface, but
then they must put it back into incoming messages.
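Step 1 of the DSBM client procedure above, monitoring the reserved AllSBMAddress group, can be sketched with ordinary multicast socket calls. The UDP port below is an arbitrary placeholder, and a real client would of course parse RSVP/SBM messages rather than raw datagrams.

    import socket
    import struct

    ALL_SBM_ADDRESS = "224.0.0.17"     # reserved AllSBMAddress multicast group
    LISTEN_PORT = 50000                # arbitrary placeholder port for this illustration

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", LISTEN_PORT))

    # Join the AllSBMAddress group so that DSBM traffic on the attached segment is received.
    membership = struct.pack("4s4s", socket.inet_aton(ALL_SBM_ADDRESS), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    data, sender = sock.recvfrom(2048)  # blocks until something arrives on the group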
IEEE 802.1p uses a 3-bit value (part of the 802.1Q header) that can represent eight
priority levels. The values are changeable and the specified bounds are only targets;
the default service-to-value mappings are defined in [SBM Mapping].
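As an illustration, the table below follows the traffic-type designations commonly associated with the eight 802.1p user-priority values; the authoritative mapping of the Integrated Services onto these values is the one defined in [SBM Mapping].

    # Illustrative 802.1p user-priority assignments, following the traffic-type
    # designations commonly cited for IEEE 802.1D; see [SBM Mapping] for the
    # authoritative Integrated Services mapping.
    USER_PRIORITY = {
        "background":       1,
        "best_effort":      0,   # the default
        "excellent_effort": 3,
        "controlled_load":  4,
        "video":            5,   # delay-sensitive traffic
        "voice":            6,   # highly delay-sensitive traffic
        "network_control":  7,
    }

    def tag(traffic_type):
        # Unknown traffic types fall back to best effort.
        return USER_PRIORITY.get(traffic_type, 0)

    print(tag("voice"))   # 6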
As with DiffServ, the simplicity of prioritization values belies the complexity that is
possible. As we describe next in the QoS Architectures section, the flexibility that
mapping provides allows for a wide variety of possibilities capable of supporting a
wide range of QoS assurances and granularity.
QoS architectures
With the exception of the RSVP mapping we used to illustrate the 802 SBM, the
examples in our descriptions of the QoS protocols (RSVP, DiffServ and MPLS) have all
shown each protocol used independently, end-to-end between sender and receiver.
In real-world use, it is unlikely that these QoS protocols will be
used independently, and in fact they are designed for use with other QoS technologies
to provide top-to-bottom and end-to-end QoS between senders and receivers.
[Figure 6 shows Host A and Host B, each with an application, presentation, session, transport and network layer. A QoS-enabled application uses a QoS API; RSVP operates around the transport layer and DiffServ at the network layer, providing top-to-bottom QoS within each host and end-to-end QoS between the hosts.]
Figure 6: "End-to-end" and "top-to-bottom" in the real world means enjoying heterogeneity,
and that includes QoS technologies, which were made to complement each other end-to-end.
Most of the specifications for “gluing” these QoS pieces together are not standardized
as yet, but work is well underway to define the various architectures that are
possible— and necessary— to provide ubiquitous end-to-end QoS. In this section we
describe a number of these architectures, highlight the issues and describe how they
address them. Figure 6 provides a high-level view of how the pieces fit together, and
Figure 7 provides another more detailed view of much the same idea. We reference
both of these illustrations as we describe how the various protocols work together in
concert to provide end-to-end and top-to-bottom QoS.
RSVP provisions resources for network traffic, whereas DiffServ simply marks and
prioritizes traffic. RSVP is more complex and demanding than DiffServ in terms of
router requirements, so can negatively impact backbone routers. This is why the “best
common practice” says to limit RSVP’s use on the backbone [RSVP Applicability],
and why DiffServ can exist there.
The architecture represented in Figure 7— RSVP at the “edges” of the network, and
DiffServ in the “core”— has momentum and support. Work within the IETF DiffServ
work group is progressing quickly, although initial tests have shown mixed results.
[Figure 7 shows Host A and Host B connected through RSVP-enabled QoS at the network "edges" and DiffServ "signalled" QoS in the network "core." An ingress point may map a DiffServ codepoint or an RSVP flow to a specific route by "marking" packets with an MPLS header; MPLS may use RSVP to provision bandwidth for its "tunnel"; the egress point removes the MPLS prefix. The result is end-to-end QoS.]
Figure 7: Illustrates the possible use of different QoS technologies under development (RSVP, DiffServ, MPLS,
COPS and Bandwidth Brokers) working cooperatively in various strategies to enable end-to-end QoS.
Neither DiffServ nor MPLS provides mechanics for detecting how much bandwidth a
flow or aggregate needs and then allocating the necessary resources for dedicated
usage. Only RSVP is designed to do that.
Hence, although RSVP was originally designed to allocate bandwidth for individual
application flows, it is very important for allocating bandwidth to accommodate the
needs of traffic aggregates as well [RSVP MPLS]. This need highlights the challenge,
however, for network engineers using DiffServ or MPLS to know the bandwidth
demands to anticipate, so they can make the appropriate resource reservation request.
Additionally, senders and receivers at both ends of the virtual pipes must make these
reservation requests so the appropriate PATH and RESV messages can be sent from
and to the appropriate unicast locations.
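As a trivial illustration of this sizing problem, an edge device could sum the token-bucket rates of the flows it has admitted to decide how large an aggregate reservation to request for the shared pipe. The rates and the provisioning margin below are invented.

    # Invented per-flow rates (bytes per second) for flows admitted at one edge.
    edge_flow_rates = [8_000, 32_000, 187_500]

    PROVISIONING_MARGIN = 1.2          # illustrative head-room factor
    aggregate_rate = int(sum(edge_flow_rates) * PROVISIONING_MARGIN)

    print(aggregate_rate)              # rate to request for the edge-to-edge aggregate reservation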
“A key problem in the design of RSVP version 1 is, as noted in its applicability
statement, that it lacks facilities for aggregation of individual reserved sessions into a
common class. The use of such aggregation is required for scalability” [RSVP
Aggregation]. So in addition to using RSVP to provision for QoS aggregates,
another consideration is using RSVP to provision for aggregates of RSVP reservations themselves.
In either case, the effect is a significant simplification of RSVP support on the MPLS
routers. By referencing MPLS labels, LSRs need not manage RSVP state [MPLS
Architecture].
Although allowances for multicast have always been made in the
initial designs of QoS protocols, full support of QoS for multicast is still not
standardized or fully understood yet. There are a number of issues involved with
multicast support that we describe here, as we summarize the current state of support
of QoS for multicast for each of the QoS protocols we’ve focused on in this paper.
Another aspect of the Integrated Services design relevant to multicast in general and
heterogeneous receivers specifically is the ability to set filter specifications. By
allowing this, hierarchical data may be possible. Hierarchically encoded data streams
are designed so that when less bandwidth is available, receivers can still get a usable
signal, though with lower fidelity. Filter specifications could reserve bandwidth for the
portion of the stream a lower-bandwidth receiver is capable of receiving.
The great challenge that RSVP presents and which is not yet fully understood deals
with ordering and merging reservations [IntServ Service Spec]. As yet no standards
are published, but there is at least one simulation reference [RSVP Multicast] and an
examination of some of the problems possible with multicast reservation mergers
[RSVP Killers].
Policy-enabled QoS
QoS provides differentiation of traffic and the services provided to that traffic. This
means that some traffic gets improved service and (inevitably) other traffic gets
degraded service. Naturally, everyone would want the improved service for most of
their traffic, but everyone can’t have it (or at least not for free). Thus, QoS has a need
for policy (the decision about which flows are entitled to which service) and policy
creates a need for user authentication (to verify user identification).
Among the QoS protocols, only RSVP has explicit provisions for policy support,
which we describe next. With the other QoS protocols, policy is applied at network
border locations, which may be located at a layer transition in a TCP/IP stack
implementation (for example, as a layer 3 IP packet is passed to a layer 2 network
driver), based on identifiable characteristics of the packet. We describe these border
locations and their use of policies to define varying services in other QoS Forum
papers on Policy.
Policy objects contain an option list and policy element list. The options are either a
FILTER_SPEC object to preserve the original flow/policy association or a SCOPE
object to prevent “policy loops”. The policy elements are opaque and understood only
by the RSVP routers that use them; the Internet Assigned Numbers Authority (IANA)
will maintain a registry of policy element values and their meaning.
Conclusion
Until now, IP has provided a “best-effort” service in which network resources are
shared equitably. Adding quality of service (QoS) support to the Internet raises
significant concerns, since it enables differentiated services that represent a significant
departure from the fundamental and simple design principles that made the Internet a
success. Nonetheless, there is a significant need for IP QoS, and protocols have
evolved to address this need.
These varied protocols, mechanisms and services are all designed to work
together. By mixing and matching their capabilities in a variety of possible
architectures, the goal of end-to-end and top-to-bottom QoS-enabled communications
is getting closer to reality every day. The standards are not fully developed yet, and
there are still some important considerations such as multicast support that require
further attention, but deployment is already underway on many IP networks.
References
[DiffServ AF] J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, “Assured Forwarding PHB Group”,
RFC 2597, June 1999
[DiffServ Arch] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, “An Architecture for
Differentiated Services”, RFC 2475, December 1998
[DiffServ EF] V. Jacobson, K. Nichols, K. Poduri, “An Expedited Forwarding PHB”, RFC 2598, June
1999
[DiffServ Field] K. Nichols, S. Blake, F. Baker, D. Black, “Definition of the Differentiated Services Field
(DS Field) in the IPv4 and IPv6 Headers”, RFC 2474, December 1998
[e2e] J. Saltzer, D. Reed, D. Clark, End to End Arguments in System Design, ACM
Transactions in Computer Systems, November 1984. See
http://www.reed.com/Papers/EndtoEnd.html
[IEEE] The Institute of Electrical and Electronics Engineers (IEEE) is a formal body in which
network technologies such as Ethernet (among many other things) are specified and
standardized.
For information on the 802 LAN standards such as 802.1p, 802.1Q and 802.1D, see
http://standards.ieee.org/catalog/IEEE802.1.html
[IETF] The Internet Engineering Task Force is a loose confederacy of volunteers from the
network industry and academia that uses “running code and rough consensus” to
establish protocol standards for the Internet. See http://www.ietf.org
[IntServ Service Spec] S. Shenker, J. Wroclawski, “Network Element Service Specification Template”,
RFC 2216, September 1997
[IP] J. Postel, “Internet Protocol – DARPA Internet Program Protocol Specification”, RFC
791, September 1981
[MPLS BGP] Y. Rekhter, E. Rosen, “Carrying Label Information in BGP-4”, February 1999, <draft-
ietf-mpls-bgp4-mpls-02.txt>, Work in Progress
[MPLS LSPS] D-H Gan, T. Li, G. Swallow, L. Berger, V. Srinivasan, D. Awduche, “Extensions to
RSVP for LSP Tunnels”, March 1999, <draft-ietf-mpls-rsvp-lsp-tunnel-02.txt>, Work in
Progress
C-Y. Lee, K. Carlberg, B. Akyol, “Engineering Paths for Multicast Traffic using MPLS”,
June 1999, <draft-leecy-multicast-te-00.txt>, Work in Progress
[MPLS PIM] D. Farinacci, Y. Rekhter, E. Rosen, “Using PIM to Distribute MPLS Labels for Multicast
Routes”, June 1999, <draft-farinnacci-mpls-multicast-00.txt>, Work in Progress
[MPLS RSVP] D. Awduche, D. Gan, T. Li, G. Swallow, V. Srinivasan, “Extensions to RSVP for Traffic
Engineering”, August 1998, <draft-swallow-mpls-rsvp-trafeng-00.txt>, Work in Progress
[Queuing] Len Kleinrock has an extensive bibliography on traffic queuing and buffering at
http://millennium.cs.ucla.edu/LK/Bib/
Sally Floyd has information on queue management and her research on Class-based
Queuing (CBQ) at http://www.aciri.org/floyd/cbq.html and on RED at ../red.html
[RouterReqs] F. Baker, “Requirements for IP Version 4 Routers”, RFC 1812, June 1995
[RSVP Aggregation] F. Baker, “Aggregation of RSVP for IPv4 and IPv6 Reservations”, June 1999,
<draft-baker-rsvp-aggregation-01.txt>, Work in Progress
[RSVP DCLASS] Y. Bernet, “Usage and Format of the DCLASS Object with RSVP Signalling”,
February 1999, <draft-bernet-dclass-00.txt>, Work in Progress
[RSVP IntServ] J. Wroclawski, “The Use of RSVP with IETF Integrated Services”, RFC 2210,
September 1997
[RSVP Policy] S. Herzog, “RSVP Extensions for Policy Control”, April 1999,
<draft-ietf-rap-rsvp-ext-05.txt>, Work in Progress
[TOS] P. Almquist, “Type of Service in the Internet Protocol Suite”, RFC 1349, July 1992