
OpenFlow MPLS and the Open Source Label Switched Router
James Kempf, Scott Whyte, Jonathan Ellithorpe, Peyman Kazemian,
Mart Haitjema, Neda Beheshti, Stephen Stuart, Howard Green
james.kempf@ericsson.com, swhyte@google.com, jdellit@stanford.edu,
peyman.kazemian@stanford.edu, mah5@cec.wustl.edu, neda.beheshti@ericsson.com,
sstuart@google.com, howard.green@ericsson.com

ABSTRACT
Multiprotocol Label Switching (MPLS) [3] is a protocol widely used in commercial operator networks to forward packets by matching link-specific labels in the packet header to outgoing links rather than through standard IP longest prefix matching. However, in existing networks, MPLS is implemented by full IP routers, since the MPLS control plane protocols such as LDP [8] utilize IP routing to set up the label switched paths, even though the MPLS data plane does not require IP routing. OpenFlow 1.0 is an interface for controlling a routing or switching box by inserting flow specifications into the box's flow table [1]. While OpenFlow 1.0 does not support MPLS¹, MPLS label-based forwarding seems conceptually a good match with OpenFlow's flow-based routing paradigm. In this paper we describe the design and implementation of an experimental extension of OpenFlow 1.0 to support MPLS. The extension allows an OpenFlow switch without IP routing capability to forward MPLS on the data plane. We also describe the implementation of a prototype open source MPLS label switched router, based on the NetFPGA hardware platform [4], utilizing OpenFlow MPLS. The prototype is capable of forwarding data plane packets at line speed without IP forwarding, though IP forwarding is still used on the control plane. We provide some performance measurements comparing the prototype to software routers. The measurements indicate that the prototype is an appropriate tool for achieving line speed forwarding in testbeds and other experimental networks where flexibility is a key attribute, as a substitute for software routers.

Categories and Subject Descriptors
C.2.6 [Computer-Communication Networks]: Internetworking—Routers; C.2.2 [Computer-Communication Networks]: Network Protocols; C.2.1 [Computer-Communication Networks]: Network Architecture and Design; C.2.5 [Computer-Communication Networks]: Local and Wide-Area Networks; C.2.6 [Computer-Communication Networks]: Internetworking.

General Terms
Design, Experimentation, Management, Performance.

Keywords
OpenFlow, MPLS, NetFPGA, open source LSR.

¹ The latest version of OpenFlow, OpenFlow 1.1, does contain support for MPLS.

1. INTRODUCTION
The OpenFlow 1.0 control protocol [1] provides a vendor agnostic flow-based routing interface for controlling network forwarding elements. The essence of OpenFlow is the separation of the control plane and data plane in routing and switching gear. In traditional routers and switches, the control and data plane are tightly intertwined, limiting the implementation and deployment options. Networks deployed with traditional routing and switching gear have a distributed control plane, and the control and data plane hardware and software for the routing and switching gear is contained in a single box. The OpenFlow interface simplifies the control plane on network forwarding hardware in which the control and data plane are bundled by providing a standardized interface between the control and data planes, simplifying the interface between the on-box control and data plane software and hardware. Alternatively, the control plane can be deployed on a centralized controller that controls multiple forwarding elements, or it can be deployed on a single forwarding element like traditional routers and switches, but with OpenFlow acting as the control to data plane interface.

In this paper, we describe an extension of OpenFlow to incorporate MPLS and its use in implementing an open source MPLS label switched router (LSR). As far as we are aware, this is the first implementation of MPLS in OpenFlow 1.0. After a brief review of the OpenFlow 1.0 architecture, we describe a modification to the OpenFlow switch data plane model, the virtual port, which supports MPLS encapsulation and decapsulation. We briefly describe the design of OpenFlow MPLS and the hardware implementation in NetFPGA [4]. Extensions to OpenVSwitch [6] and the Stanford user space and Linux kernel space OpenFlow reference software switch were also implemented but are not described here. An initial report of this work appeared at MPLS 2010 [7]. We then describe the control plane for the open source LSR that was constructed using the Linux Quagga MPLS distribution [5]. The MPLS Label Distribution Protocol (LDP) [8] is used as a control plane for distributing label switched paths in a network consisting of standard IP/MPLS routers and a collection of OpenFlow MPLS NetFPGA devices configured as LSRs. The on-box OpenFlow controller programs the NetFPGA hardware using the labels distributed by LDP. Unlike standard IP/MPLS networks, the NetFPGA LSRs only utilize IP forwarding on the control plane, to allow communication between the controller and the LSRs. All data plane forwarding is done with MPLS. We provide performance measurements comparing the open source LSR switching performance with Quagga Linux software MPLS forwarding performance. We then conclude the paper with some remarks about the future potential of OpenFlow MPLS.
2. OpenFlow MPLS Architecture
Since OpenFlow MPLS is built on top of OpenFlow 1.0, we
briefly review the OpenFlow architecture here, and compare it
with previous control/data plane separation work before
describing the OpenFlow MPLS architecture.

2.1 OpenFlow Architecture


In the canonical OpenFlow 1.0 architecture, the control plane in network switching and routing equipment is moved into a separate controller. The controller communicates over a secure channel with the switches through the OpenFlow protocol. Software running on the controller "programs" the switches with flow specifications that control the routes of packets through the network. For routing purposes, the switches only need to run an OpenFlow control plane, considerably simplifying their implementation. An alternative architecture, shown in Fig. 1, utilizes OpenFlow as the interface between the control and data planes in the same box, while the control plane talks standard IP routing protocols with standard network routers and switches. In this architecture, OpenFlow's flow-based routing design simplifies the on-box control plane/data plane interface. The open source LSR is designed according to the latter architecture.

Fig. 1: Single Box OpenFlow Routing Architecture

The switch data plane in OpenFlow is modeled as a flow table in which there are three columns: rules, actions, and counters. The rules column defines the flow. Rules are matched against the headers of incoming packets. If a rule matches, the actions from the action column are applied to the packet and the counters in the counter column are updated. If a packet matches multiple rules, the rule with the highest priority is applied. Each rule consists of elements from a ten-tuple of header fields (see Fig. 2, from [9]) or a wild card ANY. The set of possible actions are: forward as if OpenFlow were not present (usually utilizing the Ethernet spanning tree route), forward to the control plane, forward out a specific port, and modify various header fields (e.g. rewrite MAC address, etc.).

Fig. 2: Ten-tuple for Rule Matching
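To make the flow table model concrete, the following minimal Python sketch illustrates priority-based rule matching with ANY wildcards and per-rule counters. The class name, abbreviated field names, and action strings are ours for illustration; this is not code from the OpenFlow reference implementation.

```python
# Minimal sketch of the OpenFlow 1.0 flow table model: each rule is a
# pattern over the ten-tuple (fields may be the wildcard ANY), with a
# priority, an action list, and counters. Illustrative names only.

ANY = object()  # wildcard: matches any value for a field

FIELDS = ("in_port", "eth_src", "eth_dst", "eth_type", "vlan_id",
          "ip_src", "ip_dst", "ip_proto", "tp_src", "tp_dst")

class FlowEntry:
    def __init__(self, pattern, actions, priority=0):
        self.pattern = pattern    # rules column: field name -> value or ANY
        self.actions = actions    # actions column, e.g. ["output:2"]
        self.priority = priority
        self.packets = 0          # counters column
        self.bytes = 0

    def matches(self, pkt):
        return all(self.pattern.get(f, ANY) is ANY or self.pattern[f] == pkt[f]
                   for f in FIELDS)

def apply_table(table, pkt, length):
    """Apply the highest-priority matching rule and update its counters."""
    best = max((e for e in table if e.matches(pkt)),
               key=lambda e: e.priority, default=None)
    if best is None:
        return ["controller"]     # no match: forward to the control plane
    best.packets += 1
    best.bytes += length
    return best.actions

# Example: match MPLS unicast frames (Ethertype 0x8847) and output on port 1.
table = [FlowEntry({"eth_type": 0x8847}, ["output:1"], priority=5)]
pkt = dict.fromkeys(FIELDS)
pkt["eth_type"] = 0x8847
print(apply_table(table, pkt, 64))   # -> ['output:1']
```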
2.2 Previous Work
Much previous work exists in the area of control/data plane separation for routing and switching. Most of the work is specifically directed at implementing the control plane on a separate box from the data plane.

A standard for the separation of the control and data plane in circuit-switched networks is defined by RFC 3292 in the Generalized Switch Management Protocol (GSMP) [2]. GSMP models the network elements as cross-bar circuit switches. Particularly in optical circuit-switched networks, the switches often have physically separate control networks, since it is often not possible to intercept the control packets from the optical lambdas, so separation of control and data plane becomes a necessity; GSMP provides a standardized protocol for those systems. Many vendors also have proprietary control protocols.

The FORCES protocol [10] provides a standard framework for controlling data plane elements from a separate controller, but unlike OpenFlow, FORCES does not define a protocol for controlling a specific forwarding element. The FORCES forwarding element model [11] is quite general. OpenFlow, in contrast, defines a specific forwarding element model and protocol involving a flow table and ports. FORCES requires each logical forwarding block in the forwarding element to define its own control protocol within the FORCES framework.

The OpenFlow architecture is perhaps closest to the Clean Slate 4D architecture defined by Greenberg et al. [12]. The 4D architecture re-factors the network control, data, and management planes into 4 new planes (the "D"s in the name): the decision plane, the dissemination plane, the discovery plane, and the data plane. The data plane is much as before, the decision plane is the OpenFlow controller software, for example NOX [17], and the dissemination plane is provided by the OpenFlow protocol. The OpenFlow architecture defines no special support for discovery. In deployed OpenFlow systems, the controller provides this function through legacy protocols such as LLDP [18]. The Tesseract system [13] implements the 4D architecture.

GMPLS [19] provides another architectural approach to control/data plane separation by extending MPLS to networks including circuit switches. GMPLS utilizes extensions of intradomain routing protocols to perform topology discovery, and RSVP and LMP to establish label-switched paths between network elements. A network element under GMPLS control can either also perform forwarding, in which case GMPLS acts as the control plane for a standard switch or a router, or the network
element can control separate forwarding elements through a
different forwarding element control protocol. If the latter, a
separate switch control protocol, such as GSMP, controls the
switches. GMPLS is restricted to transport networks; it does not
provide support for IP routing even though it uses IP intradomain
routing protocols for connectivity discovery.
There has also been work on control/data plane interfaces for
conventional router implementations when the control and data
plane are implemented on the same box. The Click modular router
toolkit [14] defines interfaces between data plane components that
allow modular data planes to be built, but Click does not specify
any control plane interface. The control plane interface is hidden
behind the individual Click elements. The Xorp router platform
[15] defines a composable framework of router control plane
processes, each of which is itself composed of modular processing
stages. Xorp defines an interface to the data plane through the
Forwarding Engine Abstraction (FEA). The FEA interface allows
different types of data plane implementations, for example Click
or the NetBSD data plane, to be coupled to the control plane
without having to change the entire control plane software base.
The Conman architecture [16] defines a modular data plane and
control plane architecture with a well defined pipe interface
between the two. Our work differs from prior work in this area in
that we have taken an interface that was defined for simplifying
and centralizing the control plane and instead implemented it as
the control/data plane interface on a single box, providing a more
flexible deployment model for cases where a centralized control
plane is impractical.

2.3 OpenFlow MPLS


2.3.1 OpenFlow MPLS Design
MPLS forms flow aggregations by modifying the packet header to
include a label. The label identifies the packet as a member of a
forwarding equivalence class (FEC). A FEC is an aggregated
group of flows that all receive the same forwarding treatment.
A data plane MPLS node implements three header modification operations:
• Push: Push a new label onto the MPLS label stack, or, if there is no stack currently, insert a label to form a new stack,
• Pop: Pop the top label off the MPLS label stack,
• Swap: Swap the top label on the stack for a new label.

The MPLS label stack is inserted between the IP and MAC (Layer 3 and Layer 2) headers in the packet. MPLS label stack entries consist of 32 bits, 20 of which form the actual label used in forwarding. The other bits indicate QoS treatment, top of stack, and time to live.

The first modification required to OpenFlow is to increase the size of the tuple used for flow identification. In principle, the size of the MPLS label stack has no upper bound, but as a practical matter, most carrier transport networks use a maximum of two labels: one label defining a service (such as VPN) and one label defining a transport tunnel. We therefore decided to extend the header tuple used for flow matching from 10 fields to 12. Only the actual 20 bit forwarding label is matched; the other bits are not included. Fig. 3 shows the 12 tuple.

Fig. 3: OpenFlow Twelve-tuple for MPLS rules
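For concreteness, the standard 32-bit label stack entry layout just described (RFC 3032) can be packed and unpacked as in this short Python sketch; the function names are ours.

```python
# The 32-bit MPLS label stack entry described above: a 20-bit label,
# 3 QoS (Exp) bits, a bottom-of-stack flag, and an 8-bit TTL (RFC 3032).

def encode_lse(label, qos=0, bottom=False, ttl=64):
    assert 0 <= label < (1 << 20) and 0 <= qos < 8 and 0 <= ttl < 256
    return (label << 12) | (qos << 9) | (int(bottom) << 8) | ttl

def decode_lse(entry):
    return {
        "label": entry >> 12,              # the 20 bits OpenFlow MPLS matches on
        "qos": (entry >> 9) & 0x7,         # QoS treatment bits
        "bottom": bool((entry >> 8) & 1),  # top/bottom of stack indicator
        "ttl": entry & 0xFF,               # time to live
    }

assert decode_lse(encode_lse(0x12345, qos=5, bottom=True))["label"] == 0x12345
```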
The next required modification was the addition of the MPLS header modification actions (push, pop, and swap) to the action set executed when a rule matches. With the exception of limited field rewriting, OpenFlow 1.0 actions perform simple forwarding. The MPLS push and pop actions, in contrast, rewrite the header by inserting fields into the header. Rather than inserting the MPLS protocol actions into the basic OpenFlow packet processing pipeline, we chose instead to isolate them using an abstraction called a virtual port. A virtual port is an abstraction mechanism that handles complex protocol specific actions requiring header manipulation, thereby hiding the complexity of the implementation. This allows yet more complex header manipulations to be implemented by composing them out of simpler virtual port building blocks.

Virtual ports can be hierarchically stacked to form processing chains on either input or output. On output, virtual ports can be included in flow table actions just like physical ports. Virtual ports are grouped together with physical ports into a virtual port table. Fig. 4 illustrates the virtual port table, together with a table row. Each virtual port table row contains entries for the port number, the parent port, the actions to be performed by the virtual port, and statistics.

Fig. 4: Virtual Port Table and Virtual Port Table Entry

The MPLS actions in the virtual port table consist of the following:
• push_mpls: Push a 32 bit label on the top of the MPLS label stack, and copy the TTL and QoS bits from the IP header or previous MPLS label,
• pop_mpls: Pop the top label on the MPLS stack, and copy the TTL and QoS bits to the IP header or previous MPLS label,
• swap_mpls: Swap the 20 bit forwarding label on top of the MPLS stack,
• decrement_ttl: Decrement the TTL and drop the packet if it has expired,
• copy_bits: Copy the TTL and QoS bits to/from the IP header or previous MPLS label.

We also added a counter to the OpenFlow statistics that is incremented every time a virtual port drops a packet due to the expiration of the TTL.

The OpenFlow protocol was extended with the following messages to allow the controller to program label switched paths (LSPs) into the switches:
• vport_mod: Add or remove a virtual port number. Parameters are the parent port number, the virtual port number, and an array of virtual port actions,
• vport_table_stats: Return statistics for the virtual port table. The statistics include the maximum virtual ports supported by the switch, the number of virtual ports in use, and the lookup count, port match count, and chain match count,
• port_stats: The OpenFlow port_stats message applies to virtual ports as well, but only the tx_bytes and tx_packets fields are used.

Finally, the OpenFlow switch_features_reply message was modified to include a bit indicating whether the switch supports virtual ports.
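The following Python sketch illustrates the semantics of the virtual port actions listed above on a packet modeled as a label stack plus IP TTL/ToS. The Packet model, function bodies, and the ToS-to-QoS mapping are our illustration of the described behavior, not the switch code itself.

```python
# Illustrative semantics of the virtual port MPLS actions listed above.
# A packet is modeled as a list of label stack entries plus IP TTL/ToS.

class Packet:
    def __init__(self, ip_ttl=64, ip_tos=0):
        self.stack = []   # top of stack first: {"label", "qos", "ttl"}
        self.ip_ttl, self.ip_tos = ip_ttl, ip_tos

def push_mpls(pkt, label):
    # Copy TTL/QoS from the previous top label, or from the IP header
    # when this push creates the stack.
    if pkt.stack:
        ttl, qos = pkt.stack[0]["ttl"], pkt.stack[0]["qos"]
    else:
        ttl, qos = pkt.ip_ttl, pkt.ip_tos >> 5  # assumed precedence mapping
    pkt.stack.insert(0, {"label": label, "qos": qos, "ttl": ttl})

def pop_mpls(pkt):
    # Copy TTL/QoS from the popped label to the previous label, or to
    # the IP header when the popped label was the only one.
    top = pkt.stack.pop(0)
    if pkt.stack:
        pkt.stack[0]["ttl"], pkt.stack[0]["qos"] = top["ttl"], top["qos"]
    else:
        pkt.ip_ttl = top["ttl"]

def swap_mpls(pkt, label):
    pkt.stack[0]["label"] = label  # only the 20-bit forwarding label changes

def decrement_ttl(pkt):
    # Returns False when the packet must be dropped (and the TTL-expiry
    # drop counter incremented, per the statistics extension above).
    pkt.stack[0]["ttl"] -= 1
    return pkt.stack[0]["ttl"] > 0
```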

2.3.2 NetFPGA Implementation


NetFPGA is a PCI card that contains a Virtex-2 Xilinx FPGA, 4
Gigabit Ethernet ports, SRAM and DDR2 DRAM [4]. The board
allows researchers and students to build working prototypes of
line speed network hardware.
The MPLS implementation extends the OpenFlow 10 tuple with two additional fields for MPLS labels, and adds virtual port functionality to support MPLS-specific actions. Fig. 5 shows the functional block diagram of the NetFPGA design. Our implementation of OpenFlow MPLS in NetFPGA is based on the OpenFlow v0.89 reference implementation for NetFPGA. OpenFlow v0.89 differs slightly from OpenFlow 1.0 in that OpenFlow 1.0 supports VLAN type and IP ToS headers whereas v0.89 doesn't. We used v0.89 because it was available at the time the work was done; since these features aren't necessary for the open source LSR and would have taken up valuable FPGA memory (only 5% of the NetFPGA Virtex-2 FPGA remained empty after implementing OpenFlow MPLS), we decided not to update.

Fig. 5: OpenFlow-MPLS on NetFPGA Block Diagram

As packets arrive, a lookup key is created by concatenating the 12 fields together. The lookup key is used in parallel by two lookup engines, one performing exact match using two CRC hash functions and the other doing wildcard match using a TCAM. Each of the exact and wildcard tables has 32 entries. The result of these lookup operations is fed into a match arbiter that always prefers an exact match to a wildcard match. The OpenFlow actions associated with the match are then performed. If the action involves a port, the port number is checked to see if the number matches a virtual port. If it does, the virtual port header manipulation actions are performed.

In the OpenFlow MPLS implementation, virtual ports implement the MPLS actions: push a new label, pop the top of the stack label, decrement the TTL, copy the TTL, and copy the QoS bits. As an optimization, the swap operation is handled by an OpenFlow rewrite action instead of in the virtual port. If the copy_bits action is performed during a push operation, it copies the TTL/QoS bits from the previous MPLS label, and if it is done as part of a pop operation, the TTL/QoS bits of the current label are copied to the previous label. If only one MPLS label exists, the IP TTL or IP ToS is the source or target instead. The decrement_ttl action decrements the TTL value for the top of the stack label and drops the packet when the TTL value hits zero. To decrement the MPLS TTL without any push/pop operation, or as part of a swap action, the packet is forwarded to a pop virtual port with the pop and copy TTL/QoS functionality disabled.

Virtual ports can be concatenated together for up to two layers to perform two push or two pop operations in one NetFPGA card. The last virtual port in the chain forwards the packet to a physical port on output, or the first virtual port accepts a packet from a physical port on input.

The last 8 positions in the wildcard table are always filled by default entries to handle forwarding unmatched packets to the control plane. For each of the 4 NetFPGA ports, there is one entry at the bottom of the wildcard table that has everything except the incoming port wildcarded. If a packet doesn't match any other entry in the table, it will at least match that default entry and is forwarded to the DMA port corresponding to its input port. The packet is then received by the OpenFlow kernel module running on the host machine and is forwarded to the control plane. Similarly, packets coming from the control plane are sent out on a DMA port by the OpenFlow kernel module and are received by the NetFPGA. There are 4 default rules in the wildcard table that match on the packets coming from each of the 4 DMA ports and forward them to the corresponding output port.
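A behavioral software analogue of this lookup path, sketched below in Python, may help; a dict stands in for the two-CRC exact-match engine and a list scan for the TCAM, and the field ordering and action strings are our assumptions.

```python
# Behavioral sketch of the NetFPGA lookup path described above: the
# 12-field key is checked by an exact-match engine and a wildcard
# engine, and a match arbiter always prefers the exact match.

exact_table = {}      # 12-field tuple -> actions (32 entries in hardware)
wildcard_table = []   # ordered (pattern, actions); None in a pattern = wildcard

def wildcard_match(pattern, fields):
    return all(p is None or p == f for p, f in zip(pattern, fields))

def lookup(fields):
    exact = exact_table.get(fields)
    wild = next((acts for pat, acts in wildcard_table
                 if wildcard_match(pat, fields)), None)
    return exact if exact is not None else wild  # match arbiter

# Default entries at the bottom of the wildcard table: everything except
# the input port (field 0 here, by assumption) wildcarded, so unmatched
# packets from port i go to the corresponding DMA port and on to the
# control plane.
for port in range(4):
    wildcard_table.append(((port,) + (None,) * 11, ["dma:%d" % port]))
```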
3. Open Source Label Switched Router

3.1 OpenFlow-MPLS LDP Control Plane
As a demonstration of OpenFlow MPLS, we built a low-cost label switched router (LSR) consisting of a NetFPGA board running OpenFlow MPLS in a PC running the Linux OpenFlow driver, and the Quagga open source routing stack, including Quagga LDP and MPLS Linux [22]. In this model, the OpenFlow controller runs on the same box as the NetFPGA LSR and acts as the control plane only for the NetFPGA on the box, as is the case in standard routers and switches, in contrast to the canonical OpenFlow model discussed in Section 2.1. Fig. 6 contains a block diagram of the open source LSR. The entire bill of materials for the open source LSR was around $2000.

LDP, the Label Distribution Protocol [8], connects the open source LSR with other forwarding elements in the network. LDP allows two devices to form an adjacency and establish label bindings for label switched paths between them. An LDP neighbor sends an LDP packet to the open source LSR in-band on one of its connected interfaces. The open source LSR identifies the packet as part of an LDP flow and forwards it to the control plane on the box, where it is sent to the Quagga LDP daemon. As is the case for IP-MPLS routers, the open source LSR exchanges OSPF route information with external routers so that MPLS paths can be established along known IP routes.

Fig. 6: Open Source LSR Block Diagram

The LDP daemon builds and maintains a normal LDP adjacency. Once LDP has formed an adjacency and completed a label binding, it updates the kernel MPLS LFIB with the corresponding label information. The LSP Synchronizer is a user level daemon that polls the MPLS LFIB in the kernel periodically for changes; when it detects a change, it pushes an OpenFlow flow modification into the NetFPGA, enabling data plane packets received with those labels to be forwarded correctly in hardware.
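A sketch of the synchronizer's poll-and-diff loop is shown below. read_lfib and send_flow_mod are hypothetical stand-ins for reading the MPLS Linux kernel LFIB and emitting the OpenFlow MPLS messages of Section 2.3; the real daemon's interfaces may differ.

```python
# Sketch of the LSP Synchronizer loop: periodically snapshot the kernel
# LFIB and push the difference into the NetFPGA as flow modifications.
# read_lfib and send_flow_mod are hypothetical stand-ins.

import time

def sync_loop(read_lfib, send_flow_mod, interval=1.0):
    known = {}  # in_label -> (operation, out_label, out_port)
    while True:
        current = read_lfib()
        for in_label, entry in current.items():
            if known.get(in_label) != entry:
                # New or changed binding: program the hardware so packets
                # arriving with in_label are forwarded on the data plane.
                send_flow_mod("add", in_label, entry)
        for in_label in set(known) - set(current):
            send_flow_mod("delete", in_label, known[in_label])
        known = current
        time.sleep(interval)    # periodic poll, as described above
```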


3.2 Performance Measurements and Interoperability Verification
The open source LSR is primarily a tool for prototyping new ideas in networking, in that it offers line speed performance with the flexibility to change control plane and data plane software. Consequently, we performed a couple of simple performance tests against MPLS Linux to demonstrate the performance advantage. The tests measured bidirectional throughput for packets of 68 bytes or 1504 bytes in length on a single port. The results are shown in Fig. 7. As should be clear from the figure, there is a two orders of magnitude difference in forwarding performance between the NetFPGA and MPLS Linux. In addition, the performance of the NetFPGA LSR was constant regardless of packet size, whereas the performance of MPLS Linux decreased for smaller packets.

Fig. 7: Comparative Forwarding Performance (bidirectional throughput in Megabits/Second for 68 byte and 1504 byte packets)

The NetFPGA was able to maintain line speed performance up to 3 ports, but scaled down to 6 Gigabits/second at 4 ports. This limitation had nothing to do with the MPLS implementation; other NetFPGA applications exhibit the same performance profile. Note that many carefully coded and highly optimized software routers are able to achieve much better performance than MPLS Linux exhibited in this study, but our objective here is not to compare the best software router with hardware, but rather to show that the open source LSR provides good, reasonably scalable performance in comparison with a widely available, off the shelf software implementation.

We also performed a test to verify that the open source LSR could be used in a network consisting of standard IP/MPLS routing gear running the standard IP/MPLS LDP/OSPF control plane. The network, shown in Fig. 8, consisted of two standard IP/MPLS routers, the SmartEdge100 [20] and the Juniper M10. All devices were running OSPF and LDP. The open source LSR was able to exchange OSPF and LDP messages with the IP/MPLS routers to set up LSPs through the network.

Fig. 8: Interoperability Test Network

Finally, we set up a test network to verify that it is possible to perform MPLS forwarding within a core network without requiring iBGP. The core test network is shown in Fig. 9. Again, all devices speak OSPF and LDP. Two Juniper M10 routers function as label edge routers (LERs) and speak iBGP. The iBGP packets are routed through the network of open source LSRs. Two hosts serve as sources and destinations of traffic. The LERs push and pop MPLS labels onto/off of the host traffic packets to route through the open source LSR core. Note that while OpenFlow MPLS is designed to allow an OpenFlow MPLS switch to act as an LER too, in this case we wanted to use the M10s to demonstrate how the open source LSR could be used to set up an iBGP free core. The addition of a BGP module to the control plane on the open source LSR would allow it to act as an LER.

Fig. 9: Core Interoperability Test Network

4. Summary and Conclusions
In this paper, we described an extension of the OpenFlow forwarding element control interface and model to include MPLS. The extension involves modifying the OpenFlow flow table entries to include two MPLS labels and defining an extension of the OpenFlow port model to allow definition of virtual ports. The virtual ports implement the MPLS per packet processing operations. An extension to the OpenFlow protocol allows the controller to program the OpenFlow MPLS forwarding element to match and process MPLS flows.

The extension to OpenFlow was implemented on OpenVSwitch, the Stanford reference software switch (both user and kernel modules), and on the NetFPGA hardware. The NetFPGA implementation supports up to 2 virtual ports for MPLS. A demonstration system, the open source LSR, was built using OpenFlow as the control/data plane interface for a NetFPGA running on the same PC as the control plane. This use of OpenFlow is in contrast to the canonical architecture in which a whole network of switches is controlled by one OpenFlow controller.

Some simple performance tests were run comparing the open source LSR to MPLS Linux for prototyping purposes. The tests demonstrated that the open source LSR could significantly improve the performance of forwarding in prototype networks. An interoperability demonstration system was built using the open source LSR and two standard Internet routers capable of MPLS routing. The routers exchange LDP with the open source LSRs and each other to set up LSPs through the network. Finally, a prototype iBGP free core network was set up that performs MPLS forwarding without the need for IP routing or iBGP speakers. The network consisted of two hosts connected up to routers and a collection of open source LSRs. No interoperability problems were found.

Going forward, the success of MPLS in OpenFlow 1.0 has led to the incorporation of MPLS into the next version of OpenFlow, OpenFlow 1.1. Support for MPLS in the canonical OpenFlow centralized control plane model is necessary to utilize MPLS in OpenFlow 1.1. For the open source LSR model, the current NetFPGA 1G only supports a realistic maximum of 32 flows, which is really too few for production use, even in a small campus testbed. The NetFPGA 10G [21] is a much better platform and is the target for future work. In addition, a port to NetBSD is planned, since the MPLS implementation in NetBSD is more stable. Code for the open source LSR is available at http://code.google.com/p/opensource-lsr.

5. ACKNOWLEDGMENTS
The authors would like to thank Andre Khan for his founding
contribution during the initial phases of the design, and Nick
McKeown for his helpful direction during the design of the virtual
port abstraction.

6. REFERENCES
[1] McKeown, N., et al., "OpenFlow: Enabling Innovation in Campus Networks", March, 2008, http://www.openflowswitch.org//documents/openflow-wp-latest.pdf.
[2] Doria, A., Hellstrand, F., Sundell, K., Worster, T., "General Switch Management Protocol (GSMP)", RFC 3292, June 2002.
[3] Rosen, E., Viswanathan, A., and Callon, R., "Multiprotocol Label Switching Architecture", RFC 3031, Internet Engineering Task Force, January, 2001.
[4] http://www.netfpga.org.
[5] http://www.quagga.net.
[6] http://openvswitch.org.
[7] http://www.isocore.com/mpls2010/program/abstracts.htm#wed1_5.
[8] Andersson, L., Minei, I., Thomas, B., "LDP Specification", RFC 5036, Internet Engineering Task Force, October, 2007.
[9] OpenFlow Switch Specification V1.0.0, http://www.openflowswitch.org/documents/openflow-spec-v1.0.0.pdf.
[10] Doria, A., Ed., Salim, J., Ed., Haas, R., Ed., Khosravi, H., Ed., and Wang, W., Ed., "Forwarding and Control Element Separation (ForCES) Protocol Specification", RFC 5810, Internet Engineering Task Force, March 2010.
[11] Halpern, J., and Salim, J., "ForCES Forwarding Element Model", RFC 5812, Internet Engineering Task Force, March 2010.
[12] Greenberg, A., et al., "A Clean Slate 4D Approach to Network Control and Management", Proceedings of ACM SIGCOMM, 2005.
[13] Yan, H., Maltz, D., Ng, E., Gogineni, H., Zhang, H., and
Cai, Z., “Tesseract: A 4D Network Control Plane”,
Proceedings of the 4th USENIX Symposium on Networked
Systems Design & Implementation, pp. 369–382, March,
2007.
[14] Kohler, E., Morris, R., Chen, B., Jannotti, J., and Kaashoek, F., "The Click Modular Router", Operating Systems Review, 34(5), pp. 217-231, December, 1999.
[15] Handley, M., Hodson, O., and Kohler, E., “XORP goals and
architecture", Proceedings of the ACM SIGCOMM Hot
Topics in Networking, 2002.
[16] Ballani, H., and Francis, P., “CONMan: A Step Towards
Network Manageability”, Proceedings of the ACM
SIGCOMM Workshop on Internet Network Management,
September, 2006.
[17] Gude, N., Koponen, T., Pettit, J., Pfaff, B., Casado, M.,
McKeown, N., Shenker, S., “NOX: Towards an Operating
System for Networks”, Computer Communications Review,
July, 2008.
[18] IEEE standard 802.1ab, “802.1ab rev – Station and Media
Access Control Connectivity Discovery”, September, 2009.
[19] Farrel, A. and Bryskin, I., GMPLS: Architecture and
Applications, Morgan Kaufmann Publishers, Amsterdam,
412pp., 2006.
[20] http://www.ericsson.com/ourportfolio/network-
areas/se100?nav=networkareacategory002%7Cfgb_101_504
%7Cfgb_101_647.
[21] http://netfpga.org/foswiki/NetFPGA/TenGig/Netfpga10gInitI
nfoSite.
[22] http://sourceforge.net/apps/mediawiki/mpls-
linux/index.php?title=Main_Page.
