Figure 1-10 All-Hosts Broadcast Packet (Indicated by 255.255.255.255 as the Destination IPv4
Address and ffff.ffff.ffff as the Destination MAC Address)
If a broadcast is directed, it is sent through the network toward the intended subnet and replicated
by any switch on that subnet, as shown in Figure 1-11 .
Figure 1-11 Directed Broadcast Packet (Indicated by the 10.2.2.255 Destination IPv4 Address,
or All Hosts on the 10.2.2.0 Subnet)
The difference between a multicast and a broadcast with hosts on a single subnet is subtle. What if
only a few of the hosts on the subnet need to receive the packets? Using a group address to which
hosts can subscribe would ease the burden of sending packets to the select hosts on that segment,
reduce replication overhead, and use the bandwidth only in the LAN where the hosts are located.
Figure 1-12 illustrates just such a scenario.
Figure 2-2 Layer 2 and Layer 3 Transport Process on the Local Segment
Before the sender can communicate with the default gateway, it must know the media access
control (MAC) address of that device. Because the destination is on a different segment, the sender
will need to discover the MAC address of the default gateway (IP address 10.1.1.1) using an Address
Resolution Protocol (ARP) request. The default gateway responds to the ARP request with its MAC
address. Finally, the sender has enough information to encapsulate the data with the destination IP
address of Host A and the MAC addresses of the default gateway, as shown in Figure 2-2 .
The default gateway or router has Layer 3 IP routing information that determines where Host A is
physically connected. This information determines the appropriate outgoing interface to which the
message should be sent. The router should already know the MAC address of the neighbor router if
there is an established routing protocol adjacency. If not, the same ARP request process is conducted.
With this information, the router can now forward the message. Understand that both Layer 2
addresses (SA and DA) change at each logical hop in the network, but the Layer 3 addresses never
change and are used to perform route lookups.
When the packet is forwarded to the final router, that router must do a lookup and determine the
MAC address of the destination IP. This problem exists in part because of the historical implications
of Ethernet. Ethernet is a physical medium that is attached to a logical bus network. In a traditional
bus network, many devices can be connected to a single wire. If the gateway router does not have an
entry from a previous communication, it will send out an ARP request and finally encapsulate with the
destination MAC address of the host, as shown in Figure 2-3 .
Figure 2-3 Layer 2 and Layer 3 Transport Process on the Destination Router
After the final router properly encapsulates the message, it is the responsibility of the switch to
send the packet to the appropriate host, and only to that host. This is one of the primary functions of a
traditional Layer 2 switch—to discover the location of devices connected to it. It does this by
cataloging the source MAC addresses in frames received from connected devices. In this way, the
switch builds a table of all known MAC addresses and keeps Ethernet networks efficient by making
intelligent Layer 2 forwarding decisions.
This process is easy to understand for the unicast packet shown. Items to consider while you read
this chapter include the following:
What happens if the packet is a multicast packet and many hosts connected to a switch are
subscribed to the destination multicast group?
Can a switch still make efficient forwarding decisions if there are multiple ports that require a
copy of the packet (meaning there are multiple endpoints on multiple segments that need a copy of the
frame)?
If the destination MAC address in a frame is not the physical address of the host, will the host still process the frame even though it does not appear to be the intended recipient?
How do you identify multicast groups at Layer 2?
MAC Address Mapping
A traditional Ethernet switch (Layer 2 device) works with Ethernet frames, and a traditional router
(Layer 3 device) looks at packets to make decisions on how messages will be handled. As discussed
in Chapter 1, when a device sends a broadcast frame, the destination address is all ones, and a unicast message carries the specific MAC address of the destination host. What happens when it is a multicast message? To
optimize network resources, an Ethernet switch also needs to understand multicast. This is where the
magic happens. The sending device must convert the destination IP multicast address into a special
MAC address as follows:
The high-order 25 bits form the official reserved multicast MAC address range from 0100.5E00.0000 to 0100.5E7F.FFFF (RFC 1112). These bits include the organizationally unique identifier (OUI) 01-00-5E.
The lower-order 23 bits of the destination IP multicast address are mapped to the lower-order 23
bits of the MAC address.
The high-order 4 bits of the destination IP multicast address are set to 1110 binary (0b1110). This represents the Class D address range from 224.0.0.0 (first octet 0b11100000) to 239.255.255.255 (first octet 0b11101111).
Of the 48 bits used to represent the multicast MAC address, the high-order 25 bits are reserved
as part of the OUI, and the last 23 bits of the multicast IP address are used as the low-order bits, as
shown in Figure 2-4 .
Figure 2-4 Layer 2 Multicast Address Format
A switch can use this calculated multicast MAC address to distinguish a frame as a multicast and
make efficient forwarding decisions. End hosts can listen for frames with a specific multicast MAC,
allowing them to process only those multicast streams to which they have subscribed. There’s a small
wrinkle in this process, however.
Did you notice a slight challenge with the number of IP addresses and MAC addresses? Five bits of the IP multicast address are not carried into the MAC address because that space is occupied by the fixed OUI prefix. This causes a 32-to-1 IP multicast address-to-multicast MAC address ambiguity (2^5 = 32).
This means that a host subscribing to a multicast stream could potentially receive multiple
multicast streams that it did not subscribe to, and the host will have to discard the unwanted
information. A host subscribing to the multicast stream of 224.64.7.7 would map to MAC address of
0x0100.5E40.0707, and so would 225.64.7.7 and 224.192.7.7. It all boils down to 1s and 0s. Figure
2-5 shows the ambiguity. The “X” in the binary row represents the bits that are overwritten and
shows how 32 multicast IP addresses map to a single multicast MAC address.
Figure 2-5 Layer 2 Multicast MAC Address Overlap
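To make the mapping concrete, here is a worked breakdown of the 224.64.7.7 example from the preceding paragraph; the binary grouping is shown only for illustration:
224.64.7.7 in binary     = 11100000.01000000.00000111.00000111
Low-order 23 bits        =          x1000000.00000111.00000111  ->  40.07.07 hexadecimal
Fixed 25-bit MAC prefix  = 00000001.00000000.01011110.0         ->  01.00.5E plus a leading 0 bit
Resulting multicast MAC  = 0100.5E40.0707
The bit marked x (the high-order bit of the second octet) is one of the five bits that are not carried into the MAC address, which is why 225.64.7.7 and 224.192.7.7 produce exactly the same frame address.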
Switching Multicast Frames
Layer 2 switches send frames to a physical or logical interface based on the destination MAC
address. Multicast MAC addresses are a different animal than unicast MAC addresses, because a
unicast MAC address should be unique and have only a single destination interface. Multicast MAC
frames may have several destination interfaces, depending upon which devices have requested
content from the associated IP multicast stream.
Before the Layer 2 switch can forward multicast frames, it must know the destination interfaces on
which those messages should be sent. The list of destination interfaces includes only those interfaces
connected to a device subscribed to the specific multicast flow. The destination can be added as
static entries that bind a port to a multicast group, or the switch can use a dynamic way of learning
and updating the ports that need to receive the flow.
There are several ways in which a Layer 2 switch can dynamically learn where the destinations are
located. The switch may use Cisco Group Management Protocol (CGMP) or Internet Group
Management Protocol (IGMP) snooping for IPv4 multicast. These methods will be discussed later in
this chapter.
If a Layer 2 switch does not have a mechanism to learn where to send multicast messages, it treats every multicast frame as a broadcast, which is to say it floods the frame out every port in the VLAN! As you can imagine, this is a very bad thing. Many networks have melted down due to large multicast streams. For example, when computer operating system image files are multicast, a tremendous amount of data is sent to every device in the broadcast domain: every computer, router, printer, and so on. The unfortunate side effect is that network performance may suffer in locations on the network that do not need the multicast stream. How could this happen if these messages are treated as broadcasts and will not go beyond the local network? The messages will not be forwarded beyond any local Layer 3 device, but each local Layer 3 device must still process every one of them. While a Layer 3 device is inundated processing these messages, it may not have the cycles available to process other, more important messages, such as routing updates or spanning-tree messages. As you can imagine, or may have already experienced, this can impact or even melt down the entire network.
Group Subscription
You have seen that in order for IP multicast forwarding to work on the local segment and beyond,
switches and gateway routers need to be aware of multicast hosts interested in a specific group and
where those hosts are located. Without this information, the only forwarding option is to flood
multicast datagrams throughout the entire network domain. This would destroy the efficiency gains of
using IP multicast.
Host group membership is a dynamic process. When a host joins a multicast group, there is no
requirement to continue forwarding group packets to the segment indefinitely, nor is group
membership indefinite. The only practical way to alert the network to the location of multicast hosts is to have multicast group members advertise their interest, or membership, to the network. Figure 2-6 depicts a high-level example of this requirement, known as a join.
Figure 4-6 Static RP. The Icons Represent Layer 3 Functionality, Including IOS, IOS-XR, and
NX-OS
The following are step-by-step configuration instructions for enabling PIM sparse-mode with a static RP on IOS, IOS-XR, and NX-OS.
The steps to configure a static RP and enable PIM sparse-mode on IOS are as follows:
Step 1. Enable IP multicast routing.
ip multicast-routing
Step 2. Enable interfaces in the Layer 3 domain, including the loopback with the ip pim sparse-
mode command:
interface Loopback0
ip address 192.168.0.1 255.255.255.255
ip pim sparse-mode
!
interface Ethernet0/0
ip address 192.168.12.1 255.255.255.0
ip pim sparse-mode
Step 3. Add the static RP configuration:
R3(config)#ip pim rp-address 192.168.0.1 ?
<1-99> Access-list reference for group
<1300-1999> Access-list reference for group (expanded range)
WORD IP Named Standard Access list
override Overrides dynamically learnt RP mappings
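Completing the command from the context help above, a minimal static RP statement might look like the following; the access list shown is purely illustrative and simply scopes the mapping to an example group range:
R3(config)# access-list 10 permit 239.1.1.0 0.0.0.255
R3(config)# ip pim rp-address 192.168.0.1 10
Without the optional access list, ip pim rp-address 192.168.0.1 maps the RP to the entire multicast group range.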
Configuring IOS-XR is significantly different from configuring IOS. The PIM configuration is
accomplished through separate multicast-routing and router pim configuration modes. The
following basic steps explain how to configure PIM using a static RP with PIM sparse-mode in IOS
XR:
Step 1. Enable multicast-routing and enable Layer 3 interfaces:
multicast-routing
address-family ipv4
interface Loopback0
!
interface GigabitEthernet0/0/0/0
!
interface GigabitEthernet0/0/0/1
!
interface all enable
Step 2. Enable PIM on the Layer 3 interfaces, including the loopback, under the router pim configuration mode (PIM sparse mode is the default mode of operation in IOS-XR):
router pim
address-family ipv4
interface Loopback0
!
interface GigabitEthernet0/0/0/0
!
interface GigabitEthernet0/0/0/1
Step 3. Add the static RP configuration:
RP/0/0/CPU0:R1(config-pim)#router pim
RP/0/0/CPU0:R1(config-pim)#address-family ipv4
RP/0/0/CPU0:R1(config-pim-default-ipv4)#rp-address 192.168.0.1?
WORD Access list of groups that should map to given RP
bidir Specify keyword bidir to configure a bidir RP
override Static RP config overrides auto-rp and BSR
<cr>
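To complete the IOS-XR configuration, the rp-address command can simply be entered with the RP address and committed; when no access list is specified, the mapping covers all multicast groups. A minimal sketch follows:
RP/0/0/CPU0:R1(config-pim-default-ipv4)# rp-address 192.168.0.1
RP/0/0/CPU0:R1(config-pim-default-ipv4)# commit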
The static RP configuration in NX-OS is similar to IOS and IOS-XE configurations and is as
follows:
Step 1. Enable the PIM feature with the feature pim command.
feature pim
Step 2. Enable interfaces in the Layer 3 domain, including the loopback, with ip pim sparse-mode:
interface Ethernet2/1
no switchport
mac-address 0001.4200.0001
ip address 192.168.23.1/24
ip router ospf 1 area 0.0.0.0
ip pim sparse-mode
no shutdown
Step 3. Add the static RP configuration:
nexusr1(config)# ip pim rp-address 192.168.0.1 ?
<CR>
bidir Group range is treated in PIM bidirectional mode
group-list Group range for static RP
override RP address will override the dynamically learnt RPs
prefix-list Prefix List policy for static RP
route-map Route Map policy for static RP
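As a minimal example, the command can be completed as follows; the group-list value is illustrative and simply scopes the static RP mapping to the standard multicast range:
nexusr1(config)# ip pim rp-address 192.168.0.1 group-list 224.0.0.0/4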
Note
Why is it important to configure the main network loopback interface with sparse-mode PIM, as shown in the preceding examples? After all, the loopback interface is unlikely to have any PIM neighbors. This is a recommended practice for any multicast overlay. The reason for the recommendation is that the router can then fully participate in the multicast domain, even if errors are occurring on leaf-facing interfaces. It also allows the loopback interfaces to be used as RP addresses or mapping agent addresses in dynamic RP propagation, making them more reliable. See Chapter 5, "IP Multicast Design Considerations and Implementation," for more information on this and other best practices for multicast networks.
PIM Dense Mode
To configure dense mode on older Cisco IOS routers and switches, use the following commands:
C6K-720(config)#ip multicast-routing [vrf vrf-name] [distributed]
C6K-720(config)#interface {type [number|slot/port[/port]]}
C6K-720(config-if)#ip pim dense-mode [proxy-register {list access-list |
route-map map-name}]
Figure 4-7 depicts a small campus network with a very limited multicast deployment for minor host
updates. The underlying configuration example enables PIM dense-mode multicast and relies on IGMP snooping for improved Layer 2 efficiency. IGMP snooping is enabled by default on most Cisco switches.
Figure 4-7 Small Dense-Mode Deployment
Note
As discussed previously, there are very few reasons to ever deploy a PIM-DM network. Because of this, many Cisco networking operating systems do not support dense-mode configuration or certain dense-mode features. At press time, Cisco IOS-XR and NX-OS do not support PIM dense-mode deployments or configurations. The following sample configuration is provided only as a supplement for existing dense-mode-capable systems.
Using the network topology in Figure 4-7 , the IOS configuration commands in Example 4-3
demonstrate how to configure dense-mode multicast.
Example 4-3 Configuring Dense-Mode Multicast in IOS
CR1(config)#ip multicast-routing
CR1(config)#interface vlan 10
CR1(config-if)#ip pim dense-mode
CR2(config)#ip multicast-routing
CR2(config)#interface vlan 10
CR2(config-if)#ip pim dense-mode
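To confirm that the dense-mode interfaces and IGMP snooping described above are behaving as expected, verification commands along the following lines can be used; availability and output vary by platform, and the device names follow Example 4-3:
CR1# show ip pim interface
CR1# show ip mroute
CR1# show ip igmp snooping vlan 10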
Dynamic RP Information Propagation
There are two ways to dynamically propagate RP information to routers within a domain:
Auto-RP (Cisco proprietary)
Bootstrap router (BSR)
Both solutions are acceptable because they provide a similar service and play a key role in a
multicast design.
You may be asking yourself, “If static configurations work well, why have a dynamic protocol at
all?” As discussed earlier, one of the most important concepts in group mapping is that all routers
within a domain agree on the RP for a given group. If a network is very large, has many overlapping
domains, many multicast applications, many rendezvous points, or all of the above, a consistent group
mapping through static commands may become extremely difficult to manage. In fact, it is for this
reason that these two dynamic protocols provide not only dynamic propagation, but also methods of
ensuring RP mapping accuracy and consistency throughout a domain at all downstream routers.
Remember, the RP itself does not make mapping decisions for downstream routers. Each router must
learn of the RP individually and use the provided information to determine the best RP to map a group
to. There are some similarities between Auto-RP and BSR that provide this consistency to
downstream routers.
The control for this process is accomplished through the concept of centralized mapping. This
means that some routers in the network are configured as RP routers and advertise themselves as such
to other routers in the network. Centralized mapping routers receive the information about the RPs
available within the network and establish group-to-RP mapping parameters, or compile available RP
sets. When RP information is distributed from the centralized mapping routers, downstream routers
need only listen to these advertisements and use the advertised information to create local RP
mapping entries. This also serves the added purpose of limiting the number of protocol messages
required throughout the domain.
Auto-RP and BSR each perform these mapping and advertising functions differently, but in the end they provide the same essential functions to the network.
Auto-RP
Auto-RP provides high availability (active/standby) for the RP service. The propagation of RP information to the
downstream routers is done via Auto-RP messages. The downstream routers do not require an
explicit RP configuration. Rendezvous points using Auto-RP announce their availability to the
mapping agents via the 224.0.1.39 multicast group. The RP mapping agent listens to the announced
packets from the RPs, then sends RP-to-group mappings in a discovery message to 224.0.1.40.
Downstream routers listen for mapping advertisements on group 224.0.1.40 and install the RP
mappings as advertised from the mapping agent. It is acceptable to use the same interface address for both the candidate RP and the mapping agent. In larger systems, to provide greater scalability, it is more efficient to use different interfaces, or to separate the candidate RP and mapping agent functions onto different routers. Figure 4-8 shows the Auto-RP mechanism.
Figure 4-8 Auto-RP Overview
The two multicast groups used for Auto-RP information are advertised via dense mode when interfaces operate in sparse-dense mode. This flooding of messages allows automatic propagation to the downstream routers. As mentioned earlier, some operating systems do not support dense mode. How can RP information be propagated in a sparse-mode–only environment? You can use "listen" configuration commands in global configuration mode to cause IP multicast traffic for the two Auto-RP groups, 224.0.1.39 and 224.0.1.40, to be flooded in Protocol Independent Multicast (PIM) dense mode across interfaces operating in PIM sparse-mode.
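On IOS, for example, the global command that provides this behavior is ip pim autorp listener. A minimal sketch, assuming the interfaces are already configured for ip pim sparse-mode:
R3(config)# ip pim autorp listener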
The Auto-RP mapping agent uses the following logic to build the RP-cache:
When there is a tie between two candidate RPs, the RP with the highest IP address wins the tie.
When two candidate RPs advertise group ranges where one range is a subset of the other, but the RPs are different, both mappings are sent in the discovery RP-cache.
Auto-RP is best suited to scoped multicast environments. The Auto-RP message has a built-in time to live (TTL) and various other boundary features that support scoping. A scoped multicast domain has its own RP with an assigned group address range, which
makes it a separate PIM domain.
Table 4-2 outlines some items to remember when using Auto-RP:
Table 4-2 Auto-RP Feature Considerations
Table 4-3 outlines the IOS/XE, IOS XR, and NX-OS mapping agent commands.
Table 4-3 Mapping Agent Commands for IOS/XE, IOS XR, and NX-OS
Table 4-4 outlines the IOS/XE, IOS XR, and NX-OS Candidate RP commands.
Table 4-4 Candidate RP Commands for IOS/XE, IOS XR, and NX-OS
Table 4-5 outlines the IOS/XE, IOS XR, and NX-OS Auto-RP Listener commands.
Table 4-5 Auto-RP Listener Commands for IOS/XE, IOS XR, and NX-OS
Note
With NX-OS, use the listen keyword to process Auto-RP messages and the forward keyword to send Auto-RP messages on to downstream routers.
Figures 4-9 through 4-11 illustrate a typical deployment example showing how to configure Auto-RP in IOS, IOS-XR, and NX-OS. In this example, R1 and R2 are the candidate RPs and mapping agents, a common deployment practice.
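Because R1 and R2 serve as both candidate RPs and mapping agents, the IOS portion of such a deployment typically reduces to two global commands on each of those routers; the interface and scope values below are illustrative rather than taken from the figures:
R1(config)# ip pim send-rp-announce Loopback0 scope 16
R1(config)# ip pim send-rp-discovery Loopback0 scope 16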
Table 4-6 BSR Configuration Commands for IOS/XE, IOS XR, and NX-OS
Table 4-7 outlines the IOS/XE, IOS XR, and NX-OS commands for configuring the BSR candidate
RP.
Table 4-7 Candidate RP Configuration Commands for IOS/XE, IOS XR, and NX-OS
Figures 4-13 through 4-15 illustrate a typical deployment example showing how to configure BSR in IOS, IOS-XR, and NX-OS. In this example, R1 and R2 are the candidate RPs and BSR routers, a common deployment practice.
Sample Configuration: BSR in IOS
Figure 4-13 shows the topology for the following sample configuration of BSR in IOS.
Figure 4-13 IOS BSR Configuration Topology
Example 4-10 shows the BSR RP configuration for Routers R1 and R2.
Example 4-10 Configuring BSR RP for R1 and R2 in IOS
Note
Remember, because BSR uses the all-PIM-routers multicast group (224.0.0.13), no additional configuration is required for downstream routers to process BSR updates. The downstream router will receive the BSR mapping advertisements, process the update, and update any group mappings as necessary. Multicast group state entries will, of course, use the RP(s) in the mapping as processed.
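For reference, a minimal IOS sketch of the candidate BSR and candidate RP roles described in this section takes the following general form; the interface and hash-mask values are illustrative:
R1(config)# ip pim bsr-candidate Loopback0 0
R1(config)# ip pim rp-candidate Loopback0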
Sample Configuration: BSR in IOS-XR
Figure 4-14 shows the topology for the following sample configuration of BSR in IOS-XR.
Figure 6-15 Deriving the RP Address from the Multicast Group Address
When using embedded RP, there is no need to configure routers with RP information or to use a
dynamic protocol to perform RP mapping and propagation. Instead, multicast-enabled devices can
extract and use the RP information from the group address, as shown above. That means a large
number of RPs can be deployed anywhere, either local to the PIM domain or on the Internet for inter-
domain multicast routing (multicast routing that typically occurs between two or more organizations or across the Internet). No additional protocol or PIM6 changes are required to support embedded RP; it is built in. Cisco router operating systems automatically derive the RP information, making additional configuration unnecessary.
Any routers acting as a physical RP for an embedded group must have a statically configured RP
address using an address assigned to one interface on the RP router. Routers automatically search for
embedded RP group addresses in MLD reports or PIM messages and data packets coming from the
source. When an embedded RP address is discovered, the router performs the group-to-RP mapping
using the newly learned RP address. If no physical RP exists that matches the learned address, or if
the learned RP address is not in the unicast RIB, then the router will not be able to complete the
multicast group state. Embedded RP does not allow administrators to escape the need for a physical
RP.
Also, keep in mind that a router can learn just one RP address for a multicast group using
embedded RP. That means that you cannot have redundancy with embedded RP, because you cannot
embed more than one RP address in the group. You can, however, use an Anycast RP set as the
physical RP component, so long as the derived embedded RP address always points to the Anycast
RP address configured on each router in the RP set. Also, as of this writing, embedded RP is not
compatible with bidirectional PIM.
Let us adjust our BSR RP example (shown earlier in Figure 6-12) to include an embedded RP,
removing the BSR configuration and replacing it with a static RP configuration. Below are the
example configurations for each router. We statically configure our RP on R1 with a new
Loopback400 interface that is assigned the IPv6 address 1234:5678:9ABC::1/128, derived from our example group FF76:0150:1234:5678:9ABC::1234 (illustrated by Figure 6-15). On routers R2 and R3, we
can now remove any irrelevant RP information. In fact, outside of the command ipv6 multicast-routing, no other multicast configuration is required on R2 and R3. We will, however, add an MLD
join on R3’s Loopback0 interface to simulate a host using the embedded RP group address from the
example above, group FF76:0150:1234:5678:9ABC::1234. Figure 6-16 depicts the updated
topology.
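A minimal IOS-style sketch of the adjustments just described might look like the following; the interface names and addresses follow the text above, and the commands assume an image with IPv6 multicast support:
! R1: physical RP for the embedded-RP group
ipv6 multicast-routing
!
interface Loopback400
 ipv6 address 1234:5678:9ABC::1/128
!
ipv6 pim rp-address 1234:5678:9ABC::1
! R3: simulated receiver joining the embedded-RP group via MLD
ipv6 multicast-routing
!
interface Loopback0
 ipv6 mld join-group FF76:0150:1234:5678:9ABC::1234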