Cisco APIC Layer 3 Networking Configuration Guide, Release 4.0(1)
First Published: 2018-10-24
Last Modified: 2019-04-16
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH
THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY,
CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of
the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS.
CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network
topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional
and coincidental.
All printed copies and duplicate soft copies of this document are considered uncontrolled. See the current online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses and phone numbers are listed on the Cisco website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)
© 2017–2019 Cisco Systems, Inc. All rights reserved.
CONTENTS
PREFACE Preface xv
Audience xv
New and Changed Information xv
Document Conventions xvi
Related Documentation xvii
Documentation Feedback xviii
Configuring a Route Control Protocol to Use Import and Export Controls, With the GUI 134
Configuring a Route Control Protocol to Use Import and Export Controls, With the NX-OS Style CLI 136
Configuring a Route Control Protocol to Use Import and Export Controls, With the REST API 137
Configuring Shared Layer 3 Out Inter-VRF Leaking Using the NX-OS Style CLI - Implicit Example 168
Configuring Shared Layer 3 Out Inter-VRF Leaking Using the Advanced GUI 169
The APIC IGMP Snooping Function, IGMPv1, IGMPv2, and the Fast Leave Feature 215
The APIC IGMP Snooping Function and IGMPv3 215
Cisco APIC and the IGMP Snooping Querier Function 216
Guidelines and Limitations for the APIC IGMP Snooping Function 216
Configuring and Assigning an IGMP Snooping Policy 217
Configuring and Assigning an IGMP Snooping Policy to a Bridge Domain in the Advanced GUI 217
Configuring and Assigning an IGMP Snooping Policy to a Bridge Domain using the REST API 220
Audience
This guide is intended primarily for data center administrators with responsibilities and expertise in one or
more of the following:
• Virtual machine installation and administration
• Server administration
• Switch and network administration
Table 1: New Features and Changed Behavior in Cisco APIC Release 4.0(1)

Feature: Advertise Host Route configuration
Description: Support for advertising host route configuration on a border leaf.
Where Documented: Routed Connectivity to External Networks, on page 21

Feature: Disable dataplane IP learning per VRF
Description: Allows IP learning through the dataplane to be disabled on a VRF.
Where Documented: Dataplane IP Learning per VRF, on page 177

Feature: Remote Leaf: WAN bandwidth usage improvement and reduced dependency on ACI main DC
Description: Maintain data paths when the main data center is unreachable. Reduced WAN bandwidth used by services.
Where Documented: Remote Leaf Switches, on page 279

Feature: New QoS class levels
Description: QoS now supports new levels 4, 5, and 6 configured under global policies, EPG, L3Out, custom QoS, and contracts.
Where Documented: L3Outs QoS, on page 61

Feature: Configure a QoS class or create a customizable QoS policy
Description: You can now configure a QoS class or create a custom QoS policy to apply on an L3Out interface.
Where Documented: L3Outs QoS, on page 61
Document Conventions
Command descriptions use the following conventions:
Convention Description
bold Bold text indicates the commands and keywords that you enter literally as shown.
Italic Italic text indicates arguments for which the user supplies the values.
variable Indicates a variable for which you supply values, in contexts where italics cannot be used.
string A nonquoted set of characters. Do not use quotation marks around the string or the string will include the quotation marks.
screen font Terminal sessions and information the switch displays are in screen font.
boldface screen font Information you must enter is in boldface screen font.
italic screen font Arguments for which you supply values are in italic screen font.
Note Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.
Caution Means reader be careful. In this situation, you might do something that could result in equipment damage or
loss of data.
Related Documentation
Application Policy Infrastructure Controller (APIC) Documentation
The following companion guides provide documentation for APIC:
• Cisco APIC Getting Started Guide
• Cisco APIC Basic Configuration Guide
Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, please send your comments
to apic-docfeedback@cisco.com. We appreciate your feedback.
As traffic enters the fabric, ACI encapsulates and applies policy to it, forwards it as needed across the fabric through a spine switch (a maximum of two hops), and de-encapsulates it upon exiting the fabric. Within the fabric, ACI uses Intermediate System-to-Intermediate System Protocol (IS-IS) and Council of Oracle Protocol (COOP) for all forwarding of endpoint-to-endpoint communications. This enables all ACI links to be active, with equal cost multipath (ECMP) forwarding in the fabric and fast reconvergence. For propagating routing information
between software defined networks within the fabric and routers external to the fabric, ACI uses the
Multiprotocol Border Gateway Protocol (MP-BGP).
VXLAN in ACI
VXLAN is an industry-standard protocol that extends Layer 2 segments over Layer 3 infrastructure to build
Layer 2 overlay logical networks. The ACI infrastructure Layer 2 domains reside in the overlay, with isolated
broadcast and failure bridge domains. This approach allows the data center network to grow without the risk
of creating too large a failure domain.
All traffic in the ACI fabric is normalized as VXLAN packets. At ingress, ACI encapsulates external VLAN,
VXLAN, and NVGRE packets in a VXLAN packet. The following figure shows ACI encapsulation
normalization.
Forwarding in the ACI fabric is not limited to or constrained by the encapsulation type or encapsulation
overlay network. An ACI bridge domain forwarding policy can be defined to provide standard VLAN behavior
where required.
Because every packet in the fabric carries ACI policy attributes, ACI can consistently enforce policy in a fully
distributed manner. ACI decouples application policy EPG identity from forwarding. The following illustration
shows how the ACI VXLAN header identifies application policy within the fabric.
Figure 3: ACI VXLAN Packet Format
The ACI VXLAN packet contains both Layer 2 MAC address and Layer 3 IP address source and destination
fields, which enables efficient and scalable forwarding within the fabric. The ACI VXLAN packet header
source group field identifies the application policy endpoint group (EPG) to which the packet belongs. The
VXLAN Instance ID (VNID) enables forwarding of the packet through tenant virtual routing and forwarding
(VRF) domains within the fabric. The 24-bit VNID field in the VXLAN header provides an expanded address
space for up to 16 million unique Layer 2 segments in the same network. This expanded address space gives
IT departments and cloud providers greater flexibility as they build large multitenant data centers.
VXLAN enables ACI to deploy Layer 2 virtual networks at scale across the fabric underlay Layer 3
infrastructure. Application endpoint hosts can be flexibly placed in the data center network without concern
for the Layer 3 boundary of the underlay infrastructure, while maintaining Layer 2 adjacency in a VXLAN
overlay network.
VXLAN uses VTEP devices to map tenant end devices to VXLAN segments and to perform VXLAN
encapsulation and de-encapsulation. Each VTEP function has two interfaces:
• A switch interface on the local LAN segment to support local endpoint communication through bridging
• An IP interface to the transport IP network
The IP interface has a unique IP address that identifies the VTEP device on the transport IP network known
as the infrastructure VLAN. The VTEP device uses this IP address to encapsulate Ethernet frames and transmit
the encapsulated packets to the transport network through the IP interface. A VTEP device also discovers the
remote VTEPs for its VXLAN segments and learns remote MAC Address-to-VTEP mappings through its IP
interface.
The VTEP in ACI maps the internal tenant MAC or IP address to a location using a distributed mapping
database. After the VTEP completes a lookup, the VTEP sends the original data packet encapsulated in
VXLAN with the destination address of the VTEP on the destination leaf switch. The destination leaf switch
de-encapsulates the packet and sends it to the receiving host. With this model, ACI uses a full mesh, single
hop, loop-free topology without the need to use the spanning-tree protocol to prevent loops.
The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP
network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on
the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP
as the destination IP address.
The following figure shows how routing within the tenant is done.
For each tenant VRF in the fabric, ACI assigns a single L3 VNID. ACI transports traffic across the fabric
according to the L3 VNID. At the egress leaf switch, ACI routes the packet from the L3 VNID to the VNID
of the egress subnet.
Traffic arriving at the fabric ingress that is sent to the ACI fabric default gateway is routed into the Layer 3
VNID. This provides very efficient forwarding in the fabric for traffic routed within the tenant. For example,
with this model, traffic between 2 VMs belonging to the same tenant, on the same physical host, but on
different subnets, only needs to travel to the ingress switch interface before being routed (using the minimal
path cost) to the correct destination.
To distribute external routes within the fabric, ACI route reflectors use multiprotocol BGP (MP-BGP). The
fabric administrator provides the autonomous system (AS) number and specifies the spine switches that
become route reflectors.
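For reference, the fabric BGP route reflector policy can also be defined through the REST API. The following is a minimal sketch that assumes AS number 100 and spine switches 104 and 105 (the same values used in the examples later in this guide); it can be posted to /api/mo/uni.xml.

<polUni>
  <fabricInst>
    <!-- Fabric BGP policy: AS number and the spine nodes that act as route reflectors -->
    <bgpInstPol name="default">
      <bgpAsP asn="100"/>
      <bgpRRP>
        <bgpRRNodePEp id="104"/>
        <bgpRRNodePEp id="105"/>
      </bgpRRP>
    </bgpInstPol>
  </fabricInst>
</polUni>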
• Fibre Channel domain profiles (fcDomP) are used to connect Fibre Channel VLANs and VSANs.
A domain is configured to be associated with a VLAN pool. EPGs are then configured to use the VLANs
associated with a domain.
Note EPG port and VLAN configurations must match those specified in the domain infrastructure configuration
with which the EPG associates. If not, the APIC will raise a fault. When such a fault occurs, verify that the
domain infrastructure configuration matches the EPG port and VLAN configurations.
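As an illustration of this association, an EPG can reference a domain and use a static path whose VLAN falls within the pool that the domain references. This is a sketch only; the tenant, application profile, and EPG names are assumptions, while the domain, path, and VLAN values reuse the examples later in this guide.

<fvTenant name="t1">
  <fvAp name="app1">
    <fvAEPg name="epg1">
      <!-- Associate the EPG with the physical domain that owns the VLAN pool -->
      <fvRsDomAtt tDn="uni/phys-dom1"/>
      <!-- The static path binding must use a VLAN from the domain's pool -->
      <fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/3]" encap="vlan-2011"/>
    </fvAEPg>
  </fvAp>
</fvTenant>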
The routes that are learned through peering are sent to the spine switches. The spine switches act as route
reflectors and distribute the external routes to all of the leaf switches that have interfaces that belong to the
same tenant. These routes are longest prefix match (LPM) summarized addresses and are placed in the leaf
switch's forwarding table with the VTEP IP address of the remote leaf switch where the external router is
connected. WAN routes have no forwarding proxy. If the WAN routes do not fit in the leaf switch's forwarding
table, the traffic is dropped. Because the external router is not the default gateway, packets from the tenant
endpoints (EPs) are sent to the default gateway in the ACI fabric.
Route Import and Export, Route Summarization, and Route Community Match
Subnet route export or import configuration options can be specified according to the scope and aggregation
options described below.
For routed subnets, the following scope options are available:
• Export Route Control Subnet—Controls the export route direction.
• Import Route Control Subnet—Controls the import route direction.
Note Import route control is supported for BGP and OSPF, but not EIGRP.
• External Subnets for the External EPG (Security Import Subnet)—Specifies which external subnets have
contracts applied as part of a specific External Network Instance Profile (l3extInstP). For a subnet
under the l3extInstP to be classified as an External EPG, the scope on the subnet should be set to
"import-security". Subnets of this scope determine which IP addresses are associated with the l3extInstP.
Once this is determined, contracts determine with which other EPGs that external subnet is allowed to
communicate. For example, when traffic enters the ACI switch on the Layer 3 External Outside Network
(L3extOut), a lookup occurs to determine which source IP addresses are associated with the l3extInstP.
This action is performed based on Longest Prefix Match (LPM) so that more specific subnets take
precedence over more general subnets.
• Shared Route Control Subnet— In a shared service configuration, only subnets that have this property
enabled will be imported into the consumer EPG Virtual Routing and Forwarding (VRF). It controls the
route direction for shared services between VRFs.
• Shared Security Import Subnet—Applies shared contracts to imported subnets. The default specification
is External Subnets for the External EPG.
Routed subnets can be aggregated. When aggregation is not set, the subnets are matched exactly. For example, if 11.1.0.0/16 is the subnet, then the policy will not apply to an 11.1.1.0/24 route, but it will apply only if the route is 11.1.0.0/16. However, to avoid the tedious and error-prone task of defining all the subnets one by one, a set of subnets can be aggregated into one export, import, or shared routes policy. At this time, only 0/0 subnets can be aggregated. When 0/0 is specified with aggregation, all the routes are imported, exported, or shared with a different VRF, based on the selection option below:
• Aggregate Export—Exports all transit routes of a VRF (0/0 subnets).
• Aggregate Import—Imports all incoming routes of given L3 peers (0/0 subnets).
Note Aggregate import route control is supported for BGP and OSPF, but not for
EIGRP.
• Aggregate Shared Routes—If a route is learned in one VRF but needs to be advertised to another VRF,
the routes can be shared by matching the subnet exactly, or can be shared in an aggregate way according
to a subnet mask. For aggregate shared routes, multiple subnet masks can be used to determine which
specific route groups are shared between VRFs. For example, 10.1.0.0/16 and 12.1.0.0/16 can be specified
to aggregate these subnets. Or, 0/0 can be used to share all subnet routes across multiple VRFs.
Note Routes shared between VRFs function correctly on Generation 2 switches (Cisco Nexus N9K switches with
"EX" or "FX" on the end of the switch model name, or later; for example, N9K-93108TC-EX). On Generation
1 switches, however, there may be dropped packets with this configuration, because the physical ternary
content-addressable memory (TCAM) tables that store routes do not have enough capacity to fully support
route parsing.
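These scope and aggregation options map to the scope and aggregate attributes of the l3extSubnet object under the external EPG. The following is an illustrative sketch only; the tenant, L3Out, external EPG names, and prefixes are assumptions.

<fvTenant name="t1">
  <l3extOut name="l3out1">
    <l3extInstP name="extnw1">
      <!-- Exact-match export route control plus security classification -->
      <l3extSubnet ip="192.168.1.0/24" scope="export-rtctrl,import-security"/>
      <!-- 0/0 with aggregate export and import: all routes are exported/imported -->
      <l3extSubnet ip="0.0.0.0/0" scope="export-rtctrl,import-rtctrl,import-security" aggregate="export,import"/>
      <!-- Shared route control and shared security for inter-VRF leaking of an aggregate -->
      <l3extSubnet ip="10.1.0.0/16" scope="shared-rtctrl,shared-security" aggregate="shared-rtctrl"/>
    </l3extInstP>
  </l3extOut>
</fvTenant>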
Route summarization simplifies route tables by replacing many specific addresses with a single address. For
example, 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24 are replaced with 10.1.0.0/16. Route summarization policies
enable routes to be shared efficiently among border leaf switches and their neighbor leaf switches. BGP,
OSPF, or EIGRP route summarization policies are applied to a bridge domain or transit subnet. For OSPF,
inter-area and external route summarization are supported. Summary routes are exported; they are not advertised
within the fabric. In the example above, when a route summarization policy is applied, and an EPG uses the
10.1.0.0/16 subnet, the entire range of 10.1.0.0/16 is shared with all the neighboring leaf switches.
Note When two L3extOut policies are configured with OSPF on the same leaf switch, one regular and another for
the backbone, a route summarization policy configured on one L3extOut is applied to both L3extOut policies
because summarization applies to all areas in the VRF.
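As a rough sketch, an OSPF route summarization policy is defined under the tenant and then referenced from a subnet under the external EPG that has Export Route Control Subnet enabled. The policy name, attribute values, and the DN format in the relation below are assumptions for illustration.

<fvTenant name="t1">
  <!-- OSPF route summarization policy (inter-area summarization) -->
  <ospfRtSummPol name="ospfSumm" interAreaEnabled="yes" cost="unspecified"/>
  <l3extOut name="l3out1">
    <l3extInstP name="extnw1">
      <!-- Summarize 10.1.0.0/16 when it is exported out of the fabric -->
      <l3extSubnet ip="10.1.0.0/16" scope="export-rtctrl">
        <l3extRsSubnetToRtSumm tDn="uni/tn-t1/ospfrtsumm-ospfSumm"/>
      </l3extSubnet>
    </l3extInstP>
  </l3extOut>
</fvTenant>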
As illustrated in the figure below, route control profiles derive route maps according to prefix-based and
community-based matching.
Figure 7: Route Community Matching
The route control profile (rtctrlProfile) specifies what is allowed. The Route Control Context specifies what to match, and the scope specifies what to set. The subject profile contains the community match specifications, which can be used by multiple l3extOut instances. The subject profile (SubjP) can contain multiple community terms, each of which contains one or more community factors (communities). This arrangement enables specifying the following Boolean operations:
• Logical or among multiple community terms
• Logical and among multiple community factors
For example, a community term called northeast could have multiple communities that each include many
routes. Another community term called southeast could also include many different routes. The administrator
could choose to match one, or the other, or both. A community factor type can be regular or extended. Care
should be taken when using extended type community factors, to ensure there are no overlaps among the
specifications.
The scope portion of the route control profile references the attribute profile (rtctrlAttrP) to specify what
set-action to apply, such as preference, next hop, community, and so forth. When routes are learned from an
l3extOut, route attributes can be modified.
The figure above illustrates the case where an l3extOut contains a rtctrlProfile. A rtctrlProfile can also exist under the tenant. In this case, the l3extOut has an interleak relation policy (l3extRsInterleakPol) that associates it with the rtctrlProfile under the tenant. This configuration enables reusing the rtctrlProfile for multiple l3extOut connections. It also enables keeping track of the routes the fabric learns from OSPF, to which it gives BGP attributes (BGP is used within the fabric). A rtctrlProfile defined under an l3extOut has a higher priority than one defined under the tenant.
The rtctrlProfile has two modes: combinable and global. The default combinable mode combines pervasive subnets (fvSubnet) and external subnets (l3extSubnet) with the match/set mechanism to render the route map. The global mode applies to all subnets within the tenant and overrides other policy attribute settings. A global rtctrlProfile provides permit-all behavior without defining explicit (0/0) subnets. A global rtctrlProfile is used with non-prefix-based match rules where matching is done using different subnet attributes such as community, next hop, and so on. Multiple rtctrlProfile policies can be configured under a tenant.
rtctrlProfile policies enable enhanced default import and default export route control. Layer 3 Outside networks with aggregated import or export routes can have import/export policies that specify supported default-export and default-import, and supported 0/0 aggregation policies. To apply a rtctrlProfile policy on all routes (inbound or outbound), define a global default rtctrlProfile that has no match rules.
Note While multiple l3extOut connections can be configured on one switch, all Layer 3 outside networks configured on a switch must use the same rtctrlProfile because a switch can have only one route map.
The protocol interleak and redistribute policy controls externally learned route sharing with ACI fabric BGP routes. Set attributes are supported. Such policies are supported per l3extOut, per node, or per VRF. An interleak policy applies to routes learned by the routing protocol in the l3extOut. Currently, interleak and redistribute policies are supported for OSPF v2 and v3. A route control policy rtctrlProfile has to be defined as global when it is consumed by an interleak policy.
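As a sketch of the tenant-level case described above, a global-mode route control profile can be defined under the tenant and consumed by the L3Out through the interleak relation. The profile and L3Out names are assumptions; the structure follows the objects named in this section.

<fvTenant name="t1">
  <!-- Global-mode route control profile defined at the tenant level -->
  <rtctrlProfile name="interleak-rp" type="global">
    <!-- Minimal permit context; a global profile does not require explicit (0/0) subnets -->
    <rtctrlCtxP name="ctxp1" action="permit" order="0"/>
  </rtctrlProfile>
  <l3extOut name="l3out1">
    <!-- Interleak relation: routes learned by the L3Out routing protocol are
         redistributed into fabric BGP using the tenant-level profile -->
    <l3extRsInterleakPol tnRtctrlProfileName="interleak-rp"/>
  </l3extOut>
</fvTenant>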
• The routes that are learned from the OSPF process on the border leaf are redistributed into BGP for the
tenant VRF and they are imported into MP-BGP on the border leaf.
• Import route control is supported for BGP and OSPF, but not for EIGRP.
Note When a subnet for a bridge domain/EPG is set to Advertise Externally, the subnet is programmed as a static
route on a border leaf. When the static route is advertised, it is redistributed into the EPG's Layer 3 outside
network routing protocol as an external network, not injected directly into the routing protocol.
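For example, marking a bridge domain subnet as advertised externally and associating the bridge domain with an L3Out can be expressed as follows. This is a minimal sketch; the names reuse the examples in this guide, and scope="public" corresponds to the Advertise Externally setting.

<fvTenant name="t1">
  <fvBD name="bd1">
    <fvRsCtx tnFvCtxName="v1"/>
    <!-- Advertise Externally: the subnet can be exported through an L3Out -->
    <fvSubnet ip="44.44.44.1/24" scope="public"/>
    <!-- Associate the BD with the L3Out that advertises the subnet -->
    <fvRsBDToOut tnL3extOutName="l3out1"/>
  </fvBD>
</fvTenant>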
ACI supports the VRF-lite implementation when connecting to the external routers. Using sub-interfaces, the
border leaf can provide Layer 3 outside connections for the multiple tenants with one physical interface. The
VRF-lite implementation requires one protocol session per tenant.
Within the ACI fabric, Multiprotocol BGP (MP-BGP) is implemented between the leaf and the spine switches
to propagate the external routes within the ACI fabric. The BGP route reflector technology is deployed in
order to support a large number of leaf switches within a single fabric. All of the leaf and spine switches are
in one single BGP Autonomous System (AS). Once the border leaf learns the external routes, it can then
redistribute the external routes of a given VRF to an MP-BGP address family VPN version 4 or VPN version
6. With address family VPN version 4, MP-BGP maintains a separate BGP routing table for each VRF. Within
MP-BGP, the border leaf advertises routes to a spine switch, that is a BGP route reflector. The routes are then
propagated to all the leaves where the VRFs (or private network in the APIC GUI’s terminology) are
instantiated.
The External Layer 3 Outside connections are supported on the following interfaces:
• Layer 3 Routed Interface
• Sub-interface with 802.1Q tagging - With sub-interfaces, the same physical interface can be used to provide a Layer 3 outside connection for multiple private networks.
• Switched Virtual Interface (SVI) - With an SVI, the same physical interface that supports Layer 2 and Layer 3 can be used for both a Layer 2 outside connection and a Layer 3 outside connection.
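In the REST API, the interface type is expressed by the ifInstT attribute of the path attachment under the logical interface profile. The following sketch shows the three options; the paths, VLAN encapsulations, and addresses are assumptions for illustration.

<fvTenant name="t1">
  <l3extOut name="l3out1">
    <l3extLNodeP name="nodep1">
      <l3extLIfP name="ifp1">
        <!-- Routed interface -->
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-103/pathep-[eth1/3]" ifInstT="l3-port" addr="12.12.12.3/24"/>
        <!-- Routed sub-interface with 802.1Q tagging -->
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-103/pathep-[eth1/4]" ifInstT="sub-interface" encap="vlan-100" addr="13.13.13.3/24"/>
        <!-- SVI -->
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-103/pathep-[eth1/5]" ifInstT="ext-svi" encap="vlan-200" addr="14.14.14.3/24"/>
      </l3extLIfP>
    </l3extLNodeP>
  </l3extOut>
</fvTenant>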
The managed objects that are used for the L3Outside connections are:
• External Layer 3 Outside (L3ext): Routing protocol options (OSPF area type, area, EIGRP AS, BGP),
private network, External Physical domain.
• Logical Node Profile: Profile where one or more nodes are defined for the External Layer 3 Outside connections. The router IDs and the loopback interface configuration are defined in the profile.
Note Use the same router-ID for the same node across multiple External Layer 3 Outside
connections.
Note Within a single L3Out, a node can only be part of one Logical Node Profile.
Configuring the node to be a part of multiple Logical Node Profiles in a single
L3Out might result in unpredictable behavior, such as a loopback address being
pushed from one Logical Node Profile but not from the other. Use additional path
bindings under the existing Logical Interface Profiles or create a new Logical
Interface Profile under the existing Logical Node Profile instead.
• Logical Interface Profile: IP interface configuration for IPv4 and IPv6 interfaces. It is supported on routed interfaces, routed sub-interfaces, and SVIs. The SVIs can be configured on physical ports, port channels, or vPCs.
• OSPF Interface Policy: Includes details such as OSPF Network Type and priority.
• EIGRP Interface Policy: Includes details such as Timers and split horizon.
• BGP Peer Connectivity Profile: The profile where most BGP peer settings, remote-as, local-as, and BGP
peer connection options are configured. The BGP peer connectivity profile can be associated with the
logical interface profile or the loopback interface under the node profile. This determines the update-source
configuration for the BGP peering session.
• External Network Instance Profile (EPG) (l3extInstP): The external EPG is also referred to as the prefix-based EPG or InstP. The import and export route control policies, security import policies, and contract associations are defined in this profile. Multiple external EPGs can be configured under a single L3Out. Multiple external EPGs may be used when a different route or security policy is defined on a single External Layer 3 Outside connection. An external EPG or multiple external EPGs combine into a route-map. The import/export subnets defined under the external EPG associate to the IP prefix-list match clauses in the route-map. The external EPG is also where the import security subnets and contracts are associated. This is used to permit or drop traffic for this L3Out.
• Action Rules Profile: The action rules profile is used to define the route-map set clauses for the L3Out.
The supported set clauses are the BGP communities (standard and extended), Tags, Preference, Metric,
and Metric type.
• Route Control Profile: The route-control profile is used to reference the action rules profile(s). This can
be an ordered list of action rules profiles. The Route Control Profile can be referenced by a tenant BD,
BD subnet, external EPG, or external EPG subnet.
There are additional protocol settings for BGP, OSPF, and EIGRP L3Outs. These settings are configured per
tenant in the ACI Protocol Policies section in the GUI.
Note When configuring policy enforcement between external EPGs (transit routing case), you must configure the
second external EPG (InstP) with the default prefix 0/0 for export route control, aggregate export, and external
security. In addition, the preferred group must be excluded, and you must use an any contract (or desired
contract) between the transit InstPs.
In either case, the configuration should be considered read-only in the incompatible UI.
Note Except for the procedures in the Configuring Layer 3 External Connectivity Using the Named Mode section,
this guide describes Implicit mode procedures.
• Layer 3 external network objects (l3extOut) created using the Implicit mode CLI procedures are identified
by names starting with “__ui_” and are marked as read-only in the GUI. The CLI partitions these
external-l3 networks by function, such as interfaces, protocols, route-map, and EPG. Configuration
modifications performed through the REST API can break this structure, preventing further modification
through the CLI.
For the steps to remove such objects, see Troubleshooting Unwanted _ui_ Objects in the APIC Troubleshooting
Guide.
Controls Enabled for Subnets Configured under the L3Out Network Instance
Profile
The following controls can be enabled for the subnets that are configured under the L3Out Network Instance
Profile.
Export Route Control: Controls which external networks are advertised out of the fabric using route-maps and IP prefix lists. An IP prefix list is created on the BL switch for each subnet that is defined. The export control policy is enabled by default and is supported for BGP, EIGRP, and OSPF. Match type: specific match (prefix and prefix length).

Import Route Control: Controls the subnets that are allowed into the fabric. Can include set and match rules to filter routes. Supported for BGP and OSPF, but not for EIGRP. If you enable the import control policy for an unsupported protocol, it is automatically ignored. The import control policy is not enabled by default, but you can enable it on the Create Routed Outside panel. On the Identity tab, enable Route Control Enforcement: Import. Match type: specific match (prefix and prefix length).

Security Import Subnet: Used to permit the packets to flow between two prefix-based EPGs. Implemented with ACLs. Match type: uses the ACL match prefix or wildcard match rules.

Aggregate Export: Used to allow all prefixes to be advertised to the external peers. Implemented with the 0.0.0.0/0 le 32 IP prefix-list. Match type: only supported for the 0.0.0.0/0 subnet (all prefixes).

Aggregate Import: Used to allow all prefixes that are inbound from an external BGP peer. Implemented with the 0.0.0.0/0 le 32 IP prefix-list. Match type: only supported for the 0.0.0.0/0 subnet (all prefixes).
You may prefer to advertise all the transit routes out of an L3Out connection. In this case, use the aggregate
export option with the prefix 0.0.0.0/0. Using this aggregate export option creates an IP prefix-list entry (permit
0.0.0.0/0 le 30) that the APIC system uses as a match clause in the export route-map. Use the show route-map
<outbound route-map> and show ip prefix-list <match-clause> commands to view the output.
If you enable aggregate shared routes and a route learned in one VRF must be advertised to another VRF, the routes can be shared by matching the subnet exactly, or they can be shared by using an aggregate subnet mask.
Multiple subnet masks can be used to determine which specific route groups are shared between VRFs. For
example, 10.1.0.0/16 and 12.1.0.0/16 can be specified to aggregate these subnets. Or, 0/0 can be used to share
all subnet routes across multiple VRFs.
Note Routes shared between VRFs function correctly on Generation 2 switches (Cisco Nexus N9K switches with
"EX" or "FX" on the end of the switch model name, or later; for example, N9K-93108TC-EX). On Generation
1 switches, however, there may be dropped packets with this configuration, because the physical ternary
content-addressable memory (TCAM) tables that store routes do not have enough capacity to fully support
route parsing.
Prerequisites
• Ensure that you have read/write access privileges to the infra security domain.
• Ensure that the target leaf switches with the necessary interfaces are available.
Layer 3 Prerequisites
Before you begin to perform the tasks in this guide, complete the following:
• Ensure that the ACI fabric and the APIC controllers are online, and the APIC cluster is formed and
healthy—For more information, see Cisco APIC Getting Started Guide, Release 2.x.
• Ensure that fabric administrator accounts for the administrators that will configure Layer 3 networks are
available—For instructions, see the User Access, Authentication, and Accounting and Management
chapters in Cisco APIC Basic Configuration Guide.
• Ensure that the target leaf and spine switches (with the necessary interfaces) are available—For more
information, see Cisco APIC Getting Started Guide, Release 2.x.
For information about installing and registering virtual switches, see Cisco ACI Virtualization Guide.
• Configure the tenants, bridge domains, VRFs, and EPGs (with application profiles and contracts) that
will consume the Layer 3 networks—For instructions, see the Basic User Tenant Configuration chapter
in Cisco APIC Basic Configuration Guide.
• Configure NTP, DNS Service, and DHCP Relay policies—For instructions, see the Provisioning Core
ACI Fabric Services chapter in Cisco APIC Basic Configuration Guide, Release 2.x.
Caution If you install 1 Gigabit Ethernet (GE) or 10GE links between the leaf and spine switches in the fabric, there
is risk of packets being dropped instead of forwarded, because of inadequate bandwidth. To avoid the risk,
use 40GE or 100GE links between the leaf and spine switches.
• Unicast Routing: If this setting is enabled and a subnet address is configured, the fabric provides the
default gateway function and routes the traffic. Enabling unicast routing also instructs the mapping
database to learn the endpoint IP-to-VTEP mapping for this bridge domain. The IP learning is not
dependent upon having a subnet configured under the bridge domain.
• Subnet Address: This option configures the SVI IP addresses (default gateway) for the bridge domain.
• Limit IP Learning to Subnet: This option is similar to a unicast reverse-forwarding-path check. If this
option is selected, the fabric will not learn IP addresses from a subnet other than the one configured on
the bridge domain.
Caution Enabling Limit IP Learning to Subnet is disruptive to the traffic in the bridge domain.
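These bridge domain settings correspond to attributes of the fvBD object. The following is a minimal sketch using the names from the examples in this guide; unicastRoute enables the default gateway function, and limitIpLearnToSubnets corresponds to Limit IP Learning to Subnet.

<fvTenant name="t1">
  <!-- unicastRoute="yes" enables routing and the default gateway function;
       limitIpLearnToSubnets="yes" restricts IP learning to the configured subnet -->
  <fvBD name="bd1" unicastRoute="yes" limitIpLearnToSubnets="yes">
    <fvRsCtx tnFvCtxName="v1"/>
    <!-- Subnet Address: the SVI IP address (default gateway) for the bridge domain -->
    <fvSubnet ip="44.44.44.1/24"/>
  </fvBD>
</fvTenant>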
Note For guidelines and cautions for configuring and maintaining Layer 3 outside connections, see Guidelines for
Routed Connectivity to Outside Networks, on page 24.
For information about the types of L3Outs, see External Layer 3 Outside Connection Types, on page 11.
A Layer 3 external outside network (l3extOut object) includes the routing protocol options (BGP, OSPF, or
EIGRP or supported combinations) and the switch-specific and interface-specific configurations. While the
l3extOut contains the routing protocol (for example, OSPF with its related Virtual Routing and Forwarding
(VRF) and area ID), the Layer 3 external interface profile contains the necessary OSPF interface details. Both
are needed to enable OSPF.
The l3extInstP EPG exposes the external network to tenant EPGs through a contract. For example, a tenant
EPG that contains a group of web servers could communicate through a contract with the l3extInstP EPG
according to the network configuration contained in the l3extOut. The outside network configuration can
easily be reused for multiple nodes by associating the nodes with the L3 external node profile. Multiple nodes
that use the same profile can be configured for fail-over or load balancing. Also, a node can be added to
multiple l3extOuts resulting in VRFs that are associated with the l3extOuts also being deployed on that node.
For scalability information, refer to the current Verified Scalability Guide for Cisco ACI.
• In the case of Remote Leaf, if EPs are locally learned in the Remote Leaf, they are then advertised only through an L3Out deployed in Remote Leaf switches in the same pod.
• EPs/Host routes in a Remote Leaf are not advertised out through Border Leaf switches in the main pod or another pod.
• EPs/Host routes in the main pod are not advertised through an L3Out in Remote Leaf switches of the same pod or another pod.
• The BD subnet must have the Advertise Externally option enabled.
• The BD must be associated to an L3out or the L3out must have explicit route-map configured matching
BD subnets.
• There must be a contract between the EPG in the specified BD and the External EPG for the L3out.
Note If there is no contract between the BD/EPG and the External EPG, the BD subnet and host routes will not be installed on the border leaf.
• Advertise Host Route is supported for shared services. For example, epg1/BD1 is deployed in VRF-1 and the L3Out is in another VRF, VRF-2. By providing a shared contract between the EPG and the L3Out, host routes are pulled from VRF-1 into VRF-2.
• When Advertise Host Route is enabled on a BD, a custom tag cannot be set on the BD subnet using a route-map.
• When Advertise Host Route is enabled on a BD and the BD is associated with an L3Out, the BD subnet is marked public. If a rogue EP is present under the BD, that EP is advertised out on the L3Out.
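Advertise Host Route is enabled at the bridge domain level. As a hedged sketch of the corresponding object configuration (the hostBasedRouting attribute name is an assumption based on the object model for this feature, and the names reuse the examples in this guide):

<fvTenant name="t1">
  <!-- hostBasedRouting="yes" corresponds to Advertise Host Routes on the BD -->
  <fvBD name="bd1" hostBasedRouting="yes">
    <fvSubnet ip="44.44.44.1/24" scope="public"/>
    <fvRsBDToOut tnL3extOutName="l3out1"/>
  </fvBD>
</fvTenant>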
Ingress-based policy enforcement: Starting with Cisco APIC release 1.2(1), ingress-based policy enforcement enables defining policy enforcement for Layer 3 Outside (L3Out) traffic for both egress and ingress directions. The default is ingress. During an upgrade to release 1.2(1) or higher, existing L3Out configurations are set to egress so that the behavior is consistent with the existing configuration. You do not need any special upgrade sequence. After the upgrade, you change the global property value to ingress. When it has been changed, the system reprograms the rules and prefix entries. Rules are removed from the egress leaf and installed on the ingress leaf, if not already present. If not already configured, an Actrl prefix entry is installed on the ingress leaf. Direct server return (DSR) and attribute EPGs require ingress-based policy enforcement. vzAny and taboo contracts ignore ingress-based policy enforcement. Transit rules are applied at ingress.
Bridge Domains with L3Outs: A bridge domain in a tenant can contain a public subnet that is advertised through an l3extOut provisioned in the common tenant.
Bridge domain route advertisement for OSPF and EIGRP: When both OSPF and EIGRP are enabled on the same VRF on a node and the bridge domain subnets are advertised out of one of the L3Outs, they will also get advertised out of the protocol enabled on the other L3Out. For OSPF and EIGRP, the bridge domain route advertisement is per VRF and not per L3Out. The same behavior is expected when multiple OSPF L3Outs (for multiple areas) are enabled on the same VRF and node. In this case, the bridge domain route will be advertised out of all the areas, if it is enabled on one of them.
BGP Maximum Prefix Limit: Starting with Cisco APIC release 1.2(1x), tenant policies for BGP l3extOut connections can be configured with a maximum prefix limit, which enables monitoring and restricting the number of route prefixes received from a peer. Once the maximum prefix limit has been exceeded, a log entry is recorded, and further prefixes are rejected. The connection can be restarted if the count drops below the threshold in a fixed interval, or the connection is shut down. Only one option can be used at a time. The default setting is a limit of 20,000 prefixes, after which new prefixes are rejected. When the reject option is deployed, BGP accepts one more prefix beyond the configured limit before the APIC raises a fault. (See the configuration sketch after this table.)
MTU: Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections to external routers, or multipod connections through an Inter-Pod Network (IPN), it is critical that the interface MTU is set appropriately on both ends of a link. On some platforms, such as Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take into account the Ethernet headers (matching the IP MTU, and excluding the 14-18 byte Ethernet header), while other platforms, such as IOS-XR, include the Ethernet header in the configured MTU value. A configured value of 9000 results in a maximum IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS, but results in a maximum IP packet size of 8986 bytes for an IOS-XR untagged interface.
For the appropriate MTU values for each platform, see the relevant configuration guides.
Cisco highly recommends that you test the MTU with CLI-based commands. For example, on the Cisco NX-OS CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Layer 4 to Layer 7: When you are using a multinode service graph, you must have the two EPGs in separate VRF instances. For these functions, the system must do a Layer 3 lookup, so the EPGs must be in separate VRFs. This limitation follows legacy service insertion, based on Layer 2 and Layer 3 lookups.
QoS for L3Outs: To configure QoS policies for an L3Out and enable the policies to be enforced on the BL switch where the L3Out is located, use the following guidelines:
• The VRF Policy Control Enforcement Direction must be set to Egress.
• The VRF Policy Control Enforcement Preference must be set to Enabled.
• When configuring the contract that controls communication between the EPGs using the L3Out, include the QoS class or Target DSCP in the contract or subject of the contract.
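The BGP maximum prefix limit described in the table above is configured as a BGP peer prefix policy and referenced from the BGP peer connectivity profile. The following is a sketch with assumed object names; the peer address and AS reuse the examples later in this guide, and the values shown implement the default limit of 20,000 prefixes with the reject action.

<fvTenant name="t1">
  <!-- Reject new prefixes after 20,000 are received from the peer -->
  <bgpPeerPfxPol name="pfxPol1" maxPfx="20000" action="reject"/>
  <l3extOut name="l3out1">
    <l3extLNodeP name="nodep1">
      <bgpPeerP addr="15.15.15.2">
        <!-- Reference the prefix policy from the peer connectivity profile -->
        <bgpRsPeerPfxPol tnBgpPeerPfxPolName="pfxPol1"/>
      </bgpPeerP>
    </l3extLNodeP>
  </l3extOut>
</fvTenant>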
In this example, the Cisco ACI fabric has three leaf switches and two spine switches that are controlled by an APIC cluster. The nonborder leaf switches (101 and 102) are connected to a web server and a database server.
The border leaf switch (103) has an L3Out on it providing connection to a router and thus to the Internet. The
goal of this example is to enable the web server to communicate through the L3Out on the border leaf switch
to an endpoint (EP) on the Internet.
In this example, the tenant that is associated with the L3Out is t1, with VRF v1, and L3Out external EPG,
extnw1.
Before configuring an L3Out, configure the node, port, functional profile, AEP, and Layer 3 domain. You
must also configure the spine switches 104 and 105 as BGP route reflectors.
Configuring the L3Out includes defining the following components:
1. Tenant and VRF
2. Node and interface on leaf 103
3. Primary routing protocol (used to exchange routes between border leaf switch and external routers; in
this example, BGP)
4. Connectivity routing protocol (provides reachability information for the primary protocol; in this example,
OSPF)
5. External EPG
6. Route map
7. Bridge domain
8. At least one application EPG on node 101
9. Filters and contracts
10. Associate the contracts with the EPGs
The following table lists the names that are used in the examples in this chapter:
Tenant: t1
VRF: v1
Configuring Layer 3 Outside for Tenant Networks Using the REST API
The external routed network that is configured in the example can also be extended to support both IPv4 and
IPv6. Both IPv4 and IPv6 routes can be advertised to and learned from the external routed network. To
configure an L3Out for a tenant network, send a post with XML such as the example.
This example is broken into steps for clarity. For a merged example, see REST API Example: L3Out, on page
32.
For an XML example of these prerequisites, see REST API Example: L3Out Prerequisites, on page 31.
Procedure
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
</l3extOut>
<rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule1"/>
</rtctrlCtxP>
</rtctrlProfile>
<l3extInstP name="extnw1">
<l3extSubnet ip="20.20.20.0/24" scope="import-security"/>
<l3extRsInstPToProfile direction='export' tnRtctrlProfileName="rp1"/>
<fvRsProv tnVzBrCPName="httpCtrct"/>
</l3extInstP>
</l3extOut>
</fvTenant>
Step 8 This example creates filters and contracts to enable the EPGs to communicate. The external EPG and the
application EPG are already associated with the contract httpCtrct as provider and consumer respectively.
The scope of the contract (where it is applied) can be within the application profile, the tenant, the VRF, or
it can be used globally (throughout the fabric). In this example, the scope is the VRF (context).
Example:
<vzFilter name="http-filter">
<vzEntry name="http-e" etherT="ip" prot="tcp"/>
</vzFilter>
<vzBrCP name="httpCtrct" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
</infraAccPortGrp>
</infraFuncP>
<infraAttEntityP name="aeP1">
<infraRsDomP tDn="uni/phys-dom1"/>
<infraRsDomP tDn="uni/l3dom-dom1/>
</infraAttEntityP>
<fvnsVlanInstP name="vlan-1024-2048" allocMode="static">
<fvnsEncapBlk name="encap" from="vlan-1024" to="vlan-2048" status="created"/>
</fvnsVlanInstP>
</infraInfra>
<physDomP dn="uni/phys-dom1" name="dom1">
<infraRsVlanNs tDn="uni/infra/vlanns-[vlan-1024-2048]-static"/>
</physDomP>
<l3extDomP name="dom1">
<infraRsVlanNs tDn="uni/infra/vlanns-[vlan-1024-2048]-static" />
</l3extDomP>
</polUni>
</fvAEPg>
</fvAp>
<l3extOut name="l3out1">
<l3extRsEctx tnFvCtxName="v1"/>
<l3extLNodeP name="nodep1">
<l3extRsNodeL3OutAtt rtrId="11.11.11.103" tDn="topology/pod-1/node-103"/>
<l3extLIfP name="ifp1">
<l3extRsPathL3OutAtt addr="12.12.12.3/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-103/pathep-[eth1/3]"/>
</l3extLIfP>
<bgpPeerP addr="15.15.15.2">
<bgpAsP asn="100"/>
</bgpPeerP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
<bgpExtP/>
<ospfExtP areaId="0.0.0.0" areaType="regular"/>
<l3extInstP name="extnw1" >
<l3extSubnet ip="20.20.20.0/24" scope="import-security"/>
<l3extRsInstPToProfile direction="export" tnRtctrlProfileName="rp1"/>
<fvRsProv tnVzBrCPName="httpCtrct"/>
</l3extInstP>
<rtctrlProfile name="rp1">
<rtctrlCtxP name="ctxp1" action="permit" order="0">
<rtctrlScope>
<rtctrlRsScopeToAttrP tnRtctrlAttrPName="attrp1"/>
</rtctrlScope>
<rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule1"/>
</rtctrlCtxP>
</rtctrlProfile>
</l3extOut>
<rtctrlSubjP name="match-rule1">
<rtctrlMatchRtDest ip="200.3.2.0/24"/>
</rtctrlSubjP>
<rtctrlAttrP name="attrp1">
<rtctrlSetASPath criteria="prepend">
<rtctrlSetASPathASN asn="100" order="2"/>
<rtctrlSetASPathASN asn="200" order="1"/>
</rtctrlSetASPath>
</rtctrlAttrP>
<vzFilter name='http-filter'>
<vzEntry name="http-e" etherT="ip" prot="tcp"/>
</vzFilter>
<vzBrCP name="httpCtrct" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
</fvTenant>
</polUni>
Configuring a Layer 3 Outside for Tenant Networks Using the NX-OS Style CLI
These steps describe how to configure a Layer 3 outside network for tenant networks. This example shows
how to deploy a node and L3 port for tenant VRF external L3 connectivity using the NX-OS CLI.
This example is broken into steps for clarity. For a merged example, see NX-OS Style CLI Example: L3Out,
on page 37.
For an example using the commands for these prerequisites, see NX-OS Style CLI Example: L3Out
Prerequisites, on page 37.
Procedure
Example:
apic1(config)# leaf 103
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# router-id 11.11.11.103
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# interface ethernet 1/3
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member tenant t1 vrf v1
apic1(config-leaf-if)# ip address 12.12.12.3/24
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config-leaf-bgp-vrf)#exit
apic1(config-leaf-bgp)# exit
apic1(config-leaf)# exit
apic1(config)# bgp-fabric
apic1(config-bgp-fabric)# asn 100
apic1(config-bgp-fabric)# route-reflector spine 104,105
apic1(config-leaf-ospf)# exit
apic1(config-leaf)# exit
apic1(config)# tenant t1
apic1(config-tenant)# external-l3 epg extnw1
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# match ip 20.20.20.0/24
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 103
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# external-l3 epg extnw1
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# template route group match-rule1 tenant t1
apic1(config-route-group)# ip prefix permit 200.3.2.0/24
apic1(config-route-group)# exit
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# route-map rp1
apic1(config-leaf-vrf-route-map)# match route group match-rule1 order 0
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# router bgp 100
apic1(config-leaf-bgp)# vrf member tenant t1 vrf v1
apic1(config-leaf-bgp-vrf)# neighbor 15.15.15.2
apic1(config-leaf-bgp-vrf-neighbor)# route-map rp1 in
apic1(config-leaf-bgp-vrf-neighbor)#exit
apic1(config-leaf-bgp-vrf)# exit
apic1(config-leaf-bgp)# exit
apic1(config-leaf)# exit
apic1(config)# tenant t1
apic1(config-tenant)# bridge-domain bd1
apic1(config-tenant-bd)# vrf member v1
apic1(config-tenant-bd)# exit
apic1(config-tenant)# interface bridge-domain bd1
apic1(config-tenant-interface)# ip address 44.44.44.1/24 scope public
apic1(config-tenant-interface)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# route-map map1
apic1(config-leaf-vrf-route-map)# match bridge-domain bd1 tenant t1
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# exit
apic1(config)# tenant t1
apic1(config-tenant)# application app1
apic1(config-tenant-app)# epg epg1
apic1(config-tenant-app-epg)# bridge-domain member bd1
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/3
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# switchport trunk allowed vlan 2011 tenant t1 application app1 epg epg1
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# exit
Procedure
Step 1 To create the tenant and VRF, on the menu bar, choose Tenants > Add Tenant and in the Create Tenant
dialog box, perform the following tasks:
a) In the Name field, enter the tenant name.
b) In the VRF Name field, enter the VRF name.
c) Click Submit.
Step 2 To create a bridge domain, in the Navigation pane, expand Tenant and Networking and perform the following
steps:
a) Right-click Bridge Domains and choose Create Bridge Domain.
b) In the Name field, enter a name for the bridge domain (BD).
c) (Optional) Click the box for Advertise Host Routes to enable advertisement to all deployed border
leafs.
d) In the VRF field, from the drop-down list, choose the VRF you created (v1 in this example).
e) Click Next.
f) Click the + icon on Subnets.
g) In the Gateway IP field, enter the subnet for the BD.
h) In the Scope field, choose Advertised Externally.
Add the L3 Out for Route Profile later, after you create it.
Note If Advertise Host Routes is enabled, the route-map will also match all host routes.
i) Click OK.
j) Click Next and click Finish.
Step 3 To create an application EPG, perform the following steps:
a) Right-click Application Profiles and choose Create Application Profile.
b) Enter a name for the application.
c) Click the + icon for EPGs.
d) Enter a name for the EPG.
e) From the BD drop-down list, choose the bridge domain you previously created.
f) Click Update.
g) Click Submit.
Step 4 To start creating the L3Out, on the Navigation pane, expand Tenant and Networking and perform the
following steps:
a) Right-click External Routed Networks and choose Create Routed Outside.
b) In the Name field, enter a name for the L3Out.
c) From the VRF drop-down list, choose the VRF.
d) From the External Routed Domain drop-down list, choose the external routed domain that you
previously created.
e) In the area with the routing protocol check boxes, check the desired protocols (BGP, OSPF, or EIGRP).
For the example in this chapter, choose BGP and OSPF.
Depending on the protocols you choose, enter the properties that must be set.
f) Enter the OSPF details, if you enabled OSPF.
For the example in this chapter, use the OSPF area 0 and type Regular area.
g) Click + to expand Nodes and Interfaces Protocol Profiles.
h) In the Name field, enter a name.
i) Click + to expand Nodes.
j) From the Node ID field drop-down menu, choose the node for the L3Out.
For the topology in these examples, use node 103.
k) In the Router ID field, enter the router ID (IPv4 or IPv6 address for the router that is connected to the
L3Out).
l) (Optional) You can configure another IP address for a loopback address. Uncheck Use Router ID as
Loopback Address, expand Loopback Addresses, enter an IP address, and click Update.
m) In the Select Node dialog box, click OK.
Step 5 If you enabled BGP, click the + icon to expand BGP Peer Connectivity Profiles and perform the following
steps:
a) In the Peer Address field, enter the BGP peer address.
b) In the Local-AS Number field, enter the BGP AS number.
For the example in this chapter, use the BGP peer address 15.15.15.2 and ASN number 100.
c) Click OK.
Step 6 Click + to expand Interface Profiles (OSPF Interface Profiles if you enabled OSPF), and perform the
following actions:
a) In the Name field, enter a name for the interface profile.
b) Click Next.
c) In the Protocol Profiles dialog box, in the OSPF Policy field, choose an OSPF policy.
d) Click Next.
e) Click the + icon to expand Routed Interfaces.
f) In the Select Routed Interface dialog box, from the Node drop-down list, choose the node.
g) From the Path drop-down list, choose the interface path.
h) In the IPv4 Primary/IPv6 Preferred Address field, enter the IP address and network mask for the
interface.
Note To configure IPv6, you must enter the link-local address in the Link-local Address field.
c) (Optional) In the Route Summarization Policy field, from the drop-down list, choose an existing route
summarization policy or create a new one as desired. Also click the check box for Export Route Control
Subnet.
The type of route summarization policy depends on the routing protocols that are enabled for the L3Out.
d) Click the + icon to expand Route Control Profile.
e) In the Name field, choose the route control profile that you previously created from the drop-down list.
f) In the Direction field, choose Route Export Policy.
g) Click Update.
h) In the Create Subnet dialog box, click OK.
i) (Optional) Repeat to add more subnets.
j) In the Create External Network dialog box, click OK.
Step 15 In the Create Routed Outside dialog box, click Finish.
Step 16 In the Navigation pane, under Tenant_name > Networking expand Bridge Domains.
Note If the L3Out is static, you are not required to choose any BD settings.
Step 19 Choose Create Route Map/Profile, and in the Create Route Map/Profile dialog box, perform the following
actions:
a) From the drop-down list on the Name field, choose default-import.
b) In the Type field, you must click Match Routing Policy Only. Click Submit.
Step 20 (Optional) To enable extra communities to use BGP, use the following steps:
a) Right-click Set Rules for Route Maps, and click Create Set Rules for a Route Map.
b) In the Create Set Rules for a Route Map dialog box, click the Add Communities field, and follow the
steps to assign multiple BGP communities per route prefix.
Step 21 To enable communications between the EPGs consuming the L3Out, create at least one filter and contract,
using the following steps:
a) In the Navigation pane, under the tenant consuming the L3Out, expand Contracts.
b) Right-click Filters and choose Create Filter.
c) In the Name field, enter a filter name.
A filter is essentially an Access Control List (ACL).
d) Click the + icon to expand Entries, and add a filter entry.
e) Add the Entry details.
For example, for a simple web filter, set criteria such as the following:
• EtherType—IP
• IP Protocol—tcp
• Destination Port Range From—Unspecified
• Destination Port Range To—https
f) Click Update.
g) In the Create Filter dialog box, click Submit.
Step 22 To add a contract, use the following steps:
a) Under Contracts, right-click Standard and choose Create Contract.
b) Enter the name of the contract.
c) Click the + icon to expand Subjects to add a subject to the contract.
d) Enter a name for the subject.
e) Click the + icon to expand Filters and choose the filter that you previously created, from the drop-down
list.
f) Click Update.
g) In the Create Contract Subject dialog box, click OK.
h) In the Create Contract dialog box, click Submit.
Step 23 Associate the EPGs for the L3Out with the contract, with the following steps:
In this example, the L3 external EPG (extnw1) is the provider and the application EPG (epg1) is the consumer.
a) To associate the contract to the L3 external EPG, as the provider, under the tenant, click Networking,
expand External Routed Networks, and expand the L3Out.
b) Expand Networks, click the L3 external EPG, and click Contracts.
c) Click the + icon to expand Provided Contracts.
d) In the Name field, choose the contract that you previously created from the list.
e) Click Update.
f) To associate the contract to an application EPG, as a consumer, under the tenant, navigate to Application
Profiles > app-prof-name > Application EPGs > and expand the app-epg-name.
g) Right-click Contracts, and choose Add Consumed Contract.
h) In the Contract field, choose the contract that you previously created.
i) Click Submit.
Note Layer 3 routed and sub-interface port channels on border leaf switches are supported only on new generation
switches, which are switch models with "EX", "FX" or "FX2" at the end of the switch name.
Note The procedures in this section are meant specifically for configuring port channels as a prerequisite to the
procedures for configuring a Layer 3 routed or sub-interface port channel. For general instructions on
configuring leaf switch port channels, refer to the Cisco APIC Basic Configuration Guide.
• The ACI fabric is installed, APIC controllers are online, and the APIC cluster is formed and healthy.
• An APIC fabric administrator account is available that will enable creating the necessary fabric
infrastructure configurations.
• The target leaf switches are registered in the ACI fabric and available.
Procedure
Step 1 On the APIC menu bar, navigate to Fabric > External Access Policies > Quick Start, and click Configure
Interface, PC, and VPC.
Step 2 In the Configure Interface, PC, and VPC work area, click the large + to select switches to configure.
Step 3 In the Switches section, select a switch ID from the drop-down list of available switch IDs.
Step 4 Click the large + to configure switch interfaces.
Step 5 In the Interface Type field, specify PC as the interface type to use.
Step 6 In the Interfaces field, specify the interface IDs to use.
Step 7 (Optional) In the Interface Selector Name field, enter a unique interface selector name, if desired.
Step 8 In the Interface Policy Group area, specify the interface policies to use. For example, click the Port Channel
Policy drop-down arrow to choose an existing port channel policy or to create a new port channel policy.
Note • Choosing to create a port channel policy displays the Create Port Channel Policy dialog box
where you can specify the policy details and enable features such as symmetric hashing. Also
note that choosing the Symmetric hashing option displays the Load Balance Hashing field,
which enables you to configure the hash tuple. However, only one customized hashing option can
be applied on the same leaf switch.
• Symmetric hashing is not supported on the following switches:
• Cisco Nexus 93128TX
• Cisco Nexus 9372PX
• Cisco Nexus 9372PX-E
• Cisco Nexus 9372TX
• Cisco Nexus 9372TX-E
• Cisco Nexus 9396PX
• Cisco Nexus 9396TX
Step 9 In the Attached Device Type field, select the External Routed Devices option.
Step 10 In the Domain field, create a domain or choose one to assign to the interface.
Step 11 If you choose to create a domain, in the VLAN field, select from existing VLAN pools or create a new VLAN
range to assign to the interface.
Step 12 Click Save to update the policy details, then click Submit to submit the switch profile to the APIC.
The APIC creates the switch profile, along with the interface, selector, and attached device type policies.
What to do next
Configure a Layer 3 routed port channel or a Layer 3 sub-interface port channel using the GUI.
Procedure
Step 1 On the APIC menu bar, navigate to Tenants > Tenant > Networking > External Routed Networks >
L3Out > Logical Node Profiles > node > Logical Interface Profiles.
Step 2 Select the interface that you want to configure. The Logical Interface Profile page for that interface opens.
Step 3 Click on Routed Interfaces. The Properties page opens.
Step 4 Click on the Create (+) button to configure the Layer 3 routed port-channel. The Select Routed Interface
page opens.
Step 5 In the Path Type field, select Direct Port Channel.
Step 6 In the Path field, select the port channel that you created previously from the drop-down list. This is the path
to the port channel end points for the interface profile.
Step 7 In the Description field, enter a description of the routed interface.
Step 8 In the IPv4 Primary / IPv6 Preferred Address field, enter the primary IP addresses of the path attached to
the Layer 3 outside profile.
Step 9 In the IPv6 DAD field, select disabled or enabled.
See "Configuring IPv6 Neighbor Discovery Duplicate Address Detection" for more information for this field.
Step 10 In the IPv4 Secondary / IPv6 Additional Addresses field, enter the secondary IP addresses of the path
attached to the Layer 3 outside profile.
See "Configuring IPv6 Neighbor Discovery Duplicate Address Detection" for more information for the IPv6
DAD field in the Create Secondary IP Address screen.
Step 11 Check the ND RA Prefix box if you wish to enable a Neighbor Discovery Router Advertisement prefix for
the interface. The ND RA Prefix Policy option appears.
When this is enabled, the routed interface is available for auto configuration and the prefix is sent to the host
for auto-configuration.
While ND RA Interface policies are deployed under BDs and/or Layer 3 Outs, ND prefix policies are deployed
for individual subnets. The ND prefix policy is on a subnet level.
The ND RA Prefix applies only to IPv6 addresses.
Step 12 If you checked the ND RA Prefix box, select the ND RA Prefix policy that you want to use. You can select
the default policy or you can choose to create your own ND RA prefix policy. If you choose to create your
own policy, the Create ND RA Prefix Policy screen appears:
a) In the Name field, enter the Router Advertisement (RA) name for the prefix policy.
b) In the Description field, enter a description of the prefix policy.
c) In the Controller State field, check the desired check boxes for the controller administrative state. More
than one can be specified. The default is Auto Configuration and On link.
d) In the Valid Prefix Lifetime field, choose the desired value for the length of time that you want the prefix
to be valid. The range is from 0 to 4294967295 seconds. The default is 2592000.
e) In the Preferred Prefix Lifetime field, choose the desired value for the preferred lifetime of the prefix.
The range is from 0 to 4294967295 seconds. The default is 604800.
f) Click Submit.
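If you prefer to create an equivalent ND RA prefix policy through the REST API, a minimal sketch such as the following could be posted under the tenant. The ndPfxPol class name and its ctrl, lifetime, and prefLifetime attributes are assumptions based on the GUI fields above, so verify them against your APIC object model:
<fvTenant name="t1">
    <!-- ND RA prefix policy; the name and attribute values are illustrative.
         ctrl corresponds to the Controller State check boxes (Auto Configuration, On link);
         lifetime and prefLifetime correspond to the Valid and Preferred Prefix Lifetime fields. -->
    <ndPfxPol name="ndRaPfxPol1" ctrl="auto-cfg,on-link" lifetime="2592000" prefLifetime="604800"/>
</fvTenant>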
Step 13 In the MAC Address field, enter the MAC address of the path attached to the Layer 3 outside profile.
Step 14 In the MTU (bytes) field, set the maximum transmission unit of the external network. The range is 576 to 9216.
To inherit the value, enter inherit in the field.
Step 15 In the Target DSCP field, select the target differentiated services code point (DSCP) of the path attached to
the Layer 3 outside profile from the drop-down list.
Step 16 In the Link-local Address field, enter an IPv6 link-local address. This is the override of the system-generated
IPv6 link-local address.
Step 17 Click Submit.
Step 18 Determine if you want to configure Layer 3 Multicast for this port channel.
To configure Layer 3 Multicast for this port channel:
a) On the APIC menu bar, navigate to the Layer 3 Out that you selected for this port channel (Tenants >
Tenant > Networking > External Routed Networks > L3Out).
b) Click on the Policy tab to access the Properties screen for the Layer 3 Out.
c) In the Properties screen for the Layer 3 Out, scroll down to the PIM field, then click the check box next
to that field to enable PIM.
This enables PIM on all interfaces under the Layer 3 Out, including this port channel.
d) Configure PIM on the external router.
You have to have a PIM session from the external router to the port channel. Refer to the documentation
that you received with the external router for instructions on configuring PIM on your external router.
e) Map the port channel L3 Out to a VRF that has Multicast enabled.
See IP Multicast, on page 197 for those instructions. Note the following:
• You will select a specific VRF that has Multicast enabled as part of this port channel L3 Out to VRF
mapping process. In the Multicast screen for that VRF, if you do not see the L3 Out for this port
channel when you try to select an L3 Out in the Interfaces area, go back to the L3 Out for this port
channel, go to the Policy tab, select the appropriate VRF, then click Submit and Submit Changes.
The L3 Out for this port channel should now be available in the Multicast screen for that VRF.
• You have to configure a Rendezvous Point (RP) for Multicast, an IP address that is external to the
fabric. You can specify static RP, auto RP, fabric RP, or bootstrap router for the RP. For example,
if you choose static RP, the IP address would be present on the external router, and APIC will learn
this IP address through the L3 Out. See IP Multicast, on page 197 for more information.
Procedure
Step 1 On the APIC menu bar, navigate to Tenants > Tenant > Networking > External Routed Networks >
L3Out > Logical Node Profiles > node > Logical Interface Profiles.
Step 2 Select the interface that you want to configure. The Logical Interface Profile page for that interface opens.
Step 3 Click on Routed Sub-interfaces. The Properties page opens.
Step 4 Click on the Create (+) button to configure the Layer 3 routed sub-interface port-channel. The Select Routed
Sub-Interface page opens.
Step 5 In the Path Type field, select Direct Port Channel.
Step 6 In the Path field, select the port channel that you created previously from the drop-down list. This is the path
to the port channel end points for the interface profile.
Step 7 In the Description field, enter a description of the routed interface.
Step 8 In the Encap field, select VLAN from the drop-down menu. This is the encapsulation of the path attached to
the Layer 3 outside profile. Enter an integer value for this entry.
Step 9 In the IPv4 Primary / IPv6 Preferred Address field, enter the primary IP addresses of the path attached to
the Layer 3 outside profile.
Step 10 In the IPv6 DAD field, select disabled or enabled.
See "Configuring IPv6 Neighbor Discovery Duplicate Address Detection" for more information for this field.
Step 11 In the IPv4 Secondary / IPv6 Additional Addresses field, enter the secondary IP addresses of the path
attached to the Layer 3 outside profile.
See "Configuring IPv6 Neighbor Discovery Duplicate Address Detection" for more information for the IPv6
DAD field in the Create Secondary IP Address screen.
Step 12 Check the ND RA Prefix box if you wish to enable a Neighbor Discovery Router Advertisement prefix for
the interface. The ND RA Prefix Policy option appears.
When this is enabled, the routed interface is available for auto configuration and the prefix is sent to the host
for auto-configuration.
While ND RA Interface policies are deployed under BDs and/or Layer 3 Outs, ND prefix policies are deployed
for individual subnets. The ND prefix policy is on a subnet level.
The ND RA Prefix applies only to IPv6 addresses.
Step 13 If you checked the ND RA Prefix box, select the ND RA Prefix policy that you want to use. You can select
the default policy or you can choose to create your own ND RA prefix policy. If you choose to create your
own policy, the Create ND RA Prefix Policy screen appears:
a) In the Name field, enter the Router Advertisement (RA) name for the prefix policy.
b) In the Description field, enter a description of the prefix policy.
c) In the Controller State field, check the desired check boxes for the controller administrative state. More
than one can be specified. The default is Auto Configuration and On link.
d) In the Valid Prefix Lifetime field, choose the desired value for the length of time that you want the prefix
to be valid. The range is from 0 to 4294967295 seconds. The default is 2592000.
e) In the Preferred Prefix Lifetime field, choose the desired value for the preferred lifetime of the prefix.
The range is from 0 to 4294967295 seconds. The default is 604800.
f) Click Submit.
Step 14 In the MAC Address field, enter the MAC address of the path attached to the Layer 3 outside profile.
Step 15 In the MTU (bytes) field, set the maximum transmission unit of the external network. The range is 576 to 9216.
To inherit the value, enter inherit in the field.
Step 16 In the Link-local Address field, enter an IPv6 link-local address. This is the override of the system-generated
IPv6 link-local address.
Verification: Use the CLI show int command on the leaf switches where the external switch is attached to
verify that the vPC is configured accordingly.
Procedure
Step 3 interface port-channel channel-name
Enters the interface configuration mode for the specified port channel.
Example:
apic1(config-leaf)# interface port-channel po1
Step 5 vrf member vrf-name tenant tenant-name
Associates this port channel to this virtual routing and forwarding (VRF) instance and L3 outside policy, where:
• vrf-name is the VRF name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
Example:
apic1(config-leaf-if)# vrf member v1 tenant t1
Step 6 vlan-domain member vlan-domain-name
Associates the port channel template with the previously configured VLAN domain.
Example:
apic1(config-leaf-if)# vlan-domain member dom1
Step 7 ip address ip-address/subnet-mask
Sets the IP address and subnet mask for the specified interface.
Example:
apic1(config-leaf-if)# ip address 10.1.1.1/24
Step 8 ipv6 address sub-bits/prefix-length preferred
Configures an IPv6 address based on an IPv6 general prefix and enables IPv6 processing on an interface, where:
• sub-bits is the subprefix bits and host bits of the address to be concatenated with the prefixes provided by the general prefix specified with the prefix-name argument. The sub-bits argument must be in the form documented in RFC 2373, where the address is specified in hexadecimal using 16-bit values between colons.
• prefix-length is the length of the IPv6 prefix: a decimal value that indicates how many of the high-order contiguous bits of the address comprise the prefix (the network portion of the address). A slash mark must precede the decimal value.
Example:
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
Step 11 mtu mtu-value
Sets the MTU for this class of service.
Example:
apic1(config-leaf-if)# mtu 1500
Example
This example shows how to configure a basic Layer 3 port channel.
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface port-channel po1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member v1 tenant t1
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# ip address 10.1.1.1/24
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
apic1(config-leaf-if)# ipv6 link-local fe80::1
apic1(config-leaf-if)# mac-address 00:44:55:66:55:01
apic1(config-leaf-if)# mtu 1500
Procedure
Step 3 vrf member vrf-name tenant tenant-name
Associates this port channel to this virtual routing and forwarding (VRF) instance and L3 outside policy, where:
• vrf-name is the VRF name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
• tenant-name is the tenant name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
Example:
apic1(config-leaf-if)# vrf member v1 tenant t1
Step 5 ip address ip-address/subnet-mask
Sets the IP address and subnet mask for the specified interface.
Example:
apic1(config-leaf-if)# ip address 10.1.1.1/24
Step 6 ipv6 address sub-bits/prefix-length preferred
Configures an IPv6 address based on an IPv6 general prefix and enables IPv6 processing on an interface, where:
• sub-bits is the subprefix bits and host bits of the address to be concatenated with the prefixes provided by the general prefix specified with the prefix-name argument. The sub-bits argument must be in the form documented in RFC 2373, where the address is specified in hexadecimal using 16-bit values between colons.
• prefix-length is the length of the IPv6 prefix: a decimal value that indicates how many of the high-order contiguous bits of the address comprise the prefix (the network portion of the address). A slash mark must precede the decimal value.
Example:
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
Step 9 mtu mtu-value
Sets the MTU for this class of service.
Example:
apic1(config-leaf-if)# mtu 1500
Step 12 vlan-domain member vlan-domain-name
Associates the port channel template with the previously configured VLAN domain.
Example:
apic1(config-leaf-if)# vlan-domain member dom1
Step 14 interface port-channel channel-name.number
Enters the interface configuration mode for the specified sub-interface port channel.
Example:
apic1(config-leaf)# interface port-channel po1.2001
Step 15 vrf member vrf-name tenant tenant-name
Associates this port channel to this virtual routing and forwarding (VRF) instance and L3 outside policy, where:
• vrf-name is the VRF name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
• tenant-name is the tenant name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
Example:
apic1(config-leaf-if)# vrf member v1 tenant t1
Example
This example shows how to configure a basic Layer 3 sub-interface port-channel.
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface vlan 2001
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member v1 tenant t1
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# ip address 10.1.1.1/24
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
apic1(config-leaf-if)# ipv6 link-local fe80::1
Procedure
Step 3 interface Ethernet slot/port
Enters interface configuration mode for the interface you want to configure.
Example:
apic1(config-leaf)# interface Ethernet 1/1-2
Example
This example shows how to add ports to a Layer 3 port-channel.
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface Ethernet 1/1-2
apic1(config-leaf-if)# channel-group po1
Note The procedures in this section are meant specifically for configuring port channels as a prerequisite to the
procedures for configuring a Layer 3 routed or sub-interface port channel. For general instructions on
configuring leaf switch port channels, refer to the Cisco APIC Basic Configuration Guide or Cisco APIC
Layer 2 Networking Configuration Guide.
• The ACI fabric is installed, APIC controllers are online, and the APIC cluster is formed and healthy.
• An APIC fabric administrator account is available that will enable creating the necessary fabric
infrastructure configurations.
• The target leaf switches are registered in the ACI fabric and available.
Note In the following REST API example, long single lines of text are broken up with the \ character to improve
readability.
Procedure
To configure a port channel using the REST API, send a post with XML such as the following:
Example:
<polUni>
<infraInfra dn="uni/infra">
<infraNodeP name="test1">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk" from_="101" to_="101"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test1"/>
</infraNodeP>
<infraAccPortP name="test1">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1" fromCard="1" toCard="1" fromPort="18" \
toPort="19"/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-po17_PolGrp"/>
</infraHPortS>
</infraAccPortP>
<infraFuncP>
<infraAccBndlGrp name="po17_PolGrp" lagT="link">
<infraRsHIfPol tnFabricHIfPolName="default"/>
<infraRsCdpIfPol tnCdpIfPolName="default"/>
<infraRsLacpPol tnLacpLagPolName="default"/>
</infraAccBndlGrp>
</infraFuncP>
</infraInfra>
</polUni>
What to do next
Configure a Layer 3 routed port channel or sub-interface port channel using the REST API.
Note In the following REST API example, long single lines of text are broken up with the \ character to improve
readability.
Procedure
To configure a Layer 3 route to the port channels that you created previously using the REST API, send a
post with XML such as the following:
Example:
<polUni>
<fvTenant name="pep9">
<l3extOut descr="" dn="uni/tn-pep9/out-routAccounting" enforceRtctrl="export" \
name="routAccounting" nameAlias="" ownerKey="" ownerTag="" \
targetDscp="unspecified">
<l3extRsL3DomAtt tDn="uni/l3dom-Dom1"/>
<l3extRsEctx tnFvCtxName="ctx9"/>
<l3extLNodeP configIssues="" descr="" name="node101" nameAlias="" ownerKey="" \
ownerTag="" tag="yellow-green" targetDscp="unspecified">
<l3extRsNodeL3OutAtt rtrId="10.1.0.101" rtrIdLoopBack="yes" \
tDn="topology/pod-1/node-101">
<l3extInfraNodeP descr="" fabricExtCtrlPeering="no" \
fabricExtIntersiteCtrlPeering="no" name="" nameAlias="" spineRole=""/>
</l3extRsNodeL3OutAtt>
<l3extLIfP descr="" name="lifp17" nameAlias="" ownerKey="" ownerTag="" \
tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="" nameAlias="">
<ospfRsIfPol tnOspfIfPolName=""/>
</ospfIfP>
<l3extRsPathL3OutAtt addr="10.1.5.3/24" autostate="disabled" descr="" \
encap="unknown" encapScope="local" ifInstT="l3-port" llAddr="::" \
Note In the following REST API example, long single lines of text are broken up with the \ character to improve
readability.
Procedure
To configure a Layer 3 sub-interface route to the port channels that you created previously using the REST
API, send a post with XML such as the following:
Example:
<polUni>
<fvTenant name="pep9">
L3Outs QoS
L3Out QoS can be configured using Contracts applied at the external EPG level. Starting with Release 4.0(1),
L3Out QoS can also be configured directly on the L3Out interfaces.
Note If you are running Cisco APIC Release 4.0(1) or later, we recommend using the custom QoS policies applied
directly to the L3Out to configure QoS for L3Outs.
Packets are classified using the ingress DSCP or CoS value so it is possible to use custom QoS policies to
classify the incoming traffic into Cisco ACI QoS queues. A custom QoS policy contains a table mapping the
DSCP/CoS values to the user queue and to the new DSCP/CoS value (in case of marking). If there is no
mapping for a specific DSCP/CoS value, the user queue is selected by the QoS priority setting of the ingress
L3Out interface if configured.
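As an illustration of the mapping table described above, a custom QoS policy can be sketched in REST form as follows. The qosCustomPol class name matches the relation shown later in this chapter (tnQosCustomPolName), while the qosDscpClass child class and its from, to, prio, and target attributes are assumptions, so treat this only as a sketch:
<fvTenant name="t1">
    <!-- Custom QoS policy: maps ingress DSCP ranges to an ACI QoS queue (prio)
         and optionally rewrites the DSCP value (target). Attribute names are assumptions. -->
    <qosCustomPol name="vrfQos002">
        <qosDscpClass from="EF" to="EF" prio="level1" target="EF"/>
        <qosDscpClass from="AF11" to="AF13" prio="level2" target="unspecified"/>
    </qosCustomPol>
</fvTenant>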
Starting with Release 4.0(1), a custom QoS setting can be configured directly on an L3Out and applied
to the traffic coming from the border leaf. As a result, the VRF does not need to be in egress mode.
• To enable the QoS policy to be enforced, the VRF Policy Control Enforcement Preference must be
"Enforced."
• When configuring the Contract that controls communication between the L3Out and other EPGs, include
the QoS class or target DSCP in the contract or subject.
Note Only configure a QoS class or target DSCP in the contract, not in the external
EPG (l3extInstP).
• When creating a contract subject, you must choose a QoS priority level. You cannot choose Unspecified.
Note The exception is a custom QoS policy, because a custom QoS policy sets the DSCP/CoS value
even if the QoS class is set to Unspecified. When the QoS level is Unspecified, the traffic is
placed in the Level 3 default queue by default; in that case, Unspecified is supported and valid.
• Starting with Release 4.0(1), QoS supports new levels 4, 5, and 6 configured under Global policies, EPG,
L3out, custom QoS, and Contracts. The following limitations apply:
• Number of classes that can be configured with Strict priority is up to 5.
• The 3 new classes are not supported with non-EX and non-FX switches.
• If traffic flows between non-EX or non-FX switches and EX or FX switches, the traffic will use
QoS level 3.
• For communicating with FEX for new classes, the traffic carries a Layer 2 COS value of 0.
• Starting with Release 4.0(1), you can configure QoS Class or create a Custom QoS Policy to apply on
an L3Out Interface.
Procedure
Step 1 From the main menu bar, select Tenants > <tenant-name>.
Step 2 In the left-hand navigation pane, expand Tenant <tenant-name> > Networking > External Routed
Networks > <routed-network-name> > Logical Node Profiles > <node-profile-name> > Logical Interface
Profiles > <interface-profile-name>.
You may need to create a new network, node profile, and interface profile if none exist.
Step 3 In the main window pane, configure custom QoS for your L3Out.
You can choose to configure a standard QoS priority level using the QoS Priority drop-down menu.
Alternatively, you can select an existing custom QoS policy, or create a new one, from the Custom QoS
Policy drop-down menu.
Procedure
Procedure
tDn="topology/pod-1/paths-102/pathep-[eth1/37]"/>
<l3extRsNdIfPol annotation="" tnNdIfPolName=""/>
<l3extRsLIfPCustQosPol tnQosCustomPolName="vrfQos002"/>
</l3extLIfP>
Note Starting with Release 4.0(1), we recommend using custom QoS policies for L3Out QoS as described in
Configuring QoS Directly on L3Out Using REST API, on page 64 instead.
Procedure
Step 1 When configuring the tenant, VRF, and bridge domain, configure the VRF for egress mode
(pcEnfDir="egress") with policy enforcement enabled (pcEnfPref="enforced"). Send a post with XML
similar to the following example:
Example:
<fvTenant name="t1">
<fvCtx name="v1" pcEnfPref="enforced" pcEnfDir="egress"/>
<fvBD name="bd1">
<fvRsCtx tnFvCtxName="v1"/>
<fvSubnet ip="44.44.44.1/24" scope="public"/>
<fvRsBDToOut tnL3extOutName="l3out1"/>
</fvBD>
</fvTenant>
Step 2 When creating the filters and contracts to enable the EPGs participating in the L3Out to communicate, configure
the QoS priority.
The contract in this example includes the QoS priority, level1, for traffic ingressing on the L3Out.
Alternatively, it could define a target DSCP value. QoS policies are supported on either the contract or the
subject.
The filter also has the matchDscp="EF" criterion, so that traffic with this specific tag received by the L3Out
is processed through the queue specified in the contract subject.
Note For QoS or custom QoS applied on the L3Out interface, VRF enforcement should be ingress. VRF
enforcement needs to be egress only when the QoS classification is done in the contract, for traffic
between an EPG and the L3Out, or between two L3Outs.
Note If QoS classification is set in the contract and VRF enforcement is egress, the contract QoS
classification overrides the L3Out interface QoS or custom QoS classification, so configure only
one of the two.
Example:
<vzFilter name="http-filter">
<vzEntry name="http-e" etherT="ip" prot="tcp" matchDscp="EF"/>
</vzFilter>
<vzBrCP name="httpCtrct" prio="level1" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
Note Starting with Release 4.0(1), we recommend using custom QoS policies for L3Out QoS as described in
Configuring QoS Directly on L3Out Using CLI, on page 63 instead.
Procedure
Step 1 Configure the VRF for egress mode and enable policy enforcement to support QoS priority enforcement on
the L3Out.
Example:
apic1# configure
apic1(config)# tenant t1
apic1(config-tenant)# vrf context v1
apic1(config-tenant-vrf)# contract enforce egress
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# exit
apic1(config)#
Example:
apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# match dscp EF
apic1(config-tenant-acl)# exit
apic1(config-tenant)# contract httpCtrct
apic1(config-tenant-contract)# scope vrf
Note Starting with Release 4.0(1), we recommend using custom QoS policies for L3Out QoS as described in
Configuring QoS Directly on L3Out Using GUI, on page 62 instead.
Procedure
Step 1 Configure the VRF instance for the tenant consuming the L3Out to support QoS to be enforced on the border
leaf switch that is used by the L3Out.
a) From the main menu bar, choose Tenants > <tenant-name>.
b) In the Navigation pane, expand Networking, right-click VRFs, and choose Create VRF.
c) Enter the name of the VRF.
d) In the Policy Control Enforcement Preference field, choose Enforced.
For QoS or custom QoS applied on the L3Out interface, VRF enforcement should be ingress. VRF
enforcement needs to be egress only when the QoS classification is done in the contract, for traffic
between an EPG and the L3Out, or between two L3Outs.
e) In the Policy Control Enforcement Direction field, choose Egress.
This setting is not required when the QoS classification is applied directly on the L3Out interface; see
the note above.
f) Complete the VRF configuration according to the requirements for the L3Out.
Step 2 When configuring filters for contracts to enable communication between the EPGs consuming the L3Out,
include a QoS class or target DSCP to enforce the QoS priority in traffic ingressing through the L3Out.
a) On the Navigation pane, under the tenant that will consume the L3Out, expand Contracts, right-click
Filters and choose Create Filter.
b) In the Name field, enter a filter name.
c) In the Entries field, click + to add a filter entry.
d) Add the Entry details, click Update and Submit.
e) Expand the previously created filter and click on a filter entry.
f) Set the Match DSCP field to the desired DSCP level for the entry, for example, EF.
Step 3 Add a contract.
a) Under Contracts, right-click Standard and choose Create Contract.
can enable a router ID loopback which is enabled by default. If the router ID loopback is disabled, no
loopback is created for the specific Layer 3 outside on which it is deployed.
• This configuration task is applicable for iBGP and eBGP. If the BGP configuration is on a loopback
address then it can be an iBGP session or a multi-hop eBGP session. If the peer IP address is for a physical
interface where the BGP peer is defined, then the physical interface is used.
• The user must configure an IPv6 address to enable peering over loopback using IPv6.
• The local autonomous system feature can only be used for eBGP peers. It enables a router to appear to be a
member of a second autonomous system (AS), in addition to its real AS. Local AS allows two ISPs to
merge without modifying peering arrangements. Routers in the merged ISP become members of the new
autonomous system but continue to use their old AS numbers for their customers.
• Starting with release 1.2(1x), tenant networking protocol policies for BGP l3extOut connections can be
configured with a maximum prefix limit that enables monitoring and restricting the number of route
prefixes received from a peer. Once the max prefix limit is exceeded, a log entry can be recorded, further
prefixes can be rejected, the connection can be restarted if the count drops below the threshold in a fixed
interval, or the connection is shut down. Only one option can be used at a time. The default setting is a
limit of 20,000 prefixes, after which new prefixes are rejected. When the reject option is deployed, BGP
accepts one more prefix beyond the configured limit and the APIC raises a fault.
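A rough REST sketch of such a prefix limit policy is shown below. The bgpPeerPfxPol class name and its maxPfx, action, and thresh attributes are assumptions rather than text from this guide, so verify them against your APIC object model before use:
<fvTenant name="t1">
    <!-- BGP peer prefix policy: reject new prefixes after 20000 are received,
         and log a warning when 75 percent of the limit is reached.
         Class and attribute names are assumptions. -->
    <bgpPeerPfxPol name="pfxPol1" maxPfx="20000" action="reject" thresh="75"/>
</fvTenant>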
Note ACI does not support IP fragmentation. Therefore, when you configure Layer 3
Outside (L3Out) connections to external routers, or multipod connections through
an Inter-Pod Network (IPN), it is critical that the MTU is set appropriately on
both sides. On some platforms, such as ACI, Cisco NX-OS, and Cisco IOS, the
configurable MTU value takes into account the IP headers (resulting in a max
packet size to be set as 9216 bytes for ACI and 9000 for NX-OS and IOS).
However, other platforms such as IOS-XR configure the MTU value exclusive
of packet headers (resulting in a max packet size of 8986 bytes).
For the appropriate MTU values for each platform, see the relevant configuration
guides.
We highly recommend that you test the MTU using CLI-based commands. For
example, on the Cisco NX-OS CLI, use a command such as ping 1.1.1.1 df-bit
packet-size 9000 source-interface ethernet 1/1.
Procedure
Step 1 In the Navigation pane, expand Tenant_name > Networking > External Routed Networks.
Step 2 Right-click, and click Create Routed Outside.
Step 3 In the Create Routed Outside dialog box, perform the following actions:
a) In the Name field, enter a name for the external routed network policy.
b) Click the BGP checkbox.
Note BGP peer reachability must be available in one of two ways. You must either configure static
routes or enable OSPF.
c) (Optional) In the Route Control Enforcement field, check the Import check box.
Note Check this check box if you wish to enforce import control with BGP.
d) From the VRF field drop-down list, choose the desired VRF.
e) Expand the Route Control for Dampening field, and choose the desired address family type and route
dampening policy. Click Update.
In this step, the route dampening policy can be created either as described in Step 4, or by choosing Create
Route Profile from the drop-down list where the policy name is selected.
f) Expand Nodes and Interfaces Protocol Policies.
g) In the Create Node Profile dialog box, enter a name for the node profile.
h) Expand Nodes.
i) From the Select Node dialog box, from the Node ID field drop-down list, choose a node.
j) In the Router ID field, enter the router ID.
k) Expand Loopback Address, and in the IP field, enter the IP address. Click Update.
Note Enter an IPv6 address. If you did not add the router ID in the earlier step, you can add an IPv4
address in the IP field.
l) Click OK.
Step 4 In the Navigation pane, expand Tenant_name > Networking > Route Profiles. Right-click Route Profiles,
and click Create Route Profile. In the Create Route Profile dialog box, perform the following actions:
a) In the Name field, enter a name for the route profile.
b) Expand the Create Route Control Context dialog box.
c) In the Name field, enter a name for the route control context.
d) From the Set Attribute drop-down list, choose Create Action Rule Profile.
When creating an action rule, set the route dampening attributes as desired.
Step 5 In the Create Interface Profiles dialog box, perform the following actions:
a) In the Name field, enter an interface profile name.
b) In the Interfaces area, choose the desired interface tab, and then expand the interface.
Step 6 In the Select Routed Interface dialog box, perform the following actions:
a) From the Path field drop-down list, choose the node and the interface.
b) In the IP Address field, enter the IP address.
Note Depending upon your requirements, you can add an IPv6 address or an IPv4 address.
c) (Optional) If you entered an IPv6 address in the earlier step, in the Link-local Address field, enter an
IPv6 address.
d) Expand BGP Peer Connectivity Profile field.
Step 7 In the Create Peer Connectivity Profile dialog box, perform the following actions:
a) In the Peer Address field, the dynamic neighbor feature is available. If desired, any peer
within a specified subnet can communicate or exchange routes with BGP.
Enter an IPv4 or an IPv6 address to correspond with the IPv4 or IPv6 addresses entered earlier in these
steps.
b) In the BGP Controls field, check the desired controls.
c) In the Autonomous System Number field, choose the desired value.
d) (Optional) In the Weight for routes from this neighbor field, choose the desired value.
e) (Optional) In the Private AS Control field, check the check box for Remove AS.
f) (Optional) In the Local Autonomous System Number Config field, choose the desired value.
Optionally required for the local autonomous system feature for eBGP peers.
g) (Optional) In the Local Autonomous System Number field, choose the desired value.
Optionally required for the local autonomous system feature for eBGP peers.
Note The value in this field must not be the same as the value in the Autonomous System Number
field.
h) Click OK.
Step 8 Perform the following actions:
a) In the Select Routed Interface dialog box, click OK.
d) In the Scope field, check the check boxes for Export Route Control Subnet, Import Route Control
Subnet, and Security Import Subnet. Click OK.
Note Check the Import Route Control Subnet check box if you wish to enforce import control with
BGP.
Configuring BGP External Routed Network Using the NX-OS Style CLI
Procedure
The following shows how to configure the BGP external routed network using the NX-OS CLI:
Example:
Procedure
Example:
</bgpPeerP>
</l3extRsPathL3OutAtt>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-l3-dom"/>
<l3extRsDampeningPol af="ipv6-ucast" tnRtctrlProfileName="damp_rp"/>
<l3extRsDampeningPol af="ipv4-ucast" tnRtctrlProfileName="damp_rp"/>
<l3extInstP descr="" matchT="AtleastOne" name="l3extInstP_1" prio="unspecified"
targetDscp="unspecified">
<l3extSubnet aggregate="" descr="" ip="130.130.130.0/24" name="" scope="import-rtctrl">
</l3extSubnet>
<l3extSubnet aggregate="" descr="" ip="130.130.131.0/24" name="" scope="import-rtctrl"/>
<l3extSubnet aggregate="" descr="" ip="120.120.120.120/32" name=""
scope="export-rtctrl,import-security"/>
<l3extSubnet aggregate="" descr="" ip="3001::130:130:130:100/120" name=""
scope="import-rtctrl"/>
</l3extInstP>
<bgpExtP descr=""/>
</l3extOut>
<rtctrlProfile descr="" dn="uni/tn-t1/prof-damp_rp" name="damp_rp" ownerKey="" ownerTag=""
type="combinable">
<rtctrlCtxP descr="" name="ipv4_rpc" order="0">
<rtctrlScope descr="" name="">
<rtctrlRsScopeToAttrP tnRtctrlAttrPName="act_rule"/>
</rtctrlScope>
</rtctrlCtxP>
</rtctrlProfile>
<rtctrlAttrP descr="" dn="uni/tn-t1/attr-act_rule" name="act_rule">
<rtctrlSetDamp descr="" halfLife="15" maxSuppressTime="60" name="" reuse="750"
suppress="2000" type="dampening-pol"/>
</rtctrlAttrP>
Procedure
Step 1 Log in to the APIC GUI, and on the menu bar, click Tenants > <Your_Tenant> > Networking > Protocol
Policies > BGP > BGP Address Family Context and right-click Create BGP Address Family Context
Policy.
Step 2 In the Create BGP Address Family Context Policy dialog box, perform the following tasks:
a) In the Name field, enter a name for the policy.
b) Click the eBGP Distance field and confirm the value for your implementation.
c) Click the iBGP Distance field and confirm the value for your implementation.
d) Click the Local Distance field and confirm the value for your implementation.
e) Click the eBGP Max ECMP field and confirm the value for your implementation.
f) Click the iBGP Max ECMP field and confirm the value for your implementation.
g) Click Submit after you have updated your entries.
Step 3 Click Tenants > <Your_Tenant> > Networking > VRFs > <your_VRF>
Step 4 Review the configuration details of the subject VRF.
Step 5 Access the BGP Context Per Address Family field and select IPv6 in the Address Family drop-down list.
Step 6 Access the BGP Address Family Context you created in the BGP Address Family Context drop-down list
and associate it with the subject VRF.
Step 7 Click Submit.
Example:
apic1(config)# leaf 101
apic1(config-leaf)# template bgp address-family newAf tenant t1
This template will be available on all nodes where tenant t1 has a VRF deployment
apic1(config-bgp-af)# maximum-paths ?
<1-16> Maximum number of equal-cost paths for load sharing. The default is 16.
ibgp Configure multipath for IBGP paths
apic1(config-bgp-af)# maximum-paths 10
apic1(config-bgp-af)# maximum-paths ibgp 8
apic1(config-bgp-af)# end
apic1#
no maximum-paths [ibgp]
Prepend Prepends the specified AS number to the AS path of the route matched by
the route map.
Note • You can configure more than one AS number.
• 4-byte AS numbers are supported.
• You can prepend a total of 32 AS numbers. You must specify
the order in which the AS number is inserted into the AS
Path attribute.
Prepend-last-as Prepends the last AS number to the AS path a specified number of times, with a range between 1 and 10.
The following table describes the selection criteria for implementation of AS Path Prepend:
Procedure
Step 1 Log in to the APIC GUI, and on the menu bar, click Tenants > <Your_Tenant> > Networking > External
Routed Networks > Set Rules for Route Maps and right-click Create Set Rules For A Route Map.
Step 2 In the Create Set Rules For A Route Map dialog box, perform the following tasks:
a) In the Name field, enter a name.
b) Click the Set AS Path icon to open the Create Set AS Path dialog box.
Step 3 Select the criterion Prepend AS to prepend AS numbers.
Step 4 Enter the AS number and its order and then click Update. Repeat if multiple AS numbers must be prepended.
Step 5 Select the criterion Prepend Last-AS to prepend the last AS number a specified number of times.
Step 6 Enter Count (1-10).
Step 7 On the Create Set Rules For A Route Map display, confirm the listed criteria for the set rule based on AS
Path and click Finish.
Step 8 On the APIC GUI menu bar, click Tenants > <Your_Tenant> > Networking > External Routed Networks >
Set Rules for Route Maps and right-click your profile.
Step 9 Confirm the Set AS Path values at the bottom of the screen.
Procedure
To modify the autonomous system path (AS Path) for Border Gateway Protocol (BGP) routes, you can use
the set as-path command. The set as-path command takes the following form:
apic1(config-leaf-vrf-template-route-profile)# set as-path {prepend as-num [ ,... as-num ]
| prepend-last-as num}
Example:
apic1(config)# leaf 103
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# template route-profile rp1
apic1(config-leaf-vrf-template-route-profile)# set as-path ?
prepend Prepend to the AS-Path
prepend-last-as Prepend last AS to the as-path
apic1(config-leaf-vrf-template-route-profile)# set as-path prepend 100, 101, 102, 103
apic1(config-leaf-vrf-template-route-profile)# set as-path prepend-last-as 8
apic1(config-leaf-vrf-template-route-profile)# exit
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# exit
What to do next
To disable AS Path prepend, use the no form of the shown command:
apic1(config-leaf-vrf-template-route-profile)# [no] set
as-path { prepend as-num [ ,... as-num ] | prepend-last-as num}
<l3extOut name="out1">
<rtctrlProfile name="rp1">
<rtctrlCtxP name="ctxp1" order="1">
<rtctrlScope>
<rtctrlRsScopeToAttrP tnRtctrlAttrPName="attrp1"/>
</rtctrlScope>
</rtctrlCtxP>
</rtctrlProfile>
</l3extOut>
</fvTenant>
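The fragment above references an action rule profile named attrp1 through rtctrlRsScopeToAttrP but does not show its contents. The following is only a sketch of what an AS-path prepend action rule might look like; the rtctrlSetASPath and rtctrlSetASPathASN class names and their criteria, lastnum, asn, and order attributes are assumptions patterned after the rtctrlSetDamp example shown earlier, so verify them before use:
<fvTenant name="t1">
    <!-- Action rule profile referenced by rtctrlRsScopeToAttrP as attrp1.
         Class and attribute names below are assumptions. -->
    <rtctrlAttrP name="attrp1">
        <!-- Prepend AS numbers 100 and 101, in that order -->
        <rtctrlSetASPath criteria="prepend">
            <rtctrlSetASPathASN asn="100" order="0"/>
            <rtctrlSetASPathASN asn="101" order="1"/>
        </rtctrlSetASPath>
        <!-- Prepend the last AS number 8 times -->
        <rtctrlSetASPath criteria="prepend-last-as" lastnum="8"/>
    </rtctrlAttrP>
</fvTenant>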
Figure 14: Example Topology Illustrating the Autonomous System Override Process
Router 1 and Router 2 are the two customers with multiple sites (Site-A and Site-B). Customer Router 1
operates under AS 100 and customer Router 2 operates under AS 200.
The above diagram illustrates the Autonomous System (AS) override process as follows:
1. Router 1-Site-A advertises route 10.3.3.3 with AS100.
2. Router PE-1 propagates this as an internal route to PE2 as AS100.
3. Router PE-2 prepends 10.3.3.3 with AS121 (replaces 100 in the AS path with 121), and propagates the
prefix.
4. Router 2-Site-B accepts the 10.3.3.3 update.
Configuring BGP External Routed Network with Autonomous System Override Enabled Using the
GUI
Procedure
Step 1 On the menu bar, choose Tenants > Tenant_name > Networking > External Routed Network > Non-GOLF
Layer 3 Out_name > Logical Node Profiles.
Step 2 In the Navigation pane, choose the appropriate BGP Peer Connectivity Profile.
Step 3 In the Work pane, under Properties for the BGP Peer Connectivity Profile, in the BGP Controls field,
perform the following actions:
a) Check the check box for the AS override field to enable the Autonomous System override function.
b) Check the check box for the Disable Peer AS Check field.
Note You must check the check boxes for AS override and Disable Peer AS Check for the AS
override feature to take effect.
Configuring BGP External Routed Network with Autonomous System Override Enabled Using the
REST API
Procedure
Configure the BGP External Routed Network with Autonomous override enabled.
Note The line of code that is in bold displays the BGP AS override portion of the configuration. This
feature was introduced in the Cisco APIC Release 3.1(2m).
Example:
<fvTenant name="coke">
<fvBD name="cokeBD">
<!-- Association from Bridge Domain to Private Network -->
<fvRsCtx tnFvCtxName="coke" />
<fvRsBDToOut tnL3extOutName="routAccounting" />
<!-- Subnet behind the bridge domain-->
<fvSubnet ip="20.1.1.1/16" scope="public"/>
<fvSubnet ip="2000:1::1/64" scope="public"/>
</fvBD>
<fvBD name="cokeBD2">
<!-- Association from Bridge Domain to Private Network -->
<fvRsCtx tnFvCtxName="coke" />
<fvRsBDToOut tnL3extOutName="routAccounting" />
<!-- Subnet behind the bridge domain-->
<fvSubnet ip="30.1.1.1/16" scope="public"/>
</fvBD>
<vzBrCP name="webCtrct" scope="global">
<vzSubj name="http">
<vzRsSubjFiltAtt tnVzFilterName="default"/>
</vzSubj>
</vzBrCP>
/>
<fvRsProv tnVzBrCPName="webCtrct"/>
</l3extInstP>
<l3extRsEctx tnFvCtxName="coke"/>
</l3extOut>
<fvAp name="cokeAp">
<fvAEPg name="cokeEPg" >
<fvRsBd tnFvBDName="cokeBD" />
<fvRsPathAtt tDn="topology/pod-1/paths-103/pathep-[eth1/20]" encap="vlan-100"
instrImedcy="immediate" mode="regular"/>
<fvRsCons tnVzBrCPName="webCtrct"/>
</fvAEPg>
<fvAEPg name="cokeEPg2" >
<fvRsBd tnFvBDName="cokeBD2" />
<fvRsPathAtt tDn="topology/pod-1/paths-103/pathep-[eth1/20]" encap="vlan-110"
instrImedcy="immediate" mode="regular"/>
<fvRsCons tnVzBrCPName="webCtrct"/>
</fvAEPg>
</fvAp>
</l3extRsNodeL3OutAtt>
<l3extLIfP name='portIfV4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/17]"
encap='vlan-1010' ifInstT='sub-interface' addr="20.1.12.2/24">
</l3extRsPathL3OutAtt>
</l3extLIfP>
<l3extLIfP name='portIfV6'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/17]"
encap='vlan-1010' ifInstT='sub-interface' addr="64:ff9b::1401:302/120">
/>
</l3extInstP>
<l3extRsEctx tnFvCtxName="coke"/>
</l3extOut>
</fvTenant>
On a node, the BGP timer policy is chosen based on the following algorithm:
• If bgpProtP is specified, then use bgpCtxPol referred to under bgpProtP.
• Else, if specified, use bgpCtxPol referred to under corresponding fvCtx.
• Else, if specified, use the default policy under the tenant, for example,
uni/tn-<tenant>/bgpCtxP-default.
• Else, use the default policy under tenant common, for example, uni/tn-common/bgpCtxP-default. This
one is pre-programmed.
Configuring a Per VRF Per Node BGP Timer Using the Advanced GUI
When a BGP timer is configured on a specific node, then the BGP timer policy on the node is used and the
BGP policy timer associated with the VRF is ignored.
Procedure
Step 1 On the menu bar, choose Tenant > Tenant_name > Networking > Protocol Policies. In the Navigation
pane, expand Networking > Protocol Policies > BGP > BGP Timers.
Step 2 In the create BGP Timers Policy dialog box, perform the following actions:
a) In the Name field, enter the BGP Timers policy name.
b) In the available fields, choose the appropriate values as desired. Click Submit.
A BGP timer policy is created.
Step 3 Navigate to the External Routed Network, and create a Layer 3 Out with BGP enabled by performing the
following actions:
a) Right-click Create Routed Outside.
b) In the Create Routed Outside dialog box, specify the name of the Layer 3 Out.
c) Check the check box to enable BGP.
d) Expand Nodes and Interfaces Protocol Policies.
Step 4 To create a new node, in the Create Node Profile dialog box, perform the following actions:
a) In the Name field, enter a name for the node profile.
b) In the BGP Timers field, from the drop-down list, choose the BGP timer policy that you want to associate
with this specific node. Click Finish.
A specific BGP timer policy is now applied to the node.
Note To associate an existing node profile with a BGP timer policy, right-click the node profile, and
associate the timer policy.
If a timer policy is not chosen specifically in the BGP Timers field for the node, then the BGP
timer policy that is associated with the VRF under which the node profile resides automatically gets
applied to this node.
Step 5 To verify the configuration, in the Navigation pane, perform the following steps:
a) Expand Layer 3 Out > External Routed Network_name > Logical Node Profiles > Logical Node
Profile_name > BGP Protocol Profile.
b) In the Work pane, the BGP protocol profile that is associated with the node profile is displayed.
Configuring a Per VRF Per Node BGP Timer Using the REST API
The following example shows how to configure Per VRF Per node BGP timer in a node. Configure bgpProtP
under l3extLNodeP configuration. Under bgpProtP, configure a relation (bgpRsBgpNodeCtxPol) to the desired
BGP Context Policy (bgpCtxPol).
Procedure
Configure a node specific BGP timer policy on node1, and configure node2 with a BGP timer policy that is
not node specific.
Example:
POST https://apic-ip-address/mo.xml
In this example, node1 gets BGP timer values from policy pol2, and node2 gets BGP timer values from pol1.
The timer values are applied to the bgpDom corresponding to VRF tn1:ctx1. This is based upon the BGP
timer policy that is chosen following the algorithm described in the Per VRF Per Node BGP Timer Values
section.
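Because the POST body is not reproduced here, the following is only a minimal sketch of the configuration just described, in which pol1 is the VRF-level timer policy for ctx1 and pol2 is applied node-specifically to node1 through bgpProtP. The bgpCtxPol timer attributes (kaIntvl, holdIntvl) and the fvRsBgpCtxPol relation are assumptions:
<fvTenant name="tn1">
    <!-- Two BGP timer (context) policies; attribute names are assumptions -->
    <bgpCtxPol name="pol1" kaIntvl="60" holdIntvl="180"/>
    <bgpCtxPol name="pol2" kaIntvl="30" holdIntvl="90"/>
    <!-- pol1 is the VRF-level timer policy for ctx1 -->
    <fvCtx name="ctx1">
        <fvRsBgpCtxPol tnBgpCtxPolName="pol1"/>
    </fvCtx>
    <l3extOut name="out1">
        <l3extRsEctx tnFvCtxName="ctx1"/>
        <bgpExtP/>
        <!-- node1 gets the node-specific policy pol2 through bgpProtP -->
        <l3extLNodeP name="node1">
            <bgpProtP name="protp1">
                <bgpRsBgpNodeCtxPol tnBgpCtxPolName="pol2"/>
            </bgpProtP>
        </l3extLNodeP>
        <!-- node2 has no bgpProtP, so it falls back to pol1 from the VRF -->
        <l3extLNodeP name="node2"/>
    </l3extOut>
</fvTenant>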
Deleting a Per VRF Per Node BGP Timer Using the REST API
The following example shows how to delete an existing Per VRF Per node BGP timer in a node.
Procedure
The code phrase <bgpProtP name="protp1" status="deleted" > in the example above deletes the BGP
timer policy. After the deletion, node1 defaults to the BGP timer policy for the VRF with which node1 is
associated, which is pol1 in the above example.
Configuring a Per VRF Per Node BGP Timer Policy Using the NX-OS Style CLI
Procedure
Step 2 Create a timer policy. The specific values are provided as examples only.
Example:
apic1# config
apic1(config)# leaf 101
apic1(config-leaf)# template bgp timers pol7 tenant tn1
This template will be available on all nodes where tenant tn1 has a VRF deployment
apic1(config-bgp-timers)# timers bgp 120 240
apic1(config-bgp-timers)# graceful-restart stalepath-time 500
apic1(config-bgp-timers)# maxas-limit 300
apic1(config-bgp-timers)# exit
apic1(config-leaf)# exit
apic1(config)# exit
apic1#
associated with the BGP timer protocol pol1 and under out2, node1 is associated with a different BGP timer
protocol pol2. This will raise a fault.
tn1
  ctx1
  out1
    ctx1
    node1
      protp pol1
  out2
    ctx1
    node1
      protp pol2
If such a fault is raised, change the configuration to remove the conflict between the BGP timer policies.
Note ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections
to external routers, or multipod connections through an Inter-Pod Network (IPN), it is critical that the MTU
is set appropriately on both sides. On some platforms, such as ACI, Cisco NX-OS, and Cisco IOS, the
configurable MTU value takes into account the IP headers (resulting in a max packet size to be set as 9216
bytes for ACI and 9000 for NX-OS and IOS). However, other platforms such as IOS-XR configure the MTU
value exclusive of packet headers (resulting in a max packet size of 8986 bytes).
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Procedure
For each of these BFD configurations, you can choose to use the default policy or create a new one for a
specific switch (or set of switches).
Note By default, the APIC controller creates default policies when the system comes up. These default
policies are global, bi-directional forwarding detection (BFD) configuration polices. You can set
attributes within that default global policy in the Work pane, or you can modify these default policy
values. However, once you modify a default global policy, note that your changes affect the entire
system (all switches). If you want to use a specific configuration for a particular switch (or set of
switches) that is not the default, create a switch profile as described in the next step.
Step 3 To create a switch profile for a specific global BFD policy (which is not the default), in the Navigation pane,
expand the Switch Policies > Profiles > Leaf Profiles.
The Profiles - Leaf Profiles screen appears in the Work pane.
Step 4 On the right side of the Work pane, under ACTIONS, select Create Leaf Profile.
The Create Leaf Profile dialog box appears.
Step 5 In the Create Leaf Profile dialog box, perform the following actions:
a) In the Name field, enter a name for the leaf switch profile.
b) In the Description field, enter a description of the profile. (This step is optional.)
c) In the Switch Selectors field, enter the appropriate values for Name (name the switch), Blocks (select
the switch), and Policy Group (select Create Access Switch Policy Group).
The Create Access Switch Policy Group dialog box appears where you can specify the Policy Group
identity properties.
Step 6 In the Create Access Switch Policy Group dialog box, perform the following actions:
a) In the Name field, enter a name for the policy group.
b) In the Description field, enter a description of the policy group. (This step is optional.)
c) Choose a BFD policy type (BFD IPV4 Policy or BFD IPV6 Policy), then select a value (default or
Create BFD Global Ipv4 Policy for a specific switch or set of switches).
Step 7 Click SUBMIT.
Another way to create a BFD global policy is to right-click on either BFD IPV4 or BFD IPV6 in the Navigation
pane.
Step 8 To view the BFD global configuration you created, in the Navigation pane, expand the Switch Policies >
Policies > BFD.
Procedure
For each of these BFD configurations, you can choose to use the default policy or create a new one for a
specific switch (or set of switches).
Note By default, the APIC controller creates default policies when the system comes up. These default
policies are global, bi-directional forwarding detection (BFD) configuration policies. You can set
attributes within that default global policy in the Work pane, or you can modify these default policy
values. However, once you modify a default global policy, note that your changes affect the entire
system (all switches). If you want to use a specific configuration for a particular switch (or set of
switches) that is not the default, create a switch profile as described in the next step.
Step 3 To create a spine switch profile for a specific global BFD policy (which is not the default), in the Navigation
pane, expand the Switch Policies > Profiles > Spine Profiles.
The Profiles - Spine Profiles screen appears in the Work pane.
Step 4 On the right side of the Work pane, under ACTIONS, select Create Spine Profile.
The Create Spine Profile dialog box appears.
Step 5 In the Create Spine Profile dialog box, perform the following actions:
a) In the Name field, enter a name for the switch profile.
b) In the Description field, enter a description of the profile. (This step is optional.)
c) In the Spine Selectors field, enter the appropriate values for Name (name the switch), Blocks (select the
switch), and Policy Group (select Create Spine Switch Policy Group).
The Create Spine Switch Policy Group dialog box appears where you can specify the Policy Group
identity properties.
Step 6 In the Create Spine Switch Policy Group dialog box, perform the following actions:
Configuring BFD Globally on Leaf Switch Using the NX-OS Style CLI
Procedure
Step 1 To configure the BFD IPV4 global configuration (bfdIpv4InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ip bfd_ipv4_global_policy
apic1(config-bfd)# [no] echo-address 1.2.3.4
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
apic1(config-bfd)# [no] echo-rx-interval 500
apic1(config-bfd)# exit
Step 2 To configure the BFD IPV6 global configuration (bfdIpv6InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ipv6 bfd_ipv6_global_policy
apic1(config-bfd)# [no] echo-address 34::1/64
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
apic1(config-bfd)# [no] echo-rx-interval 500
apic1(config-bfd)# exit
Step 3 To configure access leaf policy group (infraAccNodePGrp) and inherit the previously created BFD global
policies using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template leaf-policy-group test_leaf_policy_group
apic1(config-leaf-policy-group)# [no] inherit bfd ip bfd_ipv4_global_policy
apic1(config-leaf-policy-group)# [no] inherit bfd ipv6 bfd_ipv6_global_policy
apic1(config-leaf-policy-group)# exit
Step 4 To associate the previously created leaf policy group onto a leaf using the NX-OS CLI:
Example:
Configuring BFD Globally on Spine Switch Using the NX-OS Style CLI
Use this procedure to configure BFD globally on spine switch using the NX-OS style CLI.
Procedure
Step 1 To configure the BFD IPV4 global configuration (bfdIpv4InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ip bfd_ipv4_global_policy
apic1(config-bfd)# [no] echo-address 1.2.3.4
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
apic1(config-bfd)# [no] echo-rx-interval 500
apic1(config-bfd)# exit
Step 2 To configure the BFD IPV6 global configuration (bfdIpv6InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ipv6 bfd_ipv6_global_policy
apic1(config-bfd)# [no] echo-address 34::1/64
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
apic1(config-bfd)# [no] echo-rx-interval 500
apic1(config-bfd)# exit
Step 3 To configure spine policy group and inherit the previously created BFD global policies using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template spine-policy-group test_spine_policy_group
apic1(config-spine-policy-group)# [no] inherit bfd ip bfd_ipv4_global_policy
apic1(config-spine-policy-group)# [no] inherit bfd ipv6 bfd_ipv6_global_policy
apic1(config-spine-policy-group)# exit
Step 4 To associate the previously created spine policy group onto a spine switch using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# spine-profile test_spine_profile
apic1(config-spine-profile)# spine-group test_spine_group
apic1(config-spine-group)# spine-policy-group test_spine_policy_group
apic1(config-spine-group)# spine 103-104
apic1(config-spine-group)# exit
Procedure
The following REST API shows the global configuration for bidirectional forwarding detection (BFD):
Example:
<polUni>
<infraInfra>
<bfdIpv4InstPol name="default" echoSrcAddr="1.2.3.4" slowIntvl="1000" minTxIntvl="150"
minRxIntvl="250" detectMult="5" echoRxIntvl="200"/>
<bfdIpv6InstPol name="default" echoSrcAddr="34::1/64" slowIntvl="1000" minTxIntvl="150"
minRxIntvl="250" detectMult="5" echoRxIntvl="200"/>
</infraInfra>
</polUni>
Procedure
Step 5 Under Nodes And Interfaces Protocol Profiles, at the bottom of the Create Routed Outside dialog box,
click the "+" (expand) button.
The Create Node Profile dialog box appears.
Step 6 Under Specify the Node Profile, enter the name of the node profile in the Name field.
Step 7 Click the "+" (expand) button located to the right of the Nodes field.
The Select Node dialog box appears.
Step 8 Under Select node and Configure Static Routes, select a node in the Node ID field.
Step 9 Enter the router ID in the Router ID field.
Step 10 Click OK.
The Create Node Profile dialog box appears.
Step 11 Click the "+" (expand) button located to the right of the Interface Profiles field.
The Create Interface Profile dialog box appears.
Step 12 Enter the name of the interface profile in the Name field.
Step 13 Select the desired interface type for the node you previously created by clicking one of the Interfaces tabs:
• Routed Interfaces
• SVI
• Routed Sub-Interfaces
Step 14 In the BFD Interface Profile field, enter BFD details. In the Authentication Type field, choose No
authentication or Keyed SHA1. If you choose to authenticate (by selecting Keyed SHA1), enter the
Authentication Key ID, enter the Authentication Key (password), then confirm the password by re-entering
it next to Confirm Key.
Step 15 For the BFD Interface Policy field, select either the common/default configuration (the default BFD policy),
or create your own BFD policy by selecting Create BFD Interface Policy.
If you select Create BFD Interface Policy, the Create BFD Interface Policy dialog box appears where you
can define the BFD interface policy values.
Step 16 Click SUBMIT.
Step 17 To see the configured interface level BFD policy, navigate to Networking > Protocol Policies > BFD.
Procedure
Step 1 To configure BFD Interface Policy (bfdIfPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# tenant t0
apic1(config-tenant)# vrf context v0
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant t0 vrf v0
apic1(config-leaf-vrf)# exit
Step 2 To inherit the previously created BFD interface policy onto a L3 interface with IPv4 address using the NX-OS
CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface Ethernet 1/15
apic1(config-leaf-if)# bfd ip tenant mode
apic1(config-leaf-if)# bfd ip inherit interface-policy bfdPol1
apic1(config-leaf-if)# bfd ip authentication keyed-sha1 key 10 key password
Step 3 To inherit the previously created BFD interface policy onto an L3 interface with IPv6 address using the NX-OS
CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface Ethernet 1/15
apic1(config-leaf-if)# ipv6 address 2001::10:1/64 preferred
apic1(config-leaf-if)# bfd ipv6 tenant mode
apic1(config-leaf-if)# bfd ipv6 inherit interface-policy bfdPol1
apic1(config-leaf-if)# bfd ipv6 authentication keyed-sha1 key 10 key password
Step 4 To configure BFD on a VLAN interface with IPv4 address using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface vlan 15
apic1(config-leaf-if)# vrf member tenant t0 vrf v0
apic1(config-leaf-if)# bfd ip tenant mode
apic1(config-leaf-if)# bfd ip inherit interface-policy bfdPol1
apic1(config-leaf-if)# bfd ip authentication keyed-sha1 key 10 key password
Step 5 To configure BFD on a VLAN interface with IPv6 address using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface vlan 15
apic1(config-leaf-if)# ipv6 address 2001::10:1/64 preferred
apic1(config-leaf-if)# vrf member tenant t0 vrf v0
apic1(config-leaf-if)# bfd ipv6 tenant mode
Procedure
The following REST API shows the interface override configuration for bidirectional forwarding detection
(BFD):
Example:
<fvTenant name="ExampleCorp">
<bfdIfPol name="bfdIfPol" minTxIntvl="400" minRxIntvl="400" detectMult="5" echoRxIntvl="400"
echoAdminSt="disabled"/>
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2"/>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/11]"
ifInstT='l3-port' addr="10.0.0.1/24" mtu="1500"/>
<bfdIfP type="sha1" key="password">
<bfdRsIfPol tnBfdIfPolName='bfdIfPol'/>
</bfdIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Note These four consumer protocols are located in the left navigation pane under Tenant > Networking > Protocol
Policies.
Procedure
Step 4 Enter a name in the Name field and provide values in the remaining fields to define the BGP peer prefix
policy.
Step 5 Click SUBMIT.
The BGP peer prefix policy you created now appears under BGP Peer Prefix in the left navigation pane.
Step 6 In the Navigation pane, go back to Networking > External Routed Networks.
Step 7 Right-click on External Routed Networks and select Create Routed Outside.
The Create Routed Outside dialog box appears.
Step 8 In the Create Routed Outside dialog box, enter the name in the Name field. Then, to the right side of the
Name field, select the BGP protocol.
Step 9 In the Nodes and Interfaces Protocol Profiles section, click the "+" (expand) button.
The Create Node Profile dialog box appears.
Step 10 In the BGP Peer Connectivity section, click the "+" (expand) button.
The Create BGP Peer Connectivity Profile dialog box appears.
Step 11 In the Create BGP Peer Connectivity Profile dialog box, next to Peer Controls, select Bidirectional
Forwarding Detection to enable BFD on the BGP consumer protocol, shown as follows (or uncheck the box
to disable BFD).
Step 12 To configure BFD in the OSPF protocol, in the Navigation pane, go to Networking > Protocol Policies >
OSPF > OSPF Interface.
Step 13 On the right side of the Work pane, under ACTIONS, select Create OSPF Interface Policy.
The Create OSPF Interface Policy dialog box appears.
Note You can also right-click on OSPF Interface from the left navigation pane and select Create OSPF
Interface Policy to create the policy.
Step 14 Enter a name in the Name field and provide values in the remaining fields to define the OSPF interface policy.
Step 15 In the Interface Controls section of this dialog box, you can enable or disable BFD. To enable it, check the
box next to BFD, which adds a flag to the OSPF consumer protocol, shown as follows (or uncheck the box
to disable BFD).
Step 16 Click SUBMIT.
Step 17 To configure BFD in the EIGRP protocol, in the Navigation pane, go back to Networking > Protocol
Policies > EIGRP > EIGRP Interface.
Step 18 On the right side of the Work pane, under ACTIONS, select Create EIGRP Interface Policy.
The Create EIGRP Interface Policy dialog box appears.
Note You can also right-click on EIGRP Interface from the left navigation pane and select Create
EIGRP Interface Policy to create the policy.
Step 19 Enter a name in the Name field and provide values in the remaining fields to define the EIGRP interface policy.
Step 20 In the Control State section of this dialog box, you can enable or disable BFD. To enable it, check the box
next to BFD, which adds a flag to the EIGRP consumer protocol (or uncheck the box to disable BFD).
Step 21 Click SUBMIT.
Step 22 To configure BFD in the Static Routes protocol, in the Navigation pane, go back to Networking > External
Routed Networks.
Step 23 Right-click on External Routed Networks and select Create Routed Outside.
The Create Routed Outside dialog box appears.
Step 24 In the Define the Routed Outside section, enter values for all the required fields.
Step 25 In the Nodes and Interfaces Protocol Profiles section, click the "+" (expand) button.
The Create Node Profile dialog box appears.
Step 26 In the section Nodes, click the "+" (expand) button.
The Select Node dialog box appears.
Step 27 In the Static Routes section, click the "+" (expand) button.
The Create Static Route dialog box appears. Enter values for the required fields in this section.
Step 28 Next to Route Control, check the box next to BFD to enable (or uncheck the box to disable) BFD on the
specified Static Route.
Step 29 Click OK.
Step 30 To configure BFD in the IS-IS protocol, in the Navigation pane go to Fabric > Fabric Policies > Interface
Policies > Policies > L3 Interface.
Step 31 On the right side of the Work pane, under ACTIONS, select Create L3 Interface Policy.
The Create L3 Interface Policy dialog box appears.
Note You can also right-click on L3 Interface from the left navigation pane and select Create L3
Interface Policy to create the policy.
Step 32 Enter a name in the Name field and provide values in the remaining fields to define the L3 interface policy.
Step 33 To enable BFD ISIS Policy, click Enable.
Step 34 Click SUBMIT.
Procedure
Step 1 To enable BFD on the BGP consumer protocol using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# bgp-fabric
apic1(config-bgp-fabric)# asn 200
apic1(config-bgp-fabric)# exit
apic1(config)# leaf 101
apic1(config-leaf)# router bgp 200
apic1(config-bgp)# vrf member tenant t0 vrf v0
apic1(config-leaf-bgp-vrf)# neighbor 1.2.3.4
apic1(config-leaf-bgp-vrf-neighbor)# [no] bfd enable
Step 2 To enable BFD on the EIGRP consumer protocol using the NX-OS CLI:
Example:
Step 3 To enable BFD on the OSPF consumer protocol using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# spine 103
apic1(config-spine)# interface ethernet 5/3.4
apic1(config-spine-if)# [no] ip ospf bfd enable
Step 4 To enable BFD on the Static Route consumer protocol using the NX-OS CLI:
Example:
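A minimal sketch, assuming an illustrative static route 10.0.0.0/16 via next hop 10.0.0.5 in tenant t0, VRF v0; the trailing bfd keyword on the static route is an assumption:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant t0 vrf v0
# the trailing "bfd" keyword (assumed) enables BFD tracking for this static route
apic1(config-leaf-vrf)# [no] ip route 10.0.0.0/16 10.0.0.5 bfd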
Step 5 To enable BFD on IS-IS consumer protocol using the NX-OS CLI:
Example:
Procedure
Step 1 The following example shows the interface configuration for bidirectional forwarding detection (BFD):
Example:
<fvTenant name="ExampleCorp">
<bfdIfPol name="bfdIfPol" minTxIntvl="400" minRxIntvl="400" detectMult="5" echoRxIntvl="400"
echoAdminSt="disabled"/>
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2"/>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/11]"
ifInstT='l3-port' addr="10.0.0.1/24" mtu="1500"/>
<bfdIfP type="sha1" key="password">
<bfdRsIfPol tnBfdIfPolName='bfdIfPol'/>
</bfdIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Step 2 The following example shows the interface configuration for enabling BFD on OSPF and EIGRP:
Example:
BFD on leaf switch
<fvTenant name="ExampleCorp">
<ospfIfPol name="ospf_intf_pol" cost="10" ctrl="bfd"/>
<eigrpIfPol ctrl="nh-self,split-horizon,bfd"
dn="uni/tn-Coke/eigrpIfPol-eigrp_if_default"/>
</fvTenant>
Example:
BFD on spine switch
<l3extLNodeP name="bSpine">
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-103/pathep-[eth5/10]"
encap='vlan-4' ifInstT='sub-interface' addr="20.3.10.1/24"/>
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospf_intf_pol'/>
</ospfIfP>
<bfdIfP name="test" type="sha1" key="hello" status="created,modified">
<bfdRsIfPol tnBfdIfPolName='default' status="created,modified"/>
</bfdIfP>
</l3extLIfP>
</l3extLNodeP>
Step 3 The following example shows the interface configuration for enabling BFD on BGP:
Example:
<fvTenant name="ExampleCorp">
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2"/>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/11]"
ifInstT='l3-port' addr="10.0.0.1/24" mtu="1500">
<bgpPeerP addr="4.4.4.4/24" allowedSelfAsCnt="3" ctrl="bfd" descr=""
name="" peerCtrl="" ttl="1">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="3" descr="" name=""/>
</bgpPeerP>
</l3extRsPathL3OutAtt>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Step 4 The following example shows the interface configuration for enabling BFD on Static Routes:
Example:
BFD on leaf switch
<fvTenant name="ExampleCorp">
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2">
<ipRouteP ip="192.168.3.4" rtCtrl="bfd">
<ipNexthopP nhAddr="192.168.62.2"/>
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/3]"
ifInstT='l3-port' addr="10.10.10.2/24" mtu="1500" status="created,modified" />
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Example:
BFD on spine switch
<l3extLNodeP name="bSpine">
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-103/pathep-[eth5/10]"
encap='vlan-4' ifInstT='sub-interface' addr="20.3.10.1/24"/>
</l3extLIfP>
</l3extLNodeP>
Step 5 The following example shows the interface configuration for enabling BFD on IS-IS:
Example:
<fabricInst>
<l3IfPol name="testL3IfPol" bfdIsis="enabled"/>
<fabricLeafP name="LeNode" >
<fabricRsLePortP tDn="uni/fabric/leportp-leaf_profile" />
<fabricLeafS name="spsw" type="range">
<fabricNodeBlk name="node101" to_="102" from_="101" />
</fabricLeafS>
</fabricLeafP>
<fabricLePortP name="leaf_profile">
<fabricLFPortS name="leafIf" type="range">
<fabricPortBlk name="spBlk" fromCard="1" fromPort="49" toCard="1" toPort="49" />
<fabricRsLePortPGrp tDn="uni/fabric/funcprof/leportgrp-LeTestPGrp" />
</fabricLFPortS>
</fabricLePortP>
<fabricSpPortP name="spine_profile">
<fabricSFPortS name="spineIf" type="range">
<fabricPortBlk name="spBlk" fromCard="5" fromPort="1" toCard="5" toPort="2" />
<fabricRsSpPortPGrp tDn="uni/fabric/funcprof/spportgrp-SpTestPGrp" />
</fabricSFPortS>
</fabricSpPortP>
<fabricFuncP>
<fabricLePortPGrp name = "LeTestPGrp">
<fabricRsL3IfPol tnL3IfPolName="testL3IfPol"/>
</fabricLePortPGrp>
</fabricFuncP>
</fabricInst>
IPv4 and IPv6 protocols are supported on the same interface (dual stack) but it is necessary to create two
separate interface profiles.
Layer 3 Outside connections are supported for routed interfaces, routed sub-interfaces, and SVIs. The
SVIs are used when there is a need to share the physical connection for both Layer 2 and Layer 3 traffic. The
SVIs are supported on ports, port-channels, and VPC port-channels.
Figure 15: OSPF Layer3 Out Connections
When an SVI is used for a Layer 3 Outside connection, an external bridge domain is created on the border
leaf switches. The external bridge domain allows connectivity between the two VPC switches across the ACI
fabric. This allows both VPC switches to establish OSPF adjacencies with each other and with the external
OSPF device.
When running OSPF over a broadcast network, the time to detect a failed neighbor is the dead time interval
(default 40 seconds). Reestablishing the neighbor adjacencies after a failure may also take longer due to
designated router (DR) election.
Note A link or port-channel failure to one VPC Node does not cause an OSPF adjacency to go down. The OSPF
adjacency can stay up via the external BD accessible through the other VPC node.
Creating an OSPF External Routed Network for Management Tenant Using the
GUI
• You must verify that the router ID and the logical interface profile IP address are different and do not
overlap.
• The following steps are for creating an OSPF external routed network for a management tenant. To create
an OSPF external routed network for a tenant, you must choose a tenant and create a VRF for the tenant.
• For more details, see Cisco APIC and Transit Routing.
Procedure
h) From the External Routed Domain drop-down list, choose the appropriate domain.
i) Click the + icon for Nodes and Interfaces Protocol Profiles area.
Step 5 In the Create Node Profile dialog box, perform the following actions:
a) In the Name field, enter a name for the node profile. (borderLeaf).
b) In the Nodes field, click the + icon to display the Select Node dialog box.
c) In the Node ID field, from the drop-down list, choose the first node. (leaf1).
d) In the Router ID field, enter a unique router ID.
e) Uncheck the Use Router ID as Loopback Address field.
Note By default, the router ID is used as a loopback address. If you want them to be different,
uncheck the Use Router ID as Loopback Address check box.
f) Expand Loopback Addresses, and enter the IP address in the IP field. Click Update, and click OK.
Enter the desired IPv4 or IPv6 IP address.
g) In the Nodes field, expand the + icon to display the Select Node dialog box.
Note You are adding a second node ID.
h) In the Node ID field, from the drop-down list, choose the next node. (leaf2).
k) Expand Loopback Addresses, and enter the IP address in the IP field. Click Update, and click OK.
Click OK.
Enter the desired IPv4 or IPv6 IP address.
Step 6 In the Create Node Profile dialog box, in the OSPF Interface Profiles area, click the + icon.
Step 7 In the Create Interface Profile dialog box, perform the following tasks:
a) In the Name field, enter the name of the profile (portProf).
b) In the Interfaces area, click the Routed Interfaces tab, and click the + icon.
c) In the Select Routed Interfaces dialog box, in the Path field, from the drop-down list, choose the first
port (leaf1, port 1/40).
d) In the IP Address field, enter an IP address and mask. Click OK.
e) In the Interfaces area, click the Routed Interfaces tab, and click the + icon.
f) In the Select Routed Interfaces dialog box, in the Path field, from the drop-down list, choose the second
port (leaf2, port 1/40).
g) In the IP Address field, enter an IP address and mask. Click OK.
Note This IP address should be different from the IP address you entered for leaf1 earlier.
Creating an OSPF External Routed Network for a Tenant Using the NX-OS CLI
Configuring external routed network connectivity involves the following steps:
1. Create a VRF under Tenant.
2. Configure L3 networking configuration for the VRF on the border leaf switches, which are connected to
the external routed network. This configuration includes interfaces, routing protocols (BGP, OSPF,
EIGRP), protocol parameters, route-maps.
3. Configure policies by creating external-L3 EPGs under tenant and deploy these EPGs on the border leaf
switches. External routed subnets on a VRF which share the same policy within the ACI fabric form one
"External L3 EPG" or one "prefix EPG".
The following steps are for creating an OSPF external routed network for a tenant. To create an OSPF external
routed network for a tenant, you must choose a tenant and then create a VRF for the tenant.
Note The examples in this section show how to provide external routed connectivity to the "web" epg in the
"OnlineStore" application for tenant "exampleCorp".
Procedure
Step 2 Configure the tenant VRF and enable policy enforcement on the VRF.
Example:
apic1(config)# tenant exampleCorp
apic1(config-tenant)# vrf context exampleCorp_v1
apic1(config-tenant-vrf)# contract enforce
apic1(config-tenant-vrf)# exit
Step 3 Configure the tenant BD and mark the gateway IP as “public”. The entry "scope public" makes this gateway
address available for advertisement through the routing protocol for external-L3 network.
Example:
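A minimal sketch, assuming an illustrative bridge domain exampleCorp_b1 with gateway 172.1.1.1/24:
apic1(config)# tenant exampleCorp
apic1(config-tenant)# bridge-domain exampleCorp_b1
apic1(config-tenant-bd)# vrf member exampleCorp_v1
apic1(config-tenant-bd)# exit
apic1(config-tenant)# interface bridge-domain exampleCorp_b1
# "scope public" makes this gateway subnet available for advertisement through the L3Out routing protocol
apic1(config-tenant-interface)# ip address 172.1.1.1/24 scope public
apic1(config-tenant-interface)# exit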
Step 5 Configure the OSPF area and add the route map.
Example:
Step 6 Assign the VRF to the interface (sub-interface in this example) and enable the OSPF area.
Example:
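A minimal sketch, assuming sub-interface ethernet 1/11.500 on leaf 101 with an illustrative address of 157.10.1.1/24 and OSPF area 0.0.0.1; the node, address, and area ID are assumptions:
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/11.500
apic1(config-leaf-if)# vrf member tenant exampleCorp vrf exampleCorp_v1
apic1(config-leaf-if)# ip address 157.10.1.1/24
# the OSPF area value is illustrative
apic1(config-leaf-if)# ip router ospf default area 0.0.0.1
apic1(config-leaf-if)# exit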
Note For the sub-interface configuration, the main interface (ethernet 1/11 in this example) must be
converted to an L3 port through “no switchport” and assigned a vlan-domain (dom_exampleCorp
in this example) that contains the encapsulation VLAN used by the sub-interface. In the sub-interface
ethernet1/11.500, 500 is the encapsulation VLAN.
Step 7 Configure the external-L3 EPG policy. This includes the subnet to match for identifying the external subnet
and consuming the contract to connect with the epg "web".
Example:
Creating OSPF External Routed Network for Management Tenant Using REST
API
• You must verify that the router ID and the logical interface profile IP address are different and do not
overlap.
• The following steps are for creating an OSPF external routed network for a management tenant. To create
an OSPF external routed network for a tenant, you must choose a tenant and create a VRF for the tenant.
• For more details, see Cisco APIC and Transit Routing.
Procedure
<fvTenant name="mgmt">
<fvBD name="bd1">
<fvRsBDToOut tnL3extOutName="RtdOut" />
<fvSubnet ip="1.1.1.1/16" />
<fvSubnet ip="1.2.1.1/16" />
<fvSubnet ip="40.1.1.1/24" scope="public" />
<fvRsCtx tnFvCtxName="inb" />
</fvBD>
<fvCtx name="inb" />
<l3extOut name="RtdOut">
<l3extRsL3DomAtt tDn="uni/l3dom-extdom"/>
<l3extInstP name="extMgmt">
</l3extInstP>
<l3extLNodeP name="borderLeaf">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="10.10.10.10"/>
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-102" rtrId="10.10.10.11"/>
<l3extLIfP name='portProfile'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]"
ifInstT='l3-port' addr="192.168.62.1/24"/>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-102/pathep-[eth1/40]"
ifInstT='l3-port' addr="192.168.62.5/24"/>
<ospfIfP/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsEctx tnFvCtxName="inb"/>
<ospfExtP areaId="57" />
</l3extOut>
</fvTenant>
You can configure EIGRP to perform automatic summarization of subnet routes (route summarization) into
network-level routes. For example, you can configure subnet 131.108.1.0 to be advertised as 131.108.0.0 over
interfaces that have subnets of 192.31.7.0 configured. Automatic summarization is performed when there are
two or more network router configuration commands configured for the EIGRP process. By default, this
feature is enabled. For more information, see Route Summarization.
Supported Features
The following features are supported:
• IPv4 and IPv6 routing
• Virtual routing and forwarding (VRF) and interface controls for each address family
• Redistribution with OSPF across nodes
• Default route leak policy per VRF
• Passive interface and split horizon support
• Route map control for setting tag for exported routes
• Bandwidth and delay configuration options in an EIGRP interface policy
Unsupported Features
The following features are not supported:
• Stub routing
• EIGRP used for BGP connectivity
• Multiple EIGRP L3extOuts on the same node
• Authentication support
• Summary prefix
• EIGRP Address Family Context Policy (eigrpCtxAfPol)—contains the configuration for a given
address family in a given VRF. An eigrpCtxAfPol is configured under tenant protocol policies and can
be applied to one or more VRFs under the tenant. An eigrpCtxAfPol can be enabled on a VRF through
a relation in the VRF-per-address family. If there is no relation to a given address family, or the specified
eigrpCtxAfPol in the relation does not exist, then the default VRF policy created under the common tenant
is used for that address family.
The following configurations are allowed in the eigrpCtxAfPol (a sample policy follows this list):
• Administrative distance for internal route
• Administrative distance for external route
• Maximum ECMP paths allowed
• Active timer interval
• Metric version (32-bit / 64-bit metrics)
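A minimal sketch of an eigrpCtxAfPol with illustrative values; the attribute names shown (actIntvl, extDist, intDist, maxPaths) are assumptions and should be verified against the APIC object model:
<!-- illustrative EIGRP address family context policy; attribute names and values are assumed -->
<eigrpCtxAfPol name="eigrp_ctx_pol_v4" actIntvl="3" extDist="170" intDist="90" maxPaths="8"/>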
Step 6 To apply the context policy on a VRF, in the Navigation pane, expand Networking > VRFs.
Step 7 Choose the appropriate VRF, and in the Work pane under Properties, expand EIGRP Context Per Address
Family.
Step 8 In the EIGRP Address Family Type drop-down list, choose an IP version.
Step 9 In the EIGRP Address Family Context drop-down list, choose the context policy. Click Update, and Click
Submit.
Step 10 To enable EIGRP within the Layer 3 Out, in the Navigation pane, click Networking > External Routed
Networks, and click the desired Layer 3 outside network.
Step 11 In the Work pane under Properties, check the checkbox for EIGRP, and enter the EIGRP Autonomous
System number. Click Submit.
Step 12 To create an EIGRP interface policy, in the Navigation pane, click Networking > Protocol Policies > EIGRP
Interface and perform the following actions:
a) Right-click EIGRP Interface, and click Create EIGRP Interface Policy.
b) In the Create EIGRP Interface Policy dialog box, in the Name field, enter a name for the policy.
c) In the Control State field, check the desired checkboxes to enable one or multiple controls.
d) In the Hello Interval (sec) field, choose the desired interval.
e) In the Hold Interval (sec) field, choose the desired interval. Click Submit.
f) In the Bandwidth field, choose the desired bandwidth.
g) In the Delay field, choose the desired delay in tens of microseconds or picoseconds.
In the Work pane, the details for the EIGRP interface policy are displayed.
Step 13 In the Navigation pane, click the appropriate external routed network where EIGRP was enabled, expand
Logical Node Profiles and perform the following actions:
a) Expand an appropriate node and an interface under that node.
b) Right-click the interface and click Create EIGRP Interface Profile.
c) In the Create EIGRP Interface Profile dialog box, in the EIGRP Policy field, choose the desired EIGRP
interface policy. Click Submit.
Note The EIGRP VRF policy and EIGRP interface policies define the properties used when EIGRP is
enabled. EIGRP VRF policy and EIGRP interface policies are also available as default policies if
the user does not want to create new policies. So, if a user does not explicitly choose either one of
the policies, the default policy is automatically utilized when EIGRP is enabled.
Example:
apic1# configure
Step 8 Configure the EIGRP VLAN interface and enable EIGRP in the interface:
Example:
apic1(config-leaf)# interface vlan 1013
apic1(config-leaf-if)# show run
# Command: show running-config leaf 101 interface vlan 1013
# Time: Tue Feb 16 09:46:59 2016
leaf 101
interface vlan 1013
vrf member tenant tenant1 vrf l3out
ip address 101.13.1.2/24
ip router eigrp default
ipv6 address 101:13::1:2/112 preferred
ipv6 router eigrp default
ipv6 link-local fe80::101:13:1:2
exit
router eigrp default
exit
router eigrp default
vrf member tenant tenant1 vrf l3out
autonomous-system 1001 l3out l3out-L1
address-family ipv6 unicast
inherit eigrp vrf-policy tenant1
exit
address-family ipv4 unicast
inherit eigrp vrf-policy tenant1
exit
exit
exit
IPv6:
<polUni>
<fvTenant name="cisco_6">
<fvCtx name="dev">
<fvRsCtxToEigrpCtxAfPol tnEigrpCtxAfPolName="eigrp_ctx_pol_v6" af="ipv6-ucast"/>
</fvCtx>
</fvTenant>
</polUni>
IPv6
<polUni>
<fvTenant name="cisco_6">
<l3extOut name="ext">
<eigrpExtP asn="4001"/>
<l3extLNodeP name="node1">
<l3extLIfP name="intf_v6">
<l3extRsPathL3OutAtt addr="2001::1/64" ifInstT="l3-port"
tDn="topology/pod-1/paths-101/pathep-[eth1/4]"/>
<eigrpIfP name="eigrp_ifp_v6">
<eigrpRsIfPol tnEigrpIfPolName="eigrp_if_pol_v6"/>
</eigrpIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
</polUni>
<l3extLIfP name="intf_v6">
The bandwidth (bw) attribute is defined in Kbps. The delayUnit attribute can be "tens of micro" or "pico".
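For illustration, a minimal eigrpIfPol sketch assuming a bandwidth of 1000000 Kbps and a delay of 10 expressed in tens of microseconds; the delayUnit literal is an assumption:
<!-- illustrative EIGRP interface policy; the delayUnit value is assumed -->
<eigrpIfPol name="eigrp_if_pol_bw" bw="1000000" delay="10" delayUnit="tens-of-micro"/>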
Route Summarization
Route summarization simplifies route tables by replacing many specific addresses with a single address. For
example, 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24 can be replaced with 10.1.0.0/16. Route summarization
policies enable routes to be shared efficiently among border leaf switches and their neighboring leaf switches.
BGP, OSPF, or EIGRP route summarization policies are applied to a bridge domain or transit subnet. For
OSPF, inter-area and external route summarization are supported. Summary routes are exported; they are not
advertised within the fabric.
Step 1 Configure BGP route summarization using the REST API as follows:
Example:
<fvTenant name="common">
<fvCtx name="vrf1"/>
<bgpRtSummPol name="bgp_rt_summ" cntrl="as-set"/>
<l3extOut name="l3_ext_pol" >
<l3extLNodeP name="bLeaf">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="20.10.1.1"/>
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/31]"
ifInstT='l3-port' addr="10.20.1.3/24"/>
</l3extLIfP>
</l3extLNodeP>
<bgpExtP />
<l3extInstP name="InstP" >
<l3extSubnet ip="10.0.0.0/8" scope="export-rtctrl">
<l3extRsSubnetToRtSumm tDn="uni/tn-common/bgprtsum-bgp_rt_summ"/>
<l3extRsSubnetToProfile tnRtctrlProfileName="rtprof"/>
</l3extSubnet>
</l3extInstP>
<l3extRsEctx tnFvCtxName="vrf1"/>
</l3extOut>
</fvTenant>
Step 2 Configure OSPF inter-area and external summarization using the following REST API:
Example:
<l3extLIfP name="intf-1">
<l3extRsPathL3OutAtt addr="20.1.5.2/24" encap="vlan-1001" ifInstT="sub-interface"
tDn="topology/pod-1/paths-101/pathep-[eth1/33]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="l3InstP1">
<fvRsProv tnVzBrCPName="default"/>
<!--Ospf External Area route summarization-->
<l3extSubnet aggregate="" ip="193.0.0.0/8" name="" scope="export-rtctrl">
<l3extRsSubnetToRtSumm tDn="uni/tn-t20/ospfrtsumm-ospfext"/>
</l3extSubnet>
</l3extInstP>
<ospfExtP areaCost="1" areaCtrl="redistribute,summary" areaId="backbone"
areaType="regular"/>
</l3extOut>
<!-- L3OUT Regular Area-->
<l3extOut enforceRtctrl="export" name="l3_2">
<l3extRsEctx tnFvCtxName="ctx0"/>
<l3extLNodeP name="node-101">
<l3extRsNodeL3OutAtt rtrId="20.1.3.2" rtrIdLoopBack="no" tDn="topology/pod-1/node-101"/>
<l3extLIfP name="intf-2">
<l3extRsPathL3OutAtt addr="20.1.2.2/24" encap="vlan-1014" ifInstT="sub-interface"
tDn="topology/pod-1/paths-101/pathep-[eth1/11]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP matchT="AtleastOne" name="l3InstP2">
<fvRsCons tnVzBrCPName="default"/>
<!--Ospf Inter Area route summarization-->
<l3extSubnet aggregate="" ip="197.0.0.0/8" name="" scope="export-rtctrl">
<l3extRsSubnetToRtSumm tDn="uni/tn-t20/ospfrtsumm-interArea"/>
</l3extSubnet>
</l3extInstP>
<ospfExtP areaCost="1" areaCtrl="redistribute,summary" areaId="0.0.0.57"
areaType="regular"/>
</l3extOut>
</fvTenant>
<fvTenant name="exampleCorp">
<l3extOut name="out1">
<l3extInstP name="eigrpSummInstp" >
<l3extSubnet aggregate="" descr="" ip="197.0.0.0/8" name="" scope="export-rtctrl">
<l3extRsSubnetToRtSumm/>
</l3extSubnet>
</l3extInstP>
</l3extOut>
<eigrpRtSummPol name="pol1" />
</fvTenant>
Note There is no route summarization policy to be configured for EIGRP. The only configuration needed
for enabling EIGRP summarization is the summary subnet under the InstP.
Step 1 Configure BGP route summarization using the NX-OS CLI as follows:
a) Enable BGP as follows:
Example:
apic1(config)# pod 1
apic1(config-pod)# bgp fabric
apic1(config-pod-bgp)# asn 10
apic1(config-pod-bgp)# exit
apic1(config-pod)# exit
apic1(config)# leaf 101
apic1(config-leaf)# router bgp 10
Step 2 Configure OSPF external summarization using the NX-OS CLI as follows:
Example:
Step 3 Configure OSPF inter-area summarization using the NX-OS CLI as follows:
Note There is no route summarization policy to be configured for EIGRP. The only configuration needed
for enabling EIGRP summarization is the summary subnet under the InstP.
Procedure
Example:
• Enter an IP address in the IP Address field.
• Check the check box next to Export Route Control Subnet.
• Check the check box next to External Subnets for the External EPG.
• From the BGP Route Summarization Policy drop-down menu, select either default for an existing
(default) policy or Create BGP route summarization policy to create a new policy.
• If you selected Create BGP route summarization policy, the Create BGP Route Summarization
Policy dialog box appears. Enter a name for it in the Name field, check the Control State check box for
Generate AS-SET information, click SUBMIT, click OK, click OK, click FINISH.
Step 2 Configure OSPF inter-area and external summarization using GUI as follows:
a) On the menu bar, choose Tenants > common
b) In the Navigation pane, expand Networking > External Routed Networks > Networks
c) In the work pane, click the + sign above Route Summarization Policy.
The Create Subnet dialog box appears.
d) In the Specify the Subnet dialog box, you can associate a route summarization policy to the subnet as
follows:
Example:
• Enter an IP address in the IP Address field.
• Check the check box next to Export Route Control Subnet.
• Check the check box next to External Subnets for the External EPG.
• From the OSPF Route Summarization Policy drop-down menu, choose either default for an existing
(default) policy or Create OSPF route summarization policy to create a new policy.
• If you chose Create OSPF route summarization policy, the Create OSPF Route Summarization
Policy dialog box appears. Enter a name for it in the Name field, check the check box next to Inter-Area
Enabled, enter a value next to Cost, click SUBMIT.
Note When explicit prefix list is used, the type of the route profile should be set to "match routing policy only".
After the match and set profiles are defined, the route map must be created in the Layer 3 Out. Route maps
can be created using one of the following methods:
• Create a "default-export" route map for export route control, and a "default-import" route map for import
route control.
• Create other route maps (not named default-export or default-import) and setup the relation from one or
more l3extInstPs or subnets under the l3extInstP.
• In either case, match the route map on explicit prefix list by pointing to the rtctrlSubjP within the route
map.
In the export and import route map, the set and match rules are grouped together along with the relative
sequence across the groups (rtctrlCtxP). Additionally, under each group of match and set statements
(rtctrlCtxP), a relation to one or more match profiles (rtctrlSubjP) is available.
Any protocol enabled on Layer 3 Out (for example BGP protocol), will use the export and import route map
for route filtering.
Note The subnet in the BD must be marked public for the subnet to be advertised out.
• Specifying a subnet in the l3extInstP with export/import route control for advertising transit and external
networks.
Explicit prefix list is defined through a new match type that is called match route destination
(rtctrlMatchRtDest). An example usage is provided in the API example that follows.
Additional information about match rules, set rules when using explicit prefix list are as follows:
Match Rules
• Under the tenant (fvTenant), you can create match profiles (rtctrlSubjP) for route map filtering. Each
match profile can contain one or more match rules. A match rule supports multiple match types. Prior to
Cisco APIC release 2.1(x), the supported match type was community list.
Starting with Cisco APIC release 2.1(x), explicit prefix match, or match route destination
(rtctrlMatchRtDest), is also supported.
Match prefix list (rtctrlMatchRtDest) supports one or more subnets with an optional aggregate flag.
Aggregate flags are used to allow prefix matches with multiple masks, starting with the mask specified
in the configuration up to the maximum mask allowed for the address family of the prefix. This is the
equivalent of the "le" option in the prefix-list in NX-OS software (for example, 10.0.0.0/8 le 32). A sample
match profile is shown after this list.
The prefix list can be used for covering the following cases:
• Allow all ( 0.0.0.0/0 with aggregate flag, equivalent of 0.0.0.0/0 le 32 )
• One or more of specific prefixes (example: 10.1.1.0/24)
• One or more of prefixes with aggregate flag (example, equivalent of 10.1.1.0/24 le 32).
Note When Allow all (0.0.0.0/0 with aggregate flag) is used in explicit prefix-list for
export route-control, only the routes learned from a routing protocol (such as
BGP, OSPF, or EIGRP) will be advertised. The 0/0 aggregate flag will not
advertise prefixes corresponding to the following:
• Bridge domain (BD) subnets
• Directly Connected Interfaces on the border leaf switch
• Static routes defined on the L3Out
• The explicit prefix match rules can contain one or more subnets, and these subnets can be bridge domain
public subnets or external networks. Subnets can also be aggregated up to the maximum subnet mask
(/32 for IPv4 and /128 for IPv6).
• When multiple match rules of different types are present (such as match community and explicit prefix
match), the match rule is allowed only when the match statements of all individual match types match.
This is the equivalent of the AND filter. The explicit prefix match is contained by the subject profile
(rtctrlSubjP) and will form a logical AND if other match rules are present under the subject profile.
• Within a given match type (such as match prefix list), at least one of the match rules statement must
match. Multiple explicit prefix match (rtctrlMatchRtDest) can be defined under the same subject profile
(rtctrlSubjP) which will form a logical OR.
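A minimal sketch of a match profile under a tenant that uses an explicit prefix match with the aggregate flag; the tenant and profile names are illustrative:
<fvTenant name="ExampleCorp">
<!-- explicit prefix match; aggregate="yes" is the equivalent of 0.0.0.0/0 le 32 -->
<rtctrlSubjP name="allow-all">
<rtctrlMatchRtDest ip="0.0.0.0/0" aggregate="yes"/>
</rtctrlSubjP>
</fvTenant>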
Set Rules
• Set policies must be created to define set rules that are carried with the explicit prefixes such as set
community, set tag.
• Starting with release 2.3(x), the deny-static implicit entry has been removed from the export route map. The user
needs to explicitly configure the permit and deny entries required to control the export of static routes.
• Route-map per peer in an L3Out is not supported. Route-map can only be applied on L3Out as a whole.
Following are possible workarounds to this issue:
• Block the prefix from being advertised from the other side of the neighbor.
• Block the prefix on the route-map on the existing L3Out where you don't want to learn the prefix,
and move the neighbor to another L3Out where you want to learn the prefix and create a separate
route-map.
• Creating route-maps using a mixture of GUI and API commands is not supported. As a possible
workaround, you can create a route-map different from the default route-map using the GUI, but the
route-map created through the GUI on an L3Out cannot be applied on a per-peer basis.
Configuring a Route Map/Profile with Explicit Prefix List Using the GUI
Before you begin
• Tenant and VRF must be configured.
• The VRF must be enabled on the leaf switch.
Procedure
Step 1 On the menu bar, click Tenant, and in the Navigation pane, expand Tenant_name > Networking > External
Routed Networks > Match Rules for Route Maps.
Step 2 Right click Match Rules for Route Maps, and click Create Match Rule for a Route Map.
Step 3 In the Create Match Rule for a Route Map dialog box, enter a name for the rule and choose the desired
community terms.
Step 4 In the Create Match Rule dialog box, expand Match Prefix and perform the following actions:
a) In the IP field, enter the explicit prefix list.
The explicit prefix can denote a BD subnet or an external network.
b) In the Route Type field, choose Route Destination.
c) Check the Aggregate check box only if you desire an aggregate prefix. Click Update, and click Submit.
The match rule can have one or more of the match destination rules and one or more match community
terms. Across the match types, the AND filter is supported, so all conditions in the match rule must match
for the route match rule to be accepted. When there are multiple match prefixes in Match Destination
Rules, the OR filter is supported. Any one match prefix is accepted as a route type if it matches.
Step 5 Under External Routed Networks, click and choose the available default layer 3 out.
If you desire another layer 3 out, you can choose that instead.
Step 9 Expand the + icon to display the Create Route Control Context dialog box.
Step 10 Enter a name for route control context, and choose the desired options for each field. To deny routes that
match criteria defined in match rule (Step 11), select the action deny. The default action is permit.
Step 11 In the Match Rule field, choose the rule that was created earlier.
Step 12 In the Set Rule field, choose Create Set Rules for a Route Map.
Typically, the route map/profile contains a match so that the prefix list is permitted in or out; in addition,
attributes can be set on these routes so that routes carrying those attributes can be matched further.
Step 13 In the Create Set Rules for a Route Map dialog box, enter a name for the action rule and check the desired
check boxes. Click Submit.
Step 14 In the Create Route Control Context dialog box, click OK. And in the Create Route Map/Profile dialog
box, click Submit.
This completes the creation of the route map/profile. The route map is a combination of match action rules
and set action rules. The route map is associated with export profile or import profile or redistribute profile
as desired by the user. You can enable a protocol with the route map.
Configuring Route Map/Profile with Explicit Prefix List Using NX-OS Style CLI
Before you begin
• Tenant and VRF must be configured.
• The VRF must be enabled on the leaf switch.
Procedure
Step 3 template route group group-name tenant tenant-name
Creates a route group template.
Step 4 ip prefix permit prefix/masklen [le {32 | 128}]
Adds an IP prefix to the route group.
Note The IP prefix can denote a BD subnet or an external network. Use the optional argument le 32 for
IPv4 and le 128 for IPv6 if you desire an aggregate prefix.
Example:
apic1(config-route-group)# ip prefix permit 15.15.15.0/24
Step 6 vrf context tenant tenant-name vrf vrf-name
Enters a tenant VRF mode for the node.
Example:
apic1(config-leaf)# vrf context tenant exampleCorp vrf v1
Step 7 template route-profile profile-name
Creates a template containing set actions that should be applied to the matched routes.
Example:
apic1(config-leaf-vrf)# template route-profile rp1
Step 8 set metric value
Adds the desired attributes (set actions) to the template.
Example:
apic1(config-leaf-vrf-template-route-profile)# set metric 128
Step 11 match route group group-name [order number] [deny]
Matches a route group that has already been created, and enters the match mode to configure the
route-profile. Additionally, choose the keyword deny if routes matching the match criteria defined in the
route group need to be denied. The default is permit.
Example:
apic1(config-leaf-vrf-route-map)# match route group g1 order 1
Step 14 route-map map-name {in | out}
Configures the route map for a BGP neighbor.
Example:
apic1(config-leaf-bgp-vrf-neighbor)# route-map bgpMap out
Configuring Route Map/Profile with Explicit Prefix List Using REST API
Before you begin
• Tenant and VRF must be configured.
Procedure
Note ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections
to external routers, or multipod connections through an Inter-Pod Network (IPN), it is critical that the MTU
is set appropriately on both sides. On some platforms, such as ACI, Cisco NX-OS, and Cisco IOS, the
configurable MTU value takes into account the IP headers (resulting in a max packet size to be set as 9216
bytes for ACI and 9000 for NX-OS and IOS). However, other platforms such as IOS-XR configure the MTU
value exclusive of packet headers (resulting in a max packet size of 8986 bytes).
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Configuring a Route Control Protocol to Use Import and Export Controls, With
the GUI
This example assumes that you have configured the Layer 3 outside network connections using BGP. It is
also possible to perform these tasks for a network configured using OSPF.
This task lists steps to create import and export policies. By default, import controls are not enforced, so the
import control must be manually assigned.
Procedure
Step 1 On the menu bar, click TENANTS > Tenant_name > Networking > External Routed Networks >
Layer3_Outside_name .
Step 2 Right click Layer3_Outside_name and click Create Route Map.
Step 3 In the Create Route Map dialog box, perform the following actions:
a) From the Name field drop-down list, choose the appropriate route profile.
Depending on your selection, whatever is advertised on the specific outside is automatically used.
b) In the Type field, choose Match Prefix AND Routing Policy.
c) Expand Order.
Step 4 In the Create Route Control Context dialog box, perform the following actions:
a) In the Order field, choose the desired order number.
b) In the Name field, enter a name for the route control private network.
c) From the Match Rule field drop-down list, click Create Match Rule For a Route Map.
d) In the Create Match Rule dialog box, in the Name field, enter a route match rule name. Click Submit.
Specify the match community regular expression term and match community terms as desired. Match
community factors will require you to specify the name, community and scope.
e) From the Set Attribute drop-down list, choose Create Set Rules For a Route Map.
f) In the Create Set Rules For a Route Map dialog box, in the Name field, enter a name for the rule.
g) Check the check boxes for the desired rules you want to set, and choose the appropriate values that are
displayed for the choices. Click Submit.
The policy is created and associated with the action rule.
h) Click OK.
i) In the Create Route Map dialog box, click Submit.
Step 5 In the Navigation pane, choose Route Profile > route_profile_name > route_control_private_network_name.
In the Work pane, under Properties the route profile policy and the associated action rule name are displayed.
Step 6 In the Navigation pane, click the Layer3_Outside_name.
In the Work pane, the Properties are displayed.
Step 7 (Optional) Click the Route Control Enforcement field and check the Import check box to enable the import
policy.
The import control policy is not enabled by default but can be enabled by the user. The import control policy
is supported for BGP and OSPF, but not for EIGRP. If the user enables the import control policy for an
unsupported protocol, it will be automatically ignored. The export control policy is supported for BGP, EIGRP,
and OSPF.
Note If BGP is established over OSPF, then the import control policy is applied only for BGP and ignored
for OSPF.
Step 8 To create a customized export policy, right-click Route Map/Profiles, click Create Route Map, and perform
the following actions:
a) In the Create Route Map dialog box, from the drop-down list in the Name field, choose or enter a
name for the export policy.
b) Expand the + sign in the dialog box.
c) In the Create Route Control Context dialog box, in the Order field, choose a value.
d) In the Name field, enter a name for the route control private network.
e) (Optional) From the Match Rule field drop-down list, choose Create Match Rule For a Route Map,
and create and attach a match rule policy if desired.
f) From the Set Attribute field drop-down list, choose Create Set Rules For a Route Map and click
OK.
Alternatively, if desired, you can choose an existing set action, and click OK.
g) In the Create Set Rules For A Route Map dialog box, in the Name field, enter a name.
h) Check the check boxes for the desired rules you want to set, and choose the appropriate values that are
displayed for the choices. Click Submit.
In the Create Route Control Context dialog box, the policy is created and associated with the action
rule.
i) Click OK.
j) In the Create Route Map dialog box, click Submit.
In the Work pane, the export policy is displayed.
Note To enable the export policy, it must first be applied. For the purpose of this example, it is applied
to all the subnets under the network.
Step 9 In the Navigation pane, expand External Routed Networks > External_Routed_Network_name > Networks >
Network_name, and perform the following actions:
a) Expand Route Control Profile.
b) In the Name field drop-down list, choose the policy created earlier.
c) In the Direction field drop-down list, choose Route Control Profile. Click Update.
Configuring a Route Control Protocol to Use Import and Export Controls, With
the NX-OS Style CLI
This example assumes that you have configured the Layer 3 outside network connections using BGP. It is
also possible to perform these tasks for a network configured using OSPF.
This section describes how to create a route map using the NX-OS CLI:
Procedure
apic1# configure
apic1(config)# leaf 101
# Create community-list
apic1(config-leaf)# template community-list standard CL_1 65536:20 tenant exampleCorp
apic1(config-leaf)# vrf context tenant exampleCorp vrf v1
Note In this case, public-subnets from bd1 and prefixes matching prefix-list p1 are exported out using
route-profile “default-export”, while public-subnets from bd2 are exported out using route-profile
“bd-rtctrl”.
Configuring a Route Control Protocol to Use Import and Export Controls, With
the REST API
This example assumes that you have configured the Layer 3 outside network connections using BGP. It is
also possible to perform these tasks for a network using OSPF.
Procedure
Configure the route control protocol using import and export controls.
Example:
Overview
This example shows how to configure Common Pervasive Gateway for IPv4 when using the Cisco APIC.
Two ACI fabrics can be configured with an IPv4 common gateway on a per bridge domain basis. Doing so
enables moving one or more virtual machine (VM) or conventional hosts across the fabrics while the host
retains its IP address. VM host moves across fabrics can be done automatically by the VM hypervisor. The
ACI fabrics can be co-located, or provisioned across multiple sites. The Layer 2 connection between the ACI
fabrics can be a local link, or can be across a bridged network. The following figure illustrates the basic
common pervasive gateway topology.
Note Depending upon the topology used to interconnect two Cisco ACI fabrics, it is required that the interconnecting
devices filter out traffic sourced from the virtual MAC address of the gateway switch virtual interface (SVI).
Procedure
a) In the L3 Configurations tab, click the Virtual MAC Address field, and change not-applicable to the
appropriate value. Click Submit.
Note The default BD MAC address values are the same for all ACI fabrics; this configuration requires
the bridge domain MAC values to be unique for each ACI fabric.
Confirm that the bridge domain MAC (pmac) values for each fabric are unique.
Step 7 To create an L2Out EPG to extend the BD to another fabric, in the Navigation pane, right-click External
Bridged Networks and open the Create Bridged Outside dialog box, and perform the following actions:
a) In the Name field, enter a name for the bridged outside.
b) In the Bridge Domain field, select the previously created bridge domain.
c) In the Encap field, enter the VLAN encapsulation to match the other fabric l2out encapsulation.
d) In the Path Type field, select Port, PC, or VPC to deploy the EPG and click Next.
e) To create an External EPG network, click in the Name field and enter a name for the network. Optionally,
specify the QoS class, and click Finish to complete the Common Pervasive Gateway configuration.
Procedure
Procedure
<fvAp name="test">
<fvAEPg name="web">
<fvRsBd tnFvBDName="test"/>
<fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/3]" encap="vlan-1002"/>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>
Procedure
Procedure
Step 5 endpoint ip A.B.C.D/LEN next-hop A.B.C.D [scope scope]
Creates an endpoint behind the EPG. The subnet mask must be /32 (/128 for IPv6) pointing to one IP
address or one endpoint.
Example:
apic1(config-tenant-app-epg)# endpoint ip 125.12.1.1/32 next-hop 26.0.14.101
Example
The following example shows the commands to configure an endpoint behind an EPG.
apic1# config
apic1(config)# tenant t1
apic1(config-tenant)# application ap1
apic1(config-tenant-app)# epg ep1
apic1(config-tenant-app-epg)# endpoint ip 125.12.1.1/32 next-hop 26.0.14.101
Procedure
To configure a static route for the BD used in a pervasive gateway, enter a post such as the following example:
Example:
<fvAEPg name="ep1">
<fvRsBd tnFvBDName="bd1"/>
<fvSubnet ip="2002:0db8:85a3:0000:0000:8a2e:0370:7344/128"
ctrl="no-default-gateway" >
<fvEpReachability>
<ipNexthopEpP nhAddr="2001:0db8:85a3:0000:0000:8a2e:0370:7343/128" />
</fvEpReachability>
</fvSubnet>
</fvAEPg>
Note When OSPF is used with BGP peering, OSPF is only used to learn and advertise the routes to the BGP peering
addresses. All route control applied to the Layer 3 Outside Network (EPG) is applied at the BGP protocol
level.
ACI supports a number of features for iBGP and eBGP connectivity to external peers. The BGP features are
configured on the BGP Peer Connectivity Profile.
The BGP peer connectivity profile features are described in the following table:
Local Autonomous System Number
The local AS feature is used to advertise a different AS number than the AS assigned to the fabric MP-BGP
Route Reflector Profile. It is supported only for eBGP neighbors, and the local AS number must be different
from the route reflector policy AS.
CLI: local-as xxx <no-prepend> <replace-as> <dual-as>
Step 5 On the menu bar, choose Fabric > Fabric Policies > POD Policies.
Step 6 In the Navigation pane, expand and right-click Policy Groups, and click Create POD Policy Group.
Step 7 In the Create POD Policy Group dialog box, in the Name field, enter the name of a pod policy group.
Step 8 In the BGP Route Reflector Policy drop-down list, choose the appropriate policy (default). Click Submit.
The BGP route reflector policy is associated with the route reflector pod policy group, and the BGP process
is enabled on the leaf switches.
Step 9 In the Navigation pane, choose Pod Policies > Profiles > default. In the Work pane, from the Fabric Policy
Group drop-down list, choose the pod policy that was created earlier. Click Submit.
The pod policy group is now applied to the fabric policy group.
Note In this example, the BGP fabric ASN is 100. Spine switches 104 and 105 are chosen as MP-BGP
route-reflectors.
apic1(config)# bgp-fabric
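# A minimal completion of this CLI example, consistent with the note above
# (fabric ASN 100, spines 104 and 105 as MP-BGP route reflectors);
# the route-reflector command form is an assumption.
apic1(config-bgp-fabric)# asn 100
apic1(config-bgp-fabric)# route-reflector spine 104,105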
<bgpInstPol name="default">
<bgpAsP asn="1" />
<bgpRRP>
<bgpRRNodePEp id="<spine_id1>"/>
<bgpRRNodePEp id="<spine_id2>"/>
</bgpRRP>
</bgpInstPol>
<fabricFuncP>
<fabricPodPGrp name="bgpRRPodGrp">
<fabricRsPodPGrpBGPRRP tnBgpInstPolName="default" />
</fabricPodPGrp>
</fabricFuncP>
Example:
For the PodP setup—
POST https://apic-ip-address/api/policymgr/mo/uni.xml
<fabricPodP name="default">
<fabricPodS name="default" type="ALL">
<fabricRsPodPGrp tDn="uni/fabric/funcprof/podpgrp-bgpRRPodGrp"/>
</fabricPodS>
</fabricPodP>
a) Use secure shell (SSH) to log in as an administrator to each leaf switch as required.
b) Enter the show processes | grep bgp command to verify the state is S.
If the state is NR (not running), the configuration was not successful.
Step 2 Verify that the autonomous system number is configured in the spine switches by performing the following
actions:
a) Use the SSH to log in as an administrator to each spine switch as required.
b) Execute the following commands from the shell window
Example:
cd /mit/sys/bgp/inst
Example:
grep asn summary
The configured autonomous system number must be displayed. If the autonomous system number value
displays as 0, the configuration was not successful.
Starting with Cisco APIC release 2.3, it is now possible to choose the behavior when deploying two (or more)
Layer 3 Outs using the same external encapsulation (SVI).
The encapsulation scope can now be configured as Local or VRF:
• Local scope (default): The example behavior is displayed in the figure titled Local Scope Encapsulation
and Two Layer 3 Outs.
• VRF scope: The ACI fabric configures the same bridge domain (VXLAN VNI) across all the nodes and
Layer 3 Out where the same external encapsulation (SVI) is deployed. See the example in the figure
titled VRF Scope Encapsulation and Two Layer 3 Outs.
The mapping among the CLI, API, and GUI syntax is as follows:
Note The CLI commands to configure encapsulation scope are only supported when the VRF is configured through
a named Layer 3 Out configuration.
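A minimal sketch of how the encapsulation scope might appear in the API on an SVI path; the encapScope attribute values shown (local for the default, ctx for VRF scope) and the path details are assumptions:
<!-- illustrative SVI path with VRF encapsulation scope; attribute values are assumed -->
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/5]"
ifInstT="ext-svi" encap="vlan-2001" encapScope="ctx" addr="10.1.1.1/24"/>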
Procedure
Step 1 On the menu bar, click Tenants > Tenant_name. In the Navigation pane, click Networking > External
Routed Networks > External Routed Network_name > Logical Node Profiles > Logical Interface Profile.
Step 2 In the Navigation pane, right-click Logical Interface Profile, and click Create Interface Profile.
Step 3 In the Create Interface Profile dialog box, perform the following actions:
a) In the Step 1 Identity screen, in the Name field, enter a name for the interface profile.
b) In the remaining fields, choose the desired options, and click Next.
c) In the Step 2 Protocol Profiles screen, choose the desired protocol profile details, and click Next.
d) In the Step 3 Interfaces screen, click the SVI tab, and click the + icon to open the Select SVI dialog box.
e) In the Specify Interface area, choose the desired values for the various fields.
f) In the Encap Scope field, choose the desired encapsulation scope value. Click OK.
The default value is Local.
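In the REST API, the same choice can be expressed with the encapScope attribute on the SVI path attachment,
where local corresponds to the Local scope and ctx corresponds to the VRF scope. The following is a hedged
sketch only; the path, VLAN, and address values are illustrative:
<l3extRsPathL3OutAtt ifInstT="ext-svi" encap="vlan-2001" encapScope="ctx"
addr="10.1.1.1/24" tDn="topology/pod-1/paths-101/pathep-[eth1/3]"/>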
Procedure
Step 3 Create the VLAN interface. The VLAN range is 1-4094.
Example:
apic1(config-leaf)# interface vlan 2001
Procedure
</fvTenant>
</polUni>
Note This feature is available in APIC Release 2.2(3x) and in APIC Release 3.1(1) and later. It is not supported
in APIC Release 3.0(x).
The Switch Virtual Interface (SVI) represents a logical interface between the bridging function and the routing
function of a VLAN in the device. SVI can have members that are physical ports, direct port channels, or
virtual port channels. The SVI logical interface is associated with VLANs, and the VLANs have port
membership.
The SVI state does not depend on the members. By default, the auto state value for an SVI in Cisco APIC is
disabled, and the SVI remains in the up state. This means that the SVI remains active even
if no interfaces are operational in the corresponding VLAN(s).
If the SVI auto state value is changed to enabled, then it depends on the port members in the associated VLANs.
When a VLAN interface has multiple ports in the VLAN, the SVI goes to the down state when all the ports
in the VLAN go down.
Procedure
Step 1 On the menu bar, click > Tenants > Tenant_name. In the Navigation pane, click Networking > External
Routed Networks > External Routed Network_name > Logical Node Profiles > Logical Interface Profile.
Step 2 In the Navigation pane, expand Logical Interface Profile, and click the appropriate logical interface profile.
Step 3 In the Work pane, click the + sign to display the SVI dialog box.
Step 4 To add an additional SVI, in the SVI dialog box, perform the following actions:
a) In the Path Type field, choose the appropriate path type.
b) In the Path field, from the drop-down list, choose the appropriate physical interface.
c) In the Encap field, choose the appropriate values.
d) In the Auto State field, choose the desired Auto State value.
The default value is Disabled.
Note To verify or change the Auto State value for an existing SVI, choose the appropriate SVI and
verify or change the value.
Procedure
Step 3 Create the VLAN interface. The VLAN range is 1-4094.
Example:
apic1(config-leaf)# interface vlan 2001
Procedure
To disable the autostate, you must change the value to disabled in the above example. For example,
autostate="disabled".
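Assuming the autostate attribute referenced above is set on the SVI path attachment, a minimal sketch of an
SVI with the auto state enabled looks like the following; the path, VLAN, and address values are illustrative:
<l3extRsPathL3OutAtt ifInstT="ext-svi" encap="vlan-2001" autostate="enabled"
addr="10.1.1.1/24" tDn="topology/pod-1/paths-101/pathep-[eth1/3]"/>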
Note All switches that will use l3extInstP EPG shared service contracts require the hardware and software support
available starting with the APIC 1.2(1x) and switch 11.2(1x) releases. Refer to the Cisco APIC Management,
Installation, Upgrade, and Downgrade Guide and Release Notes documentation for more details.
The figure below illustrates the major policy model objects that are configured for a shared l3extInstP EPG.
Take note of the following guidelines and limitations for shared Layer 3 outside network configurations:
• No tenant limitations: Tenants A and B can be any kind of tenant (user, common, infra, mgmt.). The
shared l3extInstP EPG does not have to be in the common tenant.
• Flexible placement of EPGs: EPG A and EPG B in the illustration above are in different tenants. EPG
A and EPG B could use the same bridge domain and VRF, but they are not required to do so. EPG A
and EPG B are in different bridge domains and different VRFs but still share the same l3extInstP EPG.
• A subnet can be private, public, or shared. A subnet that is to be advertised into a consumer or provider
EPG of an L3extOut must be set to shared. A subnet that is to be exported to an L3extOut must be set
to public.
• The shared service contract is exported from the tenant that contains the l3extInstP EPG that provides
shared Layer 3 outside network service. The shared service contract is imported into the tenants that
contain the EPGs that consume the shared service.
• Do not use taboo contracts with a shared L3 out; this configuration is not supported.
• The l3extInstP as a shared service provider is supported, but only with non-l3extInstP consumers (where
the L3extOut EPG is the same as the l3extInstP).
• Traffic Disruption (Flap): When an l3instP EPG is configured with an external subnet of 0.0.0.0/0 with
the scope property of the l3instP subnet set to shared route control (shared-rtctrl), or shared security
(shared-security), the VRF is redeployed with a global pcTag. This will disrupt all the external traffic in
that VRF (because the VRF is redeployed with a global pcTag).
• Prefixes for a shared L3extOut must be unique. Multiple shared L3extOut configurations with the
same prefix in the same VRF will not work. Be sure that the external subnets (external prefixes) that are
advertised into a VRF are unique (the same external subnet cannot belong to multiple l3instPs). An
L3extOut configuration (for example, named L3Out1) with prefix1 and a second Layer 3 outside
configuration (for example, named L3Out2) also with prefix1 belonging to the same VRF will not work
(because only 1 pcTag is deployed). Different behaviors of L3extOut are possible when configured on
the same leaf switch under the same VRF. The two possible scenarios are as follows:
• Scenario 1 has an L3extOut with an SVI interface and two subnets (10.10.10.0/24 and 0.0.0.0/0)
defined. If ingress traffic on the Layer 3 outside network has the matching prefix 10.10.10.0/24,
then the ingress traffic uses the External EPG pcTag. If ingress traffic on the Layer 3 Outside network
has the matching default prefix 0.0.0.0/0, then the ingress traffic uses the External Bridge pcTag.
• Scenario 2 has an L3extOut using a routed or routed-sub-interface with two subnets (10.10.10.0/24
and 0.0.0.0/0) defined. If ingress traffic on the Layer 3 outside network has the matching prefix
10.10.10.0/24, then the ingress traffic uses the External EPG pcTag. If ingress traffic on the Layer
3 outside network has the matching default prefix 0.0.0.0/0, then the ingress traffic uses the VRF
pcTag.
• As a result of these described behaviors, the following use cases are possible if the same VRF and
same leaf switch are configured with L3extOut-A and L3extOut-B using an SVI interface:
Case 1 is for L3extOut-A: This External Network EPG has two subnets defined: 10.10.10.0/24 &
0.0.0.0/1. If ingress traffic on L3extOut-A has the matching prefix 10.10.10.0/24, it uses the external
EPG pcTag & contract-A which is associated with L3extOut-A. When egress traffic on L3extOut-A
has no specific match found, but there is a maximum prefix match with 0.0.0.0/1, it uses the External
Bridge Domain (BD) pcTag & contract-A.
Case 2 is for L3extOut-B: This External Network EPG has one subnet defined: 0.0.0.0/0. When
ingress traffic on L3extOut-B has the matching prefix 10.10.10.0/24 (which is defined
under L3extOut-A), it uses the External EPG pcTag of L3extOut-A and the contract-A which is tied
with L3extOut-A. It does not use contract-B which is tied with L3extOut-B.
• Traffic not permitted: Traffic is not permitted when an invalid configuration sets the scope of the external
subnet to shared route control (shared-rtctrl) as a subset of a subnet that is set to shared security
(shared-security). For example, the following configuration is invalid:
• shared rtctrl: 10.1.1.0/24, 10.1.2.0/24
• shared security: 10.1.0.0/16
In this case, ingress traffic on a non-border leaf with a destination IP of 10.1.1.1 is dropped, since prefixes
10.1.1.0/24 and 10.1.2.0/24 are installed with a drop rule. Traffic is not permitted. Such traffic can be
enabled by revising the configuration to use the shared-rtctrl prefixes as shared-security prefixes as
well.
• Inadvertent traffic flow: Prevent inadvertent traffic flow by avoiding the following configuration scenarios:
• Case 1 configuration details:
• A Layer 3 outside network configuration (for example, named L3extOut-1) with VRF1 is called
provider1.
• A second Layer 3 outside network configuration (for example, named L3extOut-2) with VRF2
is called provider2.
• L3extOut-1 VRF1 advertises a default route to the Internet, 0.0.0.0/0 which enables both
shared-rtctrl and shared-security.
• L3extOut-2 VRF2 advertises specific subnets to DNS and NTP, 192.0.0.0/8 which enables
shared-rtctrl.
• Variation B: An EPG conforms to the allow_all contract of a second shared Layer 3 outside
network.
• Communications between EPG1 and L3extOut-1 is regulated by an allow_all contract.
• Communications between EPG1 and L3extOut-2 is regulated by an allow_icmp contract.
Result: Traffic from EPG1 to L3extOut-2 to 192.2.x.x conforms to the allow_all contract.
- 192.1.0.0/16 = shared-security
Result: Traffic going from 192.2.x.x also goes through to the EPG.
• Variation B: Unintended traffic goes through an EPG. Traffic coming in a shared L3extOut
can go through the EPG.
- The shared L3extOut VRF has an EPG with pcTag = prov vrf and a contract set to
allow_all
- The EPG <subnet> = shared.
Result: The traffic coming in on the Layer 3 out can go through the EPG.
• Routes of connected and transit subnets for a Layer 3 Out are leaked by enforcing contracts (L3Out-L3Out
as well as L3Out-EPG) and without leaking the dynamic or static routes between VRFs.
• Dynamic or static routes are leaked for a Layer 3 Out by enforcing contracts (L3Out-L3Out as well as
L3Out-EPG) and without advertising directly connected or transit routes between VRFs.
• Shared Layer 3 Outs in different VRFs can communicate with each other.
• There is no associated L3Out required for the bridge domain. When an Inter-VRF shared L3Out is used,
it is not necessary to associate the user tenant bridge domains with the L3Out in tenant common. If you
had a tenant-specific L3Out, it would still be associated to your bridge domains in your respective tenants.
• Two Layer 3 Outs can be in two different VRFs, and they can successfully exchange routes.
• This enhancement is similar to the Application EPG to Layer 3 Out inter-VRF communications. The
only difference is that instead of an Application EPG there is another Layer 3 Out. Therefore, in this
case, the contract is between two Layer 3 Outs.
In the following figure, there are two Layer 3 Outs with a shared subnet. There is a contract between the Layer
3 external instance profile (l3extInstP) in both the VRFs. In this case, the Shared Layer 3 Out for VRF1 can
communicate with the Shared Layer 3 Out for VRF2.
Figure 21: Shared Layer 3 Outs Communicating Between Two VRFs
Configuring Two Shared Layer 3 Outs in Two VRFs Using REST API
The following REST API configuration example shows how two shared Layer 3 Outs in two VRFs
communicate.
Procedure
<tenant name="t1_provider">
<fvCtx name="VRF1"/>
<l3extOut name="T0-o1-L3OUT-1">
<l3extRsEctx tnFvCtxName="o1"/>
<ospfExtP areaId='60'/>
<l3extInstP name="l3extInstP-1">
<fvRsProv tnVzBrCPName="vzBrCP-1">
</fvRsProv>
<l3extSubnet ip="192.168.2.0/24" scope="shared-rtctrl, shared-security"
aggregate=""/>
</l3extInstP>
</l3extOut>
</tenant>
Configuring Shared Layer 3 Out Inter-VRF Leaking Using the NX-OS Style CLI
- Named Example
Procedure
Configuring Shared Layer 3 Out Inter-VRF Leaking Using the NX-OS Style CLI
- Implicit Example
Procedure
Configuring Shared Layer 3 Out Inter-VRF Leaking Using the Advanced GUI
Before you begin
The contract label to be used by the consumer and provider is already created.
Procedure
In this scenario, check the check boxes for Shared Route Control Subnet and Shared Security Import
Subnet.
Step 21 In the Create External Network dialog box, click OK. In the Create Routed Outside dialog box, click
Finish.
Overview
This topic provides a typical example of how to configure an interleak of external routes such as OSPF when
using Cisco APIC.
Interleak from OSPF has been available in earlier releases. The feature now enables the user to set attributes,
such as community, preference, and metric for route leaking from OSPF to BGP.
Procedure
c) In the VRF field, from the drop-down list, choose the appropriate VRF.
d) From the External Routed Domain drop-down list, choose the appropriate external routed domain.
e) Check the check box for the desired protocol.
For the purposes of this task, check the OSPF check box.
f) In the Route Profile for Interleak field, click Create Route Profile.
Step 3 In the Create Route Profile dialog box, in the Name field, enter a route profile name.
Step 4 In the Type field, you must choose Match Routing Policy Only.
Step 5 Click the + sign to open the Create Route Control Context dialog box and perform the following actions:
a) Populate the Order and the Name fields as desired.
b) In the Set Attribute field, click Create Action Rule Profile.
c) In the Create Action Rule Profile dialog box, in the Name field, enter a name for the action rule profile.
d) Choose the desired attributes, and related community, criteria, tags, and preferences. Click OK.
e) In the Create Route Profile dialog box, click Submit.
Step 6 In the Create Routed Outside dialog box, expand the Nodes and Interfaces Protocol Profiles area.
Step 7 In the Create Node Profile dialog box, specify the node profile. Click OK.
Step 8 In the Create Routed Outside dialog box, click Next.
Step 9 In the External EPG Networks area, expand External EPG Networks.
Step 10 In the Create External Network dialog box, in the Name field, enter a name.
Step 11 Expand Subnet, and in the Create Subnet dialog box, in the IP address field, enter an IP address. Click OK.
Step 12 In the Create External Network dialog box, click OK.
Step 13 In the Create Routed Outside dialog box, click Finish.
The route profile for interleak is created and associated with the L3 Outside.
Procedure
Configure the route redistribution route policy using the NX-OS CLI:
a) Create a route profile with tenant as the scope:
Example:
b) Configure the redistribute route profile under BGP for OSPF using one of the route profiles created in the
previous step.
Example:
Note The redistribute route map allows all routes and applies the route profile for the
route-control actions. In this example, all OSPF-learned routes are redistributed into BGP with
tag 100.
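As a rough REST API counterpart to this CLI procedure, a route profile that sets tag 100 can be attached to
the L3Out as its interleak policy. This is a sketch only; the route-control class names and the relation used
below are assumptions, and all object names and values are illustrative:
<fvTenant name="t1">
  <rtctrlAttrP name="interleak_attrs">
    <!-- set route tag 100 on routes leaked from OSPF into BGP -->
    <rtctrlSetTag tag="100"/>
  </rtctrlAttrP>
  <rtctrlProfile name="interleak_rp">
    <rtctrlCtxP action="permit" name="allow_all" order="0">
      <rtctrlScope>
        <rtctrlRsScopeToAttrP tnRtctrlAttrPName="interleak_attrs"/>
      </rtctrlScope>
    </rtctrlCtxP>
  </rtctrlProfile>
  <l3extOut name="l3out1">
    <!-- attach the route profile as the interleak policy for this L3Out -->
    <l3extRsInterleakPol tnRtctrlProfileName="interleak_rp"/>
  </l3extOut>
</fvTenant>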
Procedure
<fvRsCustQosPol tnQosCustomPolName=""/>
<l3extSubnet aggregate="" descr="" ip="14.15.16.0/24" name=""
scope="export-rtctrl,import-security"/>
</l3extInstP>
<ospfExtP areaCost="1" areaCtrl="redistribute,summary" areaId="0.0.0.1" areaType="nssa"
descr=""/>
</l3extOut>
Overview
Endpoint IP and MAC addresses are learned by the ACI fabric through common network methods such as
ARP, GARP, and ND. ACI also uses an internal method that learns IP and MAC addresses through the
dataplane.
Dataplane IP learning per VRF is unique to the ACI fabric, in much the same way as endpoint learning.
While endpoint learning covers both IP and MAC addresses, dataplane IP learning is specific to IP addresses
in VRFs. In APIC, you can enable or disable dataplane IP learning at the VRF level.
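The same setting can be made through the REST API on the VRF object. The following is a hedged sketch,
assuming the ipDataPlaneLearning attribute on fvCtx takes the values enabled or disabled; the tenant and
VRF names are illustrative:
<fvTenant name="t1">
  <!-- disable dataplane IP learning for this VRF -->
  <fvCtx name="vrf1" ipDataPlaneLearning="disabled"/>
</fvTenant>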
Procedure
Step 1 Navigate to Tenants > tenant_name > Networking > VRFs > vrf_name .
Step 2 On the VRF - vrf_name work pane, click the Policy tab.
Step 3 Scroll to the bottom of the Policy work pane and locate IP Data-plane Learning.
Procedure
Overview
The IP Aging policy tracks and ages unused IP addresses on an endpoint. Tracking is performed using the
endpoint retention policy configured for the bridge domain to send ARP requests (for IPv4) and neighbor
solicitations (for IPv6) at 75% of the local endpoint aging interval. When no response is received from an IP
address, that IP address is aged out.
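For example, with a local endpoint aging interval of 900 seconds (the localEpAgeIntvl value used in the
endpoint retention policy example later in this section), tracking ARP requests or neighbor solicitations are
sent after roughly 675 seconds of inactivity for an IP address.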
This document explains how to configure the IP Aging policy.
Procedure
• Disabled—Disables IP aging
What to do next
To specify the interval used for tracking IP addresses on endpoints, create an End Point Retention policy by
navigating to Tenants > tenant-name > Policies > Protocol, right-click End Point Retention, and choose
Create End Point Retention Policy.
Procedure
What to do next
To specify the interval used for tracking IP addresses on endpoints, create an Endpoint Retention policy.
Procedure
What to do next
To specify the interval used for tracking IP addresses on endpoints, create an Endpoint Retention policy by
sending a post with XML such as the following example:
<fvEpRetPol bounceAgeIntvl="630" bounceTrig="protocol"
holdIntvl="350" lcOwn="local" localEpAgeIntvl="900" moveFreq="256"
name="EndpointPol1" remoteEpAgeIntvl="350"/>
Neighbor Discovery
The IPv6 Neighbor Discovery (ND) protocol is responsible for the address auto configuration of nodes,
discovery of other nodes on the link, determining the link-layer addresses of other nodes, duplicate address
detection, finding available routers and DNS servers, address prefix discovery, and maintaining reachability
information about the paths to other active neighbor nodes.
ND-specific Neighbor Solicitation or Neighbor Advertisement (NS or NA) and Router Solicitation or Router
Advertisement (RS or RA) packet types are supported on all ACI fabric Layer 3 interfaces, including physical,
Layer 3 sub interface, and SVI (external and pervasive). Up to APIC release 3.1(1x), RS/RA packets are used
for auto configuration for all Layer 3 interfaces but are only configurable for pervasive SVIs.
Starting with APIC release 3.1(2x), RS/RA packets are used for auto configuration and are configurable on
Layer 3 interfaces including routed interface, Layer 3 sub interface, and SVI (external and pervasive).
ACI bridge domain ND always operates in flood mode; unicast mode is not supported.
The ACI fabric ND support includes the following:
• Interface policies (nd:IfPol) control ND timers and behavior for NS/NA messages.
• ND prefix policies (nd:PfxPol) control RA messages.
• Configuration of IPv6 subnets for ND (fv:Subnet).
• ND interface policies for external networks.
• Configurable ND subnets for external networks, and arbitrary subnet configurations for pervasive bridge
domains are not supported.
• Configurable Static Adjacencies: (<vrf, L3Iface, ipv6 address> --> mac address)
• Dynamic Adjacencies: Learned via exchange of NS/NA packets
• Per Interface
• Control of ND packets (NS/NA)
• Neighbor Solicitation Interval
• Neighbor Solicitation Retry count
• Control of RA packets
• Suppress RA
• Suppress RA MTU
• RA Interval, RA Interval minimum, Retransmit time
Create a tenant, VRF, bridge domain with a neighbor discovery interface policy and a neighbor discovery
prefix policy.
Example:
<fvTenant descr="" dn="uni/tn-ExampleCorp" name="ExampleCorp" ownerKey="" ownerTag="">
<ndIfPol name="NDPol001" ctrl="managed-cfg" descr="" hopLimit="64" mtu="1500"
nsIntvl="1000" nsRetries="3" ownerKey="" ownerTag="" raIntvl="600" raLifetime="1800"
reachableTime="0" retransTimer="0"/>
<fvCtx descr="" knwMcastAct="permit" name="pvn1" ownerKey="" ownerTag=""
pcEnfPref="enforced">
</fvCtx>
<fvBD arpFlood="no" descr="" mac="00:22:BD:F8:19:FF" multiDstPktAct="bd-flood" name="bd1"
ownerKey="" ownerTag="" unicastRoute="yes" unkMacUcastAct="proxy" unkMcastAct="flood">
<fvRsBDToNdP tnNdIfPolName="NDPol001"/>
<fvRsCtx tnFvCtxName="pvn1"/>
<fvSubnet ctrl="nd" descr="" ip="34::1/64" name="" preferred="no" scope="private">
<fvRsNdPfxPol tnNdPfxPolName="NDPfxPol001"/>
</fvSubnet>
<fvSubnet ctrl="nd" descr="" ip="33::1/64" name="" preferred="no" scope="private">
<fvRsNdPfxPol tnNdPfxPolName="NDPfxPol002"/>
</fvSubnet>
</fvBD>
<ndPfxPol ctrl="auto-cfg,on-link" descr="" lifetime="1000" name="NDPfxPol001" ownerKey=""
ownerTag="" prefLifetime="1000"/>
<ndPfxPol ctrl="auto-cfg,on-link" descr="" lifetime="4294967295" name="NDPfxPol002"
ownerKey="" ownerTag="" prefLifetime="4294967295"/>
</fvTenant>
Note If you have a public subnet when you configure the routed outside, you must associate the bridge
domain with the outside configuration.
Configuring a Tenant, VRF, and Bridge Domain with IPv6 Neighbor Discovery
on the Bridge Domain Using the NX-OS Style CLI
Procedure
Step 1 Configure an IPv6 neighbor discovery interface policy and assign it to a bridge domain:
a) Create an IPv6 neighbor discovery interface policy:
Example:
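A minimal sketch, assuming the template ipv6 nd policy command that produces the
config-tenant-template-ipv6-nd mode shown later in this section; the tenant and policy names are illustrative:
apic1(config)# tenant ExampleCorp
apic1(config-tenant)# template ipv6 nd policy NDPol001
apic1(config-tenant-template-ipv6-nd)# ipv6 nd mtu 1500
apic1(config-tenant-template-ipv6-nd)# exit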
Step 2 Configure an IPV6 bridge domain subnet and neighbor discovery prefix policy on the subnet:
Example:
Creating the Tenant, VRF, and Bridge Domain with IPv6 Neighbor Discovery
on the Bridge Domain Using the GUI
This task shows how to create a tenant, a VRF, and a bridge domain (BD) within which two different types
of Neighbor Discovery (ND) policies are created. They are ND interface policy and ND prefix policy. While
ND interface policies are deployed under BDs, ND prefix policies are deployed for individual subnets. Each
BD can have its own ND interface policy. The ND interface policy is deployed on all IPv6 interfaces by
default. In Cisco APIC, there is already an ND interface default policy available to use. If desired, you can
create a custom ND interface policy to use instead. The ND prefix policy is on a subnet level. Every BD can
have multiple subnets, and each subnet can have a different ND prefix policy.
Procedure
Step 7 In the Create ND RA Prefix Policy dialog box, perform the following actions:
a) In the Name field, enter the name for the prefix policy.
Note For a given subnet there can only be one prefix policy. It is possible for each subnet to have a
different prefix policy, although subnets can use a common prefix policy.
Step 8 In the ND policy field drop-down list, click Create ND Interface Policy and perform the following tasks:
a) In the Name field, enter a name for the policy.
b) Click Submit.
Step 9 Click OK to complete the bridge domain configuration.
Similarly you can create additional subnets with different prefix policies as required.
A subnet with an IPv6 address is created under the BD and an ND prefix policy has been associated with it.
Note The steps here show how to associate an IPv6 neighbor discovery interface policy with a Layer 3 interface.
The specific example shows how to configure using the non-VPC interface.
Procedure
Step 1 In the Navigation pane, navigate to the appropriate external routed network under the appropriate Tenant.
Step 2 Under External Routed Networks, expand > Logical Node Profiles > Logical Node Profile_name > Logical
Interface Profiles.
Step 3 Double-click the appropriate Logical Interface Profile, and in the Work pane, click Policy > Routed
Interfaces > .
Note If you do not have a Logical Interface Profile created, you can create a profile here.
Step 4 In the Routed Interface dialog box, perform the following actions:
a) In the ND RA Prefix field, check the check box to enable ND RA prefix for the interface.
When enabled, the routed interface is available for auto configuration.
Also, the ND RA Prefix Policy field is displayed.
b) In the ND RA Prefix Policy field, from the drop-down list, choose the appropriate policy.
c) Choose other values on the screen as desired. Click Submit.
Note When you configure using a VPC interface, you must enable the ND RA prefix for both side
A and side B as both are members in the VPC configuration. In the Work Pane, in the Logical
Interface Profile screen, click the SVI tab. Under Properties, check the check boxes to enable the
ND RA Prefix for both Side A and Side B. Choose the identical ND RA Prefix Policy for Side A
and Side B.
Configure an IPv6 neighbor discovery interface policy and associate it with a Layer 3 interface:
The following example displays the configuration in a non-VPC set up.
Example:
Note For VPC ports, ndPfxP must be a child of l3extMember instead of l3extRsNodeL3OutAtt. The
following code snippet shows the configuration in a VPC setup.
<l3extLNodeP name="lnodeP001">
<l3extRsNodeL3OutAtt rtrId="11.11.205.1" rtrIdLoopBack="yes"
tDn="topology/pod-2/node-2011"/>
<l3extRsNodeL3OutAtt rtrId="12.12.205.1" rtrIdLoopBack="yes"
tDn="topology/pod-2/node-2012"/>
<l3extLIfP name="lifP002">
<l3extRsPathL3OutAtt addr="0.0.0.0" encap="vlan-205" ifInstT="ext-svi"
llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit"
tDn="topology/pod-2/protpaths-2011-2012/pathep-[vpc7]" >
<l3extMember addr="2001:20:25:1::1/64" descr="" llAddr="::" name=""
nameAlias="" side="A">
<ndPfxP >
<ndRsPfxPToNdPfxPol tnNdPfxPolName="NDPfxPol001"/>
</ndPfxP>
</l3extMember>
<l3extMember addr="2001:20:25:1::2/64" descr="" llAddr="::" name=""
nameAlias="" side="B">
<ndPfxP >
<ndRsPfxPToNdPfxPol tnNdPfxPolName="NDPfxPol001"/>
</ndPfxP>
</l3extMember>
</l3extRsPathL3OutAtt>
<l3extRsNdIfPol tnNdIfPolName="NDPol001"/>
</l3extLIfP>
</l3extLNodeP>
Procedure
Step 2 tenant tenant_name
Purpose: Creates a tenant and enters the tenant mode.
Example:
Step 4 ipv6 nd mtu mtu_value
Purpose: Assigns an MTU value to the IPv6 ND policy.
Example:
apic1(config-tenant-template-ipv6-nd)#
ipv6 nd mtu 1500
apic1(config-tenant-template-ipv6)# exit
apic1(config-tenant-template)# exit
apic1(config-tenant)#
Step 7 vrf member VRF_name
Purpose: Associates the VRF with the Layer 3 Out.
Example:
Step 8 external-l3 epg instp l3out l3extOut001
Purpose: Assigns the Layer 3 Out and the VRF to a Layer 3 interface.
Example:
Step 10 vrf context tenant ExampleCorp vrf pvn1 l3out l3extOut001
Purpose: Associates the VRF to the leaf switch.
Example:
apic1(config-leaf-vrf)# exit
Step 12 vrf member tenant ExampleCorp vrf pvn1 l3out l3extOut001
Purpose: Specifies the associated tenant, VRF, and Layer 3 Out on the interface.
Example:
Step 13 ipv6 address 2001:20:21:22::2/64 preferred
Purpose: Specifies the primary or preferred IPv6 address.
Example:
Step 15 inherit ipv6 nd NDPol001
Purpose: Configures the ND policy under the Layer 3 interface.
Example:
Step 1 Disable the Neighbor Discovery Duplicate Address Detection process for a subnet by changing the value of
the ipv6Dad entry for that subnet to disabled.
The following example shows how to set the Neighbor Discovery Duplicate Address Detection entry for the
2001:DB8:A::11/64 subnet to disabled:
Note In the following REST API example, long single lines of text are broken up with the \ character to
improve readability.
Example:
</l3extRsPathL3OutAtt>
</l3extLIfP>
</l3extLNodeP>
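As a hedged sketch of the relevant attributes, the ipv6Dad setting appears on the SVI path attachment for the
primary address and on the additional-address object for the shared secondary address. The l3extIp class and
all addresses, VLANs, and paths below are illustrative assumptions:
<l3extRsPathL3OutAtt addr="2001:DB8:A::2/64" ipv6Dad="enabled" ifInstT="ext-svi"
encap="vlan-100" tDn="topology/pod-1/paths-101/pathep-[eth1/3]">
  <!-- shared secondary address with DAD disabled -->
  <l3extIp addr="2001:DB8:A::11/64" ipv6Dad="disabled"/>
</l3extRsPathL3OutAtt>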
Step 2 Enter the show ipv6 int command on the leaf switch to verify that the configuration was pushed out correctly
to the leaf switch. For example:
swtb23-leaf5# show ipv6 int vrf icmpv6:v1
IPv6 Interface Status for VRF "icmpv6:v1"(9)
Procedure
Step 1 Navigate to the appropriate page to access the DAD field for that interface. For example:
a) Navigate to Tenants > Tenant > Networking > External Routed Networks > L3Out > Logical Node
Profiles > node > Logical Interface Profiles, then select the interface that you want to configure.
b) Click on Routed Sub-interfaces or SVI, then click on the Create (+) button to configure that interface.
Step 2 For this interface, make the following settings for the DAD entries:
• For the primary address, set the value for the DAD entry to enabled.
• For the shared secondary address, set the value for the DAD entry to disabled. Note that if the secondary
address is not shared across border leaf switches, then you do not need to disable the DAD for that
address.
Example:
For example, if you were configuring this setting for the SVI interface, you would:
• Set the Side A IPv6 DAD to enabled.
• Set the Side B IPv6 DAD to disabled.
Example:
As another example, if you were configuring this setting for the routed sub-interface interface, you would:
• In the main Select Routed Sub-Interface page, set the value for IPv6 DAD for the routed sub-interface
to enabled.
• Click on the Create (+) button on the IPv4 Secondary/IPv6 Additional Addresses area to access the Create
Secondary IP Address page, then set the value for IPv6 DAD to disabled. Then click on the OK button
to apply the changes in this screen.
Layer 3 Multicast
In the ACI fabric, most unicast and multicast routing operate together on the same border leaf switches, with
the multicast protocol operating over the unicast routing protocols.
In this architecture, only the border leaf switches run the full Protocol Independent Multicast (PIM) protocol.
Non-border leaf switches run PIM in a passive mode on the interfaces. They do not peer with any other PIM
routers. The border leaf switches peer with other PIM routers connected to them over L3 Outs and also with
each other.
The following figure shows the border leaf (BL) switches (BL1 and BL2) connecting to routers (R1 and R2)
in the multicast cloud. Each virtual routing and forwarding (VRF) in the fabric that requires multicast routing
will peer separately with external multicast routers.
Note The user must configure a unique loopback address on each border leaf on each VRF that enables multicast
routing.
Any loopback configured for unicast routing can be reused. This loopback address must be routed from the
external network and will be injected into the fabric MP-BGP (Multiprotocol Border Gateway Protocol) routes
for the VRF. The fabric interface source IP will be set to this loopback as the loopback interface. The following
figure shows the fabric for multicast routing.
1 The GIPo (Group IP outer address) is the destination multicast address used in the outer IP header of the VXLAN packet for all multi-destination
(Broadcast, Unknown unicast, and Multicast) packets forwarded within the fabric.
Figure 23: Fabric for Multicast routing
Note Layer 3 Out ports and sub-interfaces are supported while external SVIs are not supported. Since external SVIs
are not supported, PIM cannot be enabled in L3-VPC.
VRFs, but not a combination of VRFs and BDs. APIC will be required to ascertain this. In order to handle
the VRF GIPo in addition to the BD GIPos already handled and build GIPo trees for them, IS-IS is modified.
All multicast traffic for PIM enabled BDs will be forwarded using the VRF GIPo. This includes both Layer
2 and Layer 3 IP multicast. Any broadcast or unicast flood traffic on the multicast enabled BDs will continue
to use the BD GIPo. Non-IP multicast enabled BDs will use the BD GIPo for all multicast, broadcast, and
unicast flood traffic.
The APIC GUI will display a GIPo multicast address for all BDs and VRFs. The address displayed is always
a /28 network address (the last four bits are zero). When the VXLAN packet is sent in the fabric, the destination
multicast GIPo address will be an address within this /28 block and is used to select one of 16 FTAG trees.
This achieves load balancing of multicast traffic across the fabric.
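For example, if the GIPo displayed for a VRF is 225.1.192.0/28 (a hypothetical value), the destination address
actually used in the VXLAN outer header is one of the 16 addresses 225.1.192.0 through 225.1.192.15, with
the low-order bits selecting the FTAG tree.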
First-Hop Functionality
The directly connected leaf will handle the first-hop functionality needed for PIM sparse mode.
The Last-Hop
The last-hop router is connected to the receiver and is responsible for doing a shortest-path tree (SPT) switchover
in case of PIM any-source multicast (ASM). The border leaf switches will handle this functionality. The
non-border leaf switches do not participate in this function.
Fast-Convergence Mode
The fabric supports a configurable fast-convergence mode where every border leaf switch with external
connectivity towards the root (RP for (*,G) and source for (S, G)) pulls traffic from the external network. To
prevent duplicates, only one of the BL switches forwards the traffic to the fabric. The BL that forwards the
traffic for the group into the fabric is called the designated forwarder (DF) for the group. The stripe winner
for the group decides on the DF. If the stripe winner has reachability to the root, then the stripe winner is also
the DF. If the stripe winner does not have external connectivity to the root, then that BL chooses a DF by
sending a PIM join over the fabric interface. All non-stripe winner BL switches with external reachability to
the root send out PIM joins to attract traffic but continue to have the fabric interface as the RPF interface for
the route. This results in the traffic reaching the BL switch on the external link, but getting dropped.
The advantage of the fast-convergence mode is that when there is a stripe owner change, due to a loss of a BL
switch for example, the only action needed is for the new stripe winner to program the correct Reverse
Path Forwarding (RPF) interface. There is no latency incurred by joining the PIM tree from the new stripe
winner. This comes at the cost of the additional bandwidth usage on the non-stripe winners' external links.
2
All multicast group membership information is stored in the COOP database on the spines. When a border leaf boots up, it pulls this information from the spine.
Note Fast-convergence mode can be disabled in deployments where the cost of additional bandwidth outweighs
the convergence time saving.
Inter-VRF multicast supports use cases such as the following:
• A multicast content provider is in one VRF while different departments of an organization are receiving
the multicast content in different VRFs.
ACI release 4.0 adds support for inter-VRF multicast, which enables sources and receivers to be in different
VRFs. This allows the receiver VRF to perform the reverse path forwarding (RPF) lookup for the multicast
route in the source VRF. When a valid RPF interface is formed in the source VRF, this enables an outgoing
interface (OIF) in the receiver VRF. All inter-VRF multicast traffic will be forwarded within the fabric in the
source VRF. The inter-VRF forwarding and translation is performed on the leaf switch where the receivers
are connected.
Note • For any-source multicast, the RP used must be in the same VRF as the source.
• Source and receiver VRFs can be in an EPG or connected behind an L3 Out.
For ACI, inter-VRF multicast is configured per receiver VRF. Every NBL/BL that has the receiver VRF will
get the same inter-VRF configuration. Each NBL that may have directly connected receivers, and BLs that
may have external receivers, need to have the source VRF deployed. Control plane signaling and data plane
forwarding will do the necessary translation and forwarding between the VRFs inside the NBL/BL that has
receivers. Any packets forwarded in the fabric will be in the source VRF.
• FX models:
• N9K-93108TC-FX
• N9K-93180YC-FX
• N9K-C9348GC-FXP
• FX2 models:
• N9K-93240YC-FX2
• N9K-C9336C-FX2
• PIM is supported on Layer 3 Out routed interfaces and routed subinterfaces including Layer 3 port-channel
interfaces. PIM is not supported on Layer 3 Out SVI interfaces.
• Enabling PIM on an L3Out causes an implicit external network to be configured. This action results in
the L3Out being deployed and protocols potentially coming up even if you have not defined an external
network.
• For Layer 3 multicast support, when the ingress leaf switch receives a packet from a source that is attached
on a bridge domain, and the bridge domain is enabled for multicast routing, the ingress leaf switch sends
only a routed VRF copy to the fabric (routed implies that the TTL is decremented by 1, and the source-mac
is rewritten with a pervasive subnet MAC). The egress leaf switch also routes the packet into receivers
in all the relevant bridge domains. Therefore, if a receiver is on the same bridge domain as the source,
but on a different leaf switch than the source, that receiver continues to get a routed copy, although it is
in the same bridge domain. This also applies if the source and receiver are on the same bridge domain
and on the same leaf switch, if PIM is enabled on this bridge domain.
For more information, see details about Layer 3 multicast support for multipod that leverages existing
Layer 2 design, at the following link Adding Pods.
• Starting with Release 3.1(1x), Layer 3 multicast is supported with FEX. Multicast sources or receivers
that are connected to FEX ports are supported. For further details about how to add FEX in your testbed,
see Configure a Fabric Extender with Application Centric Infrastructure at this URL:
https://www.cisco.com/c/en/us/support/docs/cloud-systems-management/
application-policy-infrastructure-controller-apic/200529-Configure-a-Fabric-Extender-with-Applica.html.
For releases preceding Release 3.1(1x), Layer 3 multicast is not supported with FEX. Multicast sources
or receivers that are connected to FEX ports are not supported.
Note ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections
to external routers, or multipod connections through an Inter-Pod Network (IPN), it is critical that the MTU
is set appropriately on both sides. On some platforms, such as ACI, Cisco NX-OS, and Cisco IOS, the
configurable MTU value takes into account the IP headers (resulting in a max packet size to be set as 9216
bytes for ACI and 9000 for NX-OS and IOS). However, other platforms such as IOS-XR configure the MTU
value exclusive of packet headers (resulting in a max packet size of 8986 bytes).
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Note Click the help icon (?) located in the top-right corner of the Work pane and of each dialog box for information
about a visible tab or a field.
Procedure
Step 1 Navigate to Tenants > Tenant_name > Networking > VRFs > VRF_name > Multicast.
In the Work pane, a message is displayed as follows: PIM is not enabled on this VRF. Would you like to
enable PIM?
Step 2 Click YES, ENABLE MULTICAST.
Step 3 Configure interfaces:
• Fabric RP
1. Expand the Fabric RP table.
2. Enter the appropriate value in each field.
3. Click Update.
• Auto-RP
1. Enter the appropriate value in each field.
Step 2 Enter the configure mode for a tenant, the configure mode for the VRF, and configure PIM options.
Example:
apic1(config)# tenant tenant1
apic1(config-tenant)# vrf context tenant1_vrf
apic1(config-tenant-vrf)# ip pim
apic1(config-tenant-vrf)# ip pim fast-convergence
apic1(config-tenant-vrf)# ip pim bsr forward
Step 3 Configure IGMP and the desired IGMP options for the VRF.
Example:
apic1(config-tenant-vrf)# ip igmp
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# interface bridge-domain tenant1_bd
apic1(config-tenant-interface)# ip multicast
apic1(config-tenant-interface)# ip igmp allow-v3-asm
apic1(config-tenant-interface)# ip igmp fast-leave
apic1(config-tenant-interface)# ip igmp inherit interface-policy igmp_intpol1
apic1(config-tenant-interface)# exit
Step 4 Enter the L3 Out mode for the tenant, enable PIM, and enter the leaf interface mode. Then configure PIM for
this interface.
Example:
apic1(config-tenant)# l3out tenant1_l3out
apic1(config-tenant-l3out)# ip pim
apic1(config-tenant-l3out)# exit
apic1(config-tenant)# exit
apic1(config)#
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/125
apic1(config-leaf-if)# ip pim inherit interface-policy pim_intpol1
Step 5 Configure IGMP for the interface using the IGMP commands.
Example:
</fvCtx>
</fvTenant>
Step 2 Configure L3 Out and enable multicast (PIM, IGMP) on the L3 Out.
Example:
<l3extOut enforceRtctrl="export" name="l3out-pim_l3out1">
<l3extRsEctx tnFvCtxName="ctx1"/>
<l3extLNodeP configIssues="" name="bLeaf-CTX1-101">
<l3extRsNodeL3OutAtt rtrId="200.0.0.1" rtrIdLoopBack="yes"
tDn="topology/pod-1/node-101"/>
<l3extLIfP name="if-PIM_Tenant-CTX1" tag="yellow-green">
<igmpIfP/>
<pimIfP>
<pimRsIfPol tDn="uni/tn-PIM_Tenant/pimifpol-pim_pol1"/>
</pimIfP>
<l3extRsPathL3OutAtt addr="131.1.1.1/24" ifInstT="l3-port" mode="regular"
mtu="1500" tDn="topology/pod-1/paths-101/pathep-[eth1/46]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-l3outDom"/>
<l3extInstP name="l3out-PIM_Tenant-CTX1-1topo" >
</l3extInstP>
<pimExtP enabledAf="ipv4-mcast" name="pim"/>
</l3extOut>
Step 3 Configure a BD under the tenant and enable multicast and IGMP on the BD.
Example:
<fvTenant dn="uni/tn-PIM_Tenant" name="PIM_Tenant">
<fvBD arpFlood="yes" mcastAllow="yes" multiDstPktAct="bd-flood" name="bd2" type="regular"
unicastRoute="yes" unkMacUcastAct="flood" unkMcastAct="flood">
<igmpIfP/>
<fvRsBDToOut tnL3extOutName="l3out-pim_l3out1"/>
<fvRsCtx tnFvCtxName="ctx1"/>
<fvRsIgmpsn/>
<fvSubnet ctrl="" ip="41.1.1.254/24" preferred="no" scope="private" virtual="no"/>
</fvBD>
</fvTenant>
Example:
Configuring a static RP:
<fvTenant name="t0">
<pimRouteMapPol name="fabricrp-rtmap">
<pimRouteMapEntry grp="226.20.0.0/24" order="1" />
</pimRouteMapPol>
<fvCtx name="ctx1">
<pimCtxP ctrl="">
<pimFabricRPPol status="">
<pimStaticRPEntryPol rpIp="6.6.6.6">
<pimRPGrpRangePol>
<rtdmcRsFilterToRtMapPol tDn="uni/tn-t0/rtmap-fabricrp-rtmap" />
</pimRPGrpRangePol>
</pimStaticRPEntryPol>
</pimFabricRPPol>
</pimCtxP>
</fvCtx>
</fvTenant>
Note We recommend that you do not disable IGMP snooping on bridge domains. If you disable IGMP snooping,
you may see reduced multicast performance because of excessive false flooding within the bridge domain.
IGMP snooping software examines IP multicast traffic within a bridge domain to discover the ports where
interested receivers reside. Using the port information, IGMP snooping can reduce bandwidth consumption
in a multi-access bridge domain environment to avoid flooding the entire bridge domain. By default, IGMP
snooping is enabled on the bridge domain.
This figure shows the IGMP routing functions and IGMP snooping functions both contained on an ACI leaf
switch with connectivity to a host. The IGMP snooping feature snoops the IGMP membership reports and
leave messages, and forwards them only when necessary to the IGMP router function.
IGMP snooping operates upon IGMPv1, IGMPv2, and IGMPv3 control plane packets where Layer 3 control
plane packets are intercepted and influence the Layer 2 forwarding behavior.
IGMP snooping has the following proprietary features:
• Source filtering that allows forwarding of multicast packets based on destination and source IP addresses
• Multicast forwarding based on IP addresses rather than the MAC address
• Multicast forwarding alternately based on the MAC address
The ACI fabric supports IGMP snooping only in proxy-reporting mode, in accordance with the guidelines
provided in Section 2.1.1, "IGMP Forwarding Rules," in RFC 4541:
As a result, the ACI fabric will send IGMP reports with the source IP address of 0.0.0.0.
Note For more information about IGMP snooping, see RFC 4541.
Virtualization Support
You can define multiple virtual routing and forwarding (VRF) instances for IGMP snooping.
On leaf switches, you can use the show commands with a VRF argument to provide a context for the information
displayed. The default VRF is used if no VRF argument is supplied.
The APIC IGMP Snooping Function, IGMPv1, IGMPv2, and the Fast Leave
Feature
Both IGMPv1 and IGMPv2 support membership report suppression, which means that if two hosts on the
same subnet want to receive multicast data for the same group, the host that receives a member report from
the other host suppresses sending its report. Membership report suppression occurs for hosts that share a port.
If no more than one host is attached to each switch port, you can configure the fast leave feature in IGMPv2.
The fast leave feature does not send last member query messages to hosts. As soon as APIC receives an IGMP
leave message, the software stops forwarding multicast data to that port.
IGMPv1 does not provide an explicit IGMP leave message, so the APIC IGMP snooping function must rely
on the membership message timeout to indicate that no hosts remain that want to receive multicast data for a
particular group.
Note The IGMP snooping function ignores the configuration of the last member query interval when you enable
the fast leave feature because it does not check for remaining hosts.
Note The IP address for the querier should not be a broadcast IP address, multicast IP address, or 0 (0.0.0.0).
When an IGMP snooping querier is enabled, it sends out periodic IGMP queries that trigger IGMP report
messages from hosts that want to receive IP multicast traffic. IGMP snooping listens to these IGMP reports
to establish appropriate forwarding.
The IGMP snooping querier performs querier election as described in RFC 2236. Querier election occurs in
the following configurations:
• When there are multiple switch queriers configured with the same subnet on the same VLAN on different
switches.
• When the configured switch querier is in the same subnet as other Layer 3 SVI queriers.
Procedure
Step 1 Click the Tenants tab and the name of the tenant on whose bridge domain you intend to configure IGMP
snooping support.
Step 2 In the Navigation pane, click Networking > Protocol Policies > IGMP Snoop.
Step 3 Right-click IGMP Snoop and select Create IGMP Snoop Policy.
Step 4 In the Create IGMP Snoop Policy dialog, configure a policy as follows:
a) In the Name and Description fields, enter a policy name and optional description.
b) In the Admin State field, select Enabled or Disabled to enable or disable this entire policy.
c) Select or unselect Fast Leave to enable or disable IGMP V2 immediate dropping of queries through this
policy.
d) Select or unselect Enable querier to enable or disable the IGMP querier activity through this policy.
Note For this option to be effectively enabled, the Subnet Control: Querier IP setting must also be
enabled in the subnets assigned to the bridge domains to which this policy is applied. The
navigation path to the properties page on which this setting is located is Tenants >
tenant_name > Networking > Bridge Domains > bridge_domain_name > Subnets >
subnet_name.
e) Specify in seconds the Last Member Query Interval value for this policy.
IGMP uses this value when it receives an IGMPv2 Leave report. This means that at least one host wants
to leave the group. After it receives the Leave report, it checks that the interface is not configured for
IGMP Fast Leave and if not, it sends out an out-of-sequence query.
f) Specify in seconds the Query Interval value for this policy.
This value is used to define the amount of time the IGMP function will store a particular IGMP state if it
does not hear any reports on the group.
g) Specify in seconds Query Response Interval value for this policy.
When a host receives the query packet, it starts counting to a random value, less than the maximum response
time. When this timer expires, the host replies with a report.
h) Specify the Start query Count value for this policy.
Number of queries sent at startup that are separated by the startup query interval. Values range from 1 to
10. The default is 2.
i) Specify in seconds a Start Query Interval for this policy.
By default, this interval is shorter than the query interval so that the software can establish the group state
as quickly as possible. Values range from 1 to 18,000 seconds. The default is 31 seconds.
The new IGMP Snoop policy is listed in the Protocol Policies - IGMP Snoop summary page.
What to do next
To put this policy into effect, assign it to any bridge domain.
Note For the Enable Querier option on the assigned policy to be effectively enabled, the Subnet Control: Querier
IP setting must also be enabled in the subnets assigned to the bridge domains to which this policy is applied.
The navigation path to the properties page on which this setting is located is Tenants > tenant_name >
Networking > Bridge Domains > bridge_domain_name > Subnets > subnet_name.
Procedure
Step 1 Click the APIC Tenants tab and select the name of the tenant whose bridge domains you intend to configure
with an IGMP Snoop policy.
Step 2 In the APIC navigation pane, click Networking > Bridge Domains, then select the bridge domain to which
you intend to apply your policy-specified IGMP Snoop configuration.
Step 3 On the main Policy tab, scroll down to the IGMP Snoop Policy field and select the appropriate IGMP policy
from the drop-down menu.
Step 4 Click Submit.
The target bridge domain is now associated with the specified IGMP Snooping policy.
Procedure
Step 2 Modify the snooping policy as necessary. The example NX-OS style CLI sequence:
• Specifies a custom value for the query-interval in the IGMP Snooping policy named cookieCut1.
• Confirms the modified IGMP Snooping value for the policy cookieCut1.
Example:
apic1(config-tenant-template-ip-igmp-snooping)# ip igmp snooping query-interval 300
apic1(config-tenant-template-ip-igmp-snooping)# show run all
# Command: show running-config all
Step 3 Assign the policy to a bridge domain. The example NX-OS style CLI sequence:
• Navigates to bridge domain BD3.
• Assigns the IGMP Snooping policy cookieCut1, with its modified IGMP Snooping value, to the bridge domain.
Example:
apic1(config-tenant)# int bridge-domain bd3
apic1(config-tenant-interface)# ip igmp snooping policy cookieCut1
What to do next
You can assign the IGMP Snooping policy to multiple bridge domains.
To configure an IGMP Snooping policy and assign it to a bridge domain, send a post with XML such as the
following example:
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<fvTenant name="mcast_tenant1">
<!-- Create an IGMP snooping template, and provide the options -->
<igmpSnoopPol name="igmp_snp_bd_21"
adminSt="enabled"
lastMbrIntvl="1"
queryIntvl="125"
rspIntvl="10"
startQueryCnt="2"
startQueryIntvl="31"
/>
<fvCtx name="ip_video"/>
<fvBD name="bd_21">
<fvRsCtx tnFvCtxName="ip_video"/>
<!-- Bind the IGMP snooping policy to the bridge domain -->
<fvRsIgmpsn tnIgmpSnoopPolName="igmp_snp_bd_21"/>
</fvBD>
</fvTenant>
This example creates and configures the IGMP Snooping policy, igmp_snp_bd_21, with the following properties,
and binds the IGMP policy, igmp_snp_bd_21, to bridge domain, bd_21:
• Administrative state is enabled
• Last Member Query Interval is the default 1 second
• Query Interval is the default 125 seconds
• Query Response interval is the default 10 seconds
• The Start Query Count is the default 2 messages
• The Start Query interval is the default 31 seconds
• Deploying an EPG on a Specific Port with APIC Using the NX-OS Style CLI
• Deploying an EPG on a Specific Port with APIC Using the REST API
Enabling IGMP Snooping and Multicast on Static Ports Using the GUI
You can enable IGMP snooping and multicast on ports that have been statically assigned to an EPG. Afterwards
you can create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
Note For details on static port assignment, see Deploying an EPG on a Specific Node
or Port Using the GUI in the Cisco APIC Layer 2 Networking Configuration
Guide.
• Identify the IP addresses that you want to be recipients of IGMP snooping and multicast traffic.
Procedure
Step 1 Click Tenant > tenant_name > Application Profiles > application_name > Application EPGs > epg_name >
Static Ports.
Navigating to this spot displays all the ports you have statically assigned to the target EPG.
Step 2 Click the port to which you intend to statically assign group members for IGMP snooping.
This action displays the Static Path page.
Step 3 On the IGMP Snoop Static Group table, click + to add an IGMP Snoop Address Group entry.
Adding an IGMP Snoop Address Group entry associates the target static port with a specified multicast IP
address and enables it to process the IGMP snoop traffic received at that address.
a) In the Group Address field, enter the multicast IP address to associate with this interface and this EPG.
b) In the Source Address field enter the IP address of the source to the multicast stream, if applicable.
c) Click Submit.
When configuration is complete, the target interface is enabled to process IGMP Snooping protocol traffic
sent to its associated multicast IP address.
Note You can repeat this step to associate additional multicast addresses with the target static port.
Enabling IGMP Snooping and Multicast on Static Ports in the NX-OS Style CLI
You can enable IGMP snooping and multicast on ports that have been statically assigned to an EPG. Then
you can create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
The steps described in this task assume the pre-configuration of the following entities:
• Tenant: tenant_A
• Application: application_A
• EPG: epg_A
• Bridge Domain: bridge_domain_A
• vrf: vrf_A -- a member of bridge_domain_A
• VLAN Domain: vd_A (configured with a range of 300-310)
• Leaf switch: 101 and interface 1/10
The target interface 1/10 on switch 101 is associated with VLAN 305 and statically linked with tenant_A,
application_A, epg_A
• Leaf switch: 101 and interface 1/11
The target interface 1/11 on switch 101 is associated with VLAN 309 and statically linked with tenant_A,
application_A, epg_A
Note For details on static port assignment, see Deploying an EPG on a Specific Port
with APIC Using the NX-OS Style CLI in the Cisco APIC Layer 2 Networking
Configuration Guide.
• Identify the IP addresses that you want to be recipients of IGMP snooping multicast traffic.
Procedure
apic1# conf t
apic1(config)# tenant tenant_A; application application_A; epg epg_A
apic1(config-tenant-app-epg)# ip igmp snooping static-group 227.1.1.1 leaf 101 interface ethernet 1/11 vlan 309
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
Enabling IGMP Snooping and Multicast on Static Ports Using the REST API
You can enable IGMP snooping and multicast processing on ports that have been statically assigned to an
EPG. You can create and assign access groups of users that are permitted or denied access to the IGMP snoop
and multicast traffic enabled on those ports.
Procedure
To configure application EPGs with static ports, enable those ports to receive and process IGMP snooping
and multicast traffic, and assign groups to access or be denied access to that traffic, send a post with XML
such as the following example.
In the following example, IGMP snooping is enabled on leaf 102 interface 1/10 on VLAN 202. Multicast
IP addresses 224.1.1.1 and 225.1.1.1 are associated with this port.
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<fvTenant name="tenant_A">
<fvAp name="application">
<fvAEPg name="epg_A">
<fvRsPathAtt encap="vlan-202" instrImedcy="immediate" mode="regular"
tDn="topology/pod-1/paths-102/pathep-[eth1/10]">
<!-- IGMP snooping static group case -->
<igmpSnoopStaticGroup group="224.1.1.1" source="0.0.0.0"/>
<igmpSnoopStaticGroup group="225.1.1.1" source="2.2.2.2"/>
</fvRsPathAtt>
</fvAEPg>
</fvAp>
</fvTenant>
Enabling Group Access to IGMP Snooping and Multicast Using the GUI
After you enable IGMP snooping and multicasting on ports that have been statically assigned to an EPG, you
can then create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
Note For details on static port assignment, see Deploying an EPG on a Specific Node or Port Using the GUI in the
Cisco APIC Layer 2 Networking Configuration Guide.
Procedure
Step 1 Click Tenant > tenant_name > Application Profiles > application_name > Application EPGs > epg_name >
Static Ports.
Navigating to this spot displays all the ports you have statically assigned to the target EPG.
Step 2 Click the port to which you intend to assign multicast group access, to display the Static Port Configuration
page.
Step 3 Click Actions > Create IGMP Snoop Access Group to display the IGMP Snoop Access Group table.
Step 4 Locate the IGMP Snoop Access Group table and click + to add an access group entry.
Adding an IGMP Snoop Access Group entry creates a user group with access to this port, associates it with
a multicast IP address, and permits or denies that group access to the IGMP snoop traffic received at that
address.
a) Select Create Route Map Policy to display the Create Route Map Policy window.
b) In the Name field assign the name of the group that you want to allow or deny multicast traffic.
c) In the Route Maps table click + to display the route map dialog.
d) In the Order field, if multiple access groups are being configured for this interface, select a number that
reflects the order in which this access group will be permitted or denied access to the multicast traffic on
this interface. Lower-numbered access groups are ordered before higher-numbered access groups.
e) In the Group IP field enter the multicast IP address whose traffic is to be allowed or blocked for this
access group.
f) In the Source IP field, enter the IP address of the source if applicable.
g) In the Action field, choose Deny to deny access for the target group or Permit to allow access for the
target group.
h) Click OK.
i) Click Submit.
When the configuration is complete, the configured IGMP snoop access group is assigned a multicast IP
address through the target static port and permitted or denied access to the multicast streams that are received
at that address.
Note • You can repeat this step to configure and associate additional access groups with multicast IP
addresses through the target static port.
• To review the settings for the configured access groups, navigate to the following location: Tenant >
tenant_name > Networking > Protocol Policies > Route Maps >
route_map_access_group_name.
Enabling Group Access to IGMP Snooping and Multicast using the NX-OS
Style CLI
After you have enabled IGMP snooping and multicast on ports that have been statically assigned to an EPG,
you can then create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
The steps described in this task assume the pre-configuration of the following entities:
• Tenant: tenant_A
• Application: application_A
• EPG: epg_A
• Bridge Domain: bridge_domain_A
• vrf: vrf_A -- a member of bridge_domain_A
• VLAN Domain: vd_A (configured with a range of 300-310)
• Leaf switch: 101 and interface 1/10
The target interface 1/10 on switch 101 is associated with VLAN 305 and statically linked with tenant_A,
application_A, epg_A
• Leaf switch: 101 and interface 1/11
The target interface 1/11 on switch 101 is associated with VLAN 309 and statically linked with tenant_A,
application_A, epg_A
Note For details on static port assignment, see Deploying an EPG on a Specific Port with APIC Using the NX-OS
Style CLI in the Cisco APIC Layer 2 Networking Configuration Guide.
Procedure
Step 3 Specify the access group connection path. The example sequences configure:
Example: • Route-map-access group "foobroker"
apic1(config-tenant)# application connected through leaf switch 101,
application_A interface 1/10, and VLAN 305.
apic1(config-tenant-app)# epg epg_A
apic1(config-tenant-app-epg)# ip igmp • Route-map-access group "newbroker"
snooping access-group route-map fooBroker connected through leaf switch 101,
leaf 101 interface ethernet 1/10 vlan
305
interface 1/10, and VLAN 305.
Enabling Group Access to IGMP Snooping and Multicast using the REST API
After you have enabled IGMP snooping and multicast on ports that have been statically assigned to an EPG,
you can then create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
Procedure
To define the access group, F23broker, send a post with XML such as in the following example.
The example configures access group F23broker, associated with tenant_A, Rmap_A, application_A, epg_A,
on leaf 102, interface 1/10, VLAN 202. By association with Rmap_A, the access group F23broker has access
to multicast traffic received at multicast address 226.1.1.1/24 and is denied access to traffic received at multicast
address 227.1.1.1/24.
Example:
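The payload below is a minimal sketch rather than a verbatim example: it assumes that the route map is defined with pimRouteMapPol and pimRouteMapEntry objects and that the static path references it through an igmpSnoopAccessGroup child object. Verify these class and attribute names against the object model of your APIC release before using them.
https://apic-ip-address/api/node/mo/uni/.xml
<fvTenant name="tenant_A">
    <!-- Route map entries (assumed classes): permit 226.1.1.1/24, deny 227.1.1.1/24 -->
    <pimRouteMapPol name="Rmap_A">
        <pimRouteMapEntry action="permit" grp="226.1.1.1/24" order="10"/>
        <pimRouteMapEntry action="deny" grp="227.1.1.1/24" order="20"/>
    </pimRouteMapPol>
    <fvAp name="application_A">
        <fvAEPg name="epg_A">
            <fvRsPathAtt encap="vlan-202" instrImedcy="immediate" mode="regular"
                tDn="topology/pod-1/paths-102/pathep-[eth1/10]">
                <!-- IGMP snooping access group case (assumed class and attribute) -->
                <igmpSnoopAccessGroup routeMap="Rmap_A"/>
            </fvRsPathAtt>
        </fvAEPg>
    </fvAp>
</fvTenant>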
About HSRP
HSRP is a first-hop redundancy protocol (FHRP) that allows a transparent failover of the first-hop IP router.
HSRP provides first-hop routing redundancy for IP hosts on Ethernet networks configured with a default
router IP address. You use HSRP in a group of routers for selecting an active router and a standby router. In
a group of routers, the active router is the router that routes packets, and the standby router is the router that
takes over when the active router fails or when preset conditions are met.
Many host implementations do not support any dynamic router discovery mechanisms but can be configured
with a default router. Running a dynamic router discovery mechanism on every host is not practical for many
reasons, including administrative overhead, processing overhead, and security issues. HSRP provides failover
services to such hosts.
When you use HSRP, you configure the HSRP virtual IP address as the default router of the host (instead of
the IP address of the actual router). The virtual IP address is an IPv4 or IPv6 address that is shared among a
group of routers that run HSRP.
When you configure HSRP on a network segment, you provide a virtual MAC address and a virtual IP address
for the HSRP group. You configure the same virtual address on each HSRP-enabled interface in the group.
You also configure a unique IP address and MAC address on each interface that acts as the real address. HSRP
selects one of these interfaces to be the active router. The active router receives and routes packets destined
for the virtual MAC address of the group.
HSRP detects when the designated active router fails. At that point, a selected standby router assumes control
of the virtual MAC and IP addresses of the HSRP group. HSRP also selects a new standby router at that time.
HSRP uses a priority designator to determine which HSRP-configured interface becomes the default active
router. To configure an interface as the active router, you assign it with a priority that is higher than the priority
of all the other HSRP-configured interfaces in the group. The default priority is 100, so if you configure just
one interface with a higher priority, that interface becomes the default active router.
Interfaces that run HSRP send and receive multicast User Datagram Protocol (UDP)-based hello messages
to detect a failure and to designate active and standby routers. When the active router fails to send a hello
message within a configurable period of time, the standby router with the highest priority becomes the active
router. The transition of packet forwarding functions between the active and standby router is completely
transparent to all hosts on the network.
You can configure multiple HSRP groups on an interface. The virtual router does not physically exist but
represents the common default router for interfaces that are configured to provide backup to each other. You
do not need to configure the hosts on the LAN with the IP address of the active router. Instead, you configure
them with the IP address of the virtual router (virtual IP address) as their default router. If the active router
fails to send a hello message within the configurable period of time, the standby router takes over, responds
to the virtual addresses, and becomes the active router, assuming the active router duties. From the host
perspective, the virtual router remains the same.
Note Packets received on a routed port destined for the HSRP virtual IP address terminate on the local router,
regardless of whether that router is the active HSRP router or the standby HSRP router. This process includes
ping and Telnet traffic. Packets received on a Layer 2 (VLAN) interface destined for the HSRP virtual IP
address terminate on the active router.
HSRP Versions
Cisco APIC supports HSRP version 1 by default. You can configure an interface to use HSRP version 2.
HSRP version 2 has the following enhancements to HSRP version 1:
• Expands the group number range. HSRP version 1 supports group numbers from 0 to 255. HSRP version
2 supports group numbers from 0 to 4095.
• Uses the IPv4 multicast address 224.0.0.102 or the IPv6 multicast address FF02::66 to send hello
packets instead of the multicast address 224.0.0.2, which is used by HSRP version 1.
• Uses the MAC address range from 0000.0C9F.F000 to 0000.0C9F.FFFF for IPv4 and 0005.73A0.0000
through 0005.73A0.0FFF for IPv6 addresses. HSRP version 1 uses the MAC address range
0000.0C07.AC00 to 0000.0C07.ACFF.
Guidelines and Limitations
• Currently, only one IPv4 and one IPv6 group is supported on the same sub-interface in Cisco ACI. Even
when dual stack is configured, the virtual MAC must be the same in the IPv4 and IPv6 HSRP configurations.
• BFD IPv4 and IPv6 is supported when the network connecting the HSRP peers is a pure layer 2 network.
You must configure a different router MAC address on the leaf switches. The BFD sessions become
active only if you configure different MAC addresses in the leaf interfaces.
• Users must configure the same MAC address for IPv4 and IPv6 HSRP groups for dual stack configurations.
• HSRP VIP must be in the same subnet as the interface IP.
• It is recommended that you configure interface delay for HSRP configurations.
• HSRP is only supported on routed-interface or sub-interface. HSRP is not supported on VLAN interfaces
and switched virtual interface (SVI). Therefore, no VPC support for HSRP is available.
• Object tracking on HSRP is not supported.
• HSRP Management Information Base (MIB) for SNMP is not supported.
• Multiple group optimization (MGO) is not supported with HSRP.
• ICMP IPv4 and IPv6 redirects are not supported.
• Cold Standby and Non-Stop Forwarding (NSF) are not supported because HSRP cannot be restarted in
the Cisco ACI environment.
• There is no extended hold-down timer support as HSRP is supported only on leaf switches. HSRP is not
supported on spine switches.
• HSRP version change is not supported in APIC. You must remove the configuration and reconfigure
with the new version.
• HSRP version 2 does not inter-operate with HSRP version 1. An interface cannot operate both version
1 and version 2 because both versions are mutually exclusive. However, the different versions can be
run on different physical interfaces of the same router.
• Route Segmentation is programmed in Cisco Nexus 93128TX, Cisco Nexus 9396PX, and Cisco Nexus
9396TX leaf switches when HSRP is active on the interface. Therefore, there is no DMAC=router MAC
check conducted for route packets on the interface. This limitation does not apply for Cisco Nexus
93180LC-EX, Cisco Nexus 93180YC-EX, and Cisco Nexus 93108TC-EX leaf switches.
• HSRP configurations are not supported in the Basic GUI mode. The Basic GUI mode has been deprecated
starting with APIC release 3.0(1).
• Fabric to Layer 3 Out traffic will always load balance across all the HSRP leaf switches, irrespective of
their state. If HSRP leaf switches span multiple pods, the fabric to out traffic will always use leaf switches
in the same pod.
• This limitation applies to some of the earlier Cisco Nexus 93128TX, Cisco Nexus 9396PX, and Cisco
Nexus 9396TX switches. When using HSRP, the MAC address for one of the routed interfaces or routed
sub-interfaces must be modified to prevent MAC address flapping on the Layer 2 external device. This
is because Cisco APIC assigns the same MAC address (00:22:BD:F8:19:FF) to every logical interface
under the interface logical profiles. (See the sketch following this list for one way to override the MAC address on an interface path.)
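The fragment below is a hedged sketch of that MAC override, reusing the l3extRsPathL3OutAtt mac attribute shown elsewhere in this guide; the path DN, IP address, and MAC value are illustrative only and must be adapted to your interface profile:
<l3extRsPathL3OutAtt addr="10.1.1.2/24" ifInstT="l3-port"
    mac="00:22:BD:F8:19:FE"
    tDn="topology/pod-1/paths-101/pathep-[eth1/17]"/>
Changing the MAC address on only one of the two HSRP routed interfaces is sufficient to keep the Layer 2 external device from learning the same source MAC on two ports.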
Default HSRP Settings
Parameter        Default
Version          Version 1
Delay            0
Reload Delay     0
Group ID         0
Group Af         IPv4
Priority         100
Preempt Delay    0
Procedure
Step 1 On the menu bar, click Tenants > Tenant_name. In the Navigation pane, click Networking > External
Routed Networks > External Routed Network_name > Logical Node Profiles > Logical Interface Profile.
An HSRP interface profile will be created here.
Step 2 Choose a logical interface profile, and click Create HSRP Interface Profile.
Step 3 In the Create HSRP Interface Profile dialog box, perform the following actions:
a) In the Version field, choose the desired version.
b) In the HSRP Interface Policy field, from the drop-down, choose Create HSRP Interface Policy.
c) In the Create HSRP Interface Policy dialog box, in the Name field, enter a name for the policy.
d) In the Control field, choose the desired control.
e) In the Delay field and the Reload Delay field, set the desired values. Click Submit.
The HSRP interface policy is created and associated with the interface profile.
Step 4 In the Create HSRP Interface Profile dialog box, expand HSRP Interface Groups.
Step 5 In the Create HSRP Group Profile dialog box, perform the following actions:
a) In the Name field, enter an HSRP interface group name.
b) In the Group ID field, choose the appropriate ID.
The values available depend upon whether HSRP version 1 or version 2 was chosen in the interface profile.
c) In the IP field, enter an IP address.
The IP address must be in the same subnet as the interface.
d) In the MAC address field, enter a MAC address.
e) In the Group Name field, enter a group name.
This is the name used in the protocol by HSRP for the HSRP MGO feature.
f) In the Group Type field, choose the desired type.
g) In the IP Obtain Mode field, choose the desired mode.
h) In the HSRP Group Policy field, from the drop-down list, choose Create HSRP Group Policy.
Step 6 In the Create HSRP Group Policy dialog box, perform the following actions:
a) In the Name field, enter an HSRP group policy name.
b) The Key or Password field is automatically populated.
The default value for authentication type is simple, and the key is "cisco." This is selected by default when
a user creates a new policy.
c) In the Type field, choose the level of security desired.
d) In the Priority field choose the priority to define the active router and the standby router.
e) In the remaining fields, choose the desired values, and click Submit.
The HSRP group policy is created.
f) Create secondary virtual IPs by populating the Secondary Virtual IPs field.
This can be used to enable HSRP on each sub-interface with secondary virtual IPs. The IP address that
you provide here also must be in the subnet of the interface.
g) Click OK.
Step 7 In the Create HSRP Interface Profile dialog box, click Submit.
This completes the HSRP configuration.
Step 8 To verify the HSRP interface and group policies created, in the Navigation pane, click Networking > Protocol
Policies > HSRP.
Before you begin
• The interface profile for the leaf switches must be configured as required.
Procedure
<polUni>
<fvTenant name="t9" dn="uni/tn-t9" descr="">
<hsrpIfPol name="hsrpIfPol" ctrl="bfd" delay="4" reloadDelay="11"/>
</fvTenant>
</polUni>
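The hsrpIfPol policy above takes effect only when an HSRP interface profile under an L3Out logical interface profile references it. The fragment below is a hedged sketch of that relationship; the class names hsrpIfP, hsrpRsIfPol, hsrpGroupP, and hsrpRsGroupPol, as well as all names and addresses, are assumptions for illustration and should be verified against your APIC object model:
<l3extLIfP name="ifp1">
    <l3extRsPathL3OutAtt addr="10.1.1.1/24" ifInstT="l3-port"
        tDn="topology/pod-1/paths-101/pathep-[eth1/17]"/>
    <!-- HSRP interface profile referencing the hsrpIfPol policy defined above (assumed classes) -->
    <hsrpIfP name="hsrp1" version="v1">
        <hsrpRsIfPol tnHsrpIfPolName="hsrpIfPol"/>
        <hsrpGroupP name="group1" groupId="1" groupAf="ipv4" ip="10.1.1.254" ipObtainMode="admin">
            <hsrpRsGroupPol tnHsrpGroupPolName="default"/>
        </hsrpGroupP>
    </hsrpIfP>
</l3extLIfP>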
Cisco ACI GOLF
All tenant WAN connections use a single session on the spine switches where the WAN routers are connected.
This aggregation of tenant BGP sessions towards the Data Center Interconnect Gateway (DCIG) improves
control plane scale by reducing the number of tenant BGP sessions and the amount of configuration required
for all of them. The network is extended out using Layer 3 subinterfaces configured on spine fabric ports.
Transit routing with shared services using GOLF is not supported.
A Layer 3 external outside network (L3extOut) for GOLF physical connectivity for a spine switch is specified
under the infra tenant, and includes the following:
• LNodeP (l3extInstP is not required within the L3Out in the infra tenant.)
• A provider label for the L3extOut for GOLF in the infra tenant.
• OSPF protocol policies
• BGP protocol policies
All regular tenants use the above-defined physical connectivity. The L3extOut defined in regular tenants
requires the following:
• An l3extInstP (EPG) with subnets and contracts. The scope of the subnet is used to control import/export
route control and security policies. The bridge domain subnet must be set to advertise externally and it
must be in the same VRF as the application EPG and the GOLF L3Out EPG.
• Communication between the application EPG and the GOLF L3Out EPG is governed by explicit contracts
(not Contract Preferred Groups).
• An l3extConsLbl consumer label that must be matched with the same provider label of an L3Out for
GOLF in the infra tenant. Label matching enables application EPGs in other tenants to consume the
LNodeP external L3Out EPG.
• The BGP EVPN session in the matching provider L3extOut in the infra tenant advertises the tenant
routes defined in this L3Out.
Note ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections
to external routers, or multipod connections through an Inter-Pod Network (IPN), it is critical that the MTU
is set appropriately on both sides. On some platforms, such as ACI, Cisco NX-OS, and Cisco IOS, the
configurable MTU value takes into account the IP headers (resulting in a max packet size to be set as 9216
bytes for ACI and 9000 for NX-OS and IOS). However, other platforms such as IOS-XR configure the MTU
value exclusive of packet headers (resulting in a max packet size of 8986 bytes).
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Route Target Configuration between the Spine Switches and the DCI
There are two ways to configure EVPN route targets (RTs) for the GOLF VRFs: Manual RT and Auto RT.
The route target is synchronized between ACI spines and DCIs through OpFlex. Auto RT for GOLF VRFs
has the Fabric ID embedded in the format: ASN:[FabricID]VNID
If two sites have VRFs deployed as in the following diagram, traffic between the VRFs can be mixed.
[Figure: GOLF VRF deployment across Site 1 and Site 2]
To avoid this happening on the DCI, route maps are used with different BGP communities on the inbound
and outbound peer policies.
When routes are received from the GOLF spine at one site, the outbound peer policy towards the GOLF spine
at another site filters the routes based on the community in the inbound peer policy. A different outbound peer
policy strips off the community towards the WAN. All the route-maps are at peer level.
Procedure
Step 2 Configure the outbound peer policy to filter routes based on the community in the inbound peer policy.
Example:
ip community-list standard test-com permit 1:1
Step 3 Configure the outbound peer policy to filter the community towards the WAN.
Example:
ip community-list standard test-com permit 1:1
update-source loopback0
send-community both
route-map multi-site-in in
send-community both
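The outbound policies above match a community that an inbound peer policy attaches when routes arrive from the GOLF spine at the other site. A hedged sketch of such an inbound route map is shown below; the route-map name and community value are illustrative:
route-map multi-site-in permit 10
  set community 1:1 additive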
Procedure
Step 1 On the menu bar, click Tenants, then click infra to select the infra tenant.
Step 2 In the Navigation pane, expand the Networking option and perform the following actions:
a) Right-click External Routed Networks and click Create Routed Outside for EVPN to open the
wizard.
b) In the Name field, enter a name for the policy.
c) In the Route Target field, choose whether to use automatic or explicit policy-governed BGP route
target filtering policy:
• Automatic - Implements automatic BGP route-target filtering on VRFs associated with this routed
outside configuration.
• Explicit - Implements route-target filtering through use of explicitly configured BGP route-target
policies on VRFs associated with this routed outside configuration.
Note Explicit route target policies are configured in the BGP Route Target Profiles table on
the BGP Page of the Create VRF Wizard. If you select the Automatic option in the Route
Target field, configuring explicit route target policies in the Create VRF Wizard might
cause BGP routing disruptions.
d) Complete the configuration options according to the requirements of your Layer 3 connection.
Note In the protocol check boxes area, ensure that both BGP and OSPF are checked. GOLF requires
both BGP and OSPF.
e) Click Next to display the Nodes and Interfaces Protocol Profile tab.
f) In the Define Routed Outside section, in the Name field, enter a name.
g) In the Spines table, click + to add a node entry.
h) In the Node ID drop-down list, choose a spine switch node ID.
i) In the Router ID field, enter the router ID.
j) In the Loopback Addresses, in the IP field, enter the IP address. Click Update.
k) In the OSPF Profile for Sub-interfaces/Routed Sub-Interfaces section, in the Name field, enter the name
of the OSPF profile for sub-interfaces.
l) Click OK.
Note The wizard creates a Logical Node Profile > Configured Nodes > Node Association profile
that sets the Extend Control Peering field to enabled.
Step 3 In the infra > Networking > External Routed Networks section of the Navigation pane, click to select the
Golf policy just created. Enter a Provider Label, (for example, golf) and click Submit.
Step 4 In the Navigation pane for any tenant, expand the tenant_name > Networking and perform the following
actions:
a) Right-click External Routed Networks and click Create Routed Outside to open the wizard.
b) In the Identity dialog box, in the Name field, enter a name for the policy.
c) Complete the configuration options according to the requirements of your Layer 3 connection.
Note In the protocol check boxes area, ensure that both BGP and OSPF are checked. GOLF requires
both BGP and OSPF.
d) Assign a Consumer Label. In this example, use golf (which was just created above).
e) Click Next.
f) Configure the External EPG Networks dialog box, and click Finish to deploy the policy.
Cisco ACI GOLF Configuration Example, Using the NX-OS Style CLI
These examples show the CLI commands to configure GOLF Services, which uses the BGP EVPN protocol
over OSPF for WAN routers that are connected to spine switches.
configure
vlan-domain evpn-dom dynamic
exit
spine 111
# Configure Tenant Infra VRF overlay-1 on the spine.
vrf context tenant infra vrf overlay-1
router-id 10.10.3.3
exit
configure
spine 111
router bgp 100
vrf member tenant infra vrf overlay-1
neighbor 10.10.4.1 evpn
label golf_aci
update-source loopback 10.10.4.3
remote-as 100
exit
neighbor 10.10.5.1 evpn
label golf_aci2
update-source loopback 10.10.5.3
remote-as 100
exit
exit
exit
configure
tenant sky
vrf context vrf_sky
exit
bridge-domain bd_sky
vrf member vrf_sky
exit
interface bridge-domain bd_sky
ip address 59.10.1.1/24
exit
bridge-domain bd_sky2
vrf member vrf_sky
exit
interface bridge-domain bd_sky2
ip address 59.11.1.1/24
exit
exit
Configuring the BGP EVPN Route Target, Route Map, and Prefix EPG for the Tenant
The following example shows how to configure a route map to advertise bridge-domain subnets through BGP
EVPN.
configure
spine 111
vrf context tenant sky vrf vrf_sky
address-family ipv4 unicast
route-target export 100:1
route-target import 100:1
exit
route-map rmap
ip prefix-list p1 permit 11.10.10.0/24
match bridge-domain bd_sky
exit
match prefix-list p1
exit
route-map rmap2
match bridge-domain bd_sky
exit
match prefix-list p1
exit
exit
Step 1 The following example shows how to deploy nodes and spine switch interfaces for GOLF, using the REST
API:
Example:
POST
https://192.0.20.123/api/mo/uni/golf.xml
Step 2 The XML below configures the spine switch interfaces and infra tenant provider of the GOLF service. Include
this XML structure in the body of the POST message.
Example:
<l3extOut descr="" dn="uni/tn-infra/out-golf" enforceRtctrl="export,import"
name="golf"
ownerKey="" ownerTag="" targetDscp="unspecified">
<l3extRsEctx tnFvCtxName="overlay-1"/>
<l3extProvLbl descr="" name="golf"
ownerKey="" ownerTag="" tag="yellow-green"/>
<l3extLNodeP configIssues="" descr=""
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="7.2.2.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="1500"
tDn="topology/pod-1/paths-112/pathep-[eth1/12]"
targetDscp="unspecified"/>
</l3extLIfP>
<l3extLIfP descr="" name="portIf-spine1-2"
ownerKey="" ownerTag="" tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
</ospfIfP>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="9.0.0.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="9000"
tDn="topology/pod-1/paths-111/pathep-[eth1/11]"
targetDscp="unspecified"/>
</l3extLIfP>
<l3extLIfP descr="" name="portIf-spine1-1"
ownerKey="" ownerTag="" tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
</ospfIfP>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="7.0.0.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="1500"
tDn="topology/pod-1/paths-111/pathep-[eth1/10]"
targetDscp="unspecified"/>
</l3extLIfP>
<bgpInfraPeerP addr="10.10.3.2"
allowedSelfAsCnt="3"
ctrl="send-com,send-ext-com"
descr="" name="" peerCtrl=""
peerT="wan"
privateASctrl="" ttl="2" weight="0">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="150" descr="" name="aspn"/>
</bgpInfraPeerP>
<bgpInfraPeerP addr="10.10.4.1"
allowedSelfAsCnt="3"
ctrl="send-com,send-ext-com" descr="" name="" peerCtrl=""
peerT="wan"
privateASctrl="" ttl="1" weight="0">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="100" descr="" name=""/>
</bgpInfraPeerP>
<bgpInfraPeerP addr="10.10.3.1"
allowedSelfAsCnt="3"
ctrl="send-com,send-ext-com" descr="" name="" peerCtrl=""
peerT="wan"
privateASctrl="" ttl="1" weight="0">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="100" descr="" name=""/>
</bgpInfraPeerP>
</l3extLNodeP>
<bgpRtTargetInstrP descr="" name="" ownerKey="" ownerTag="" rtTargetT="explicit"/>
<l3extRsL3DomAtt tDn="uni/l3dom-l3dom"/>
<l3extInstP descr="" matchT="AtleastOne" name="golfInstP"
prio="unspecified"
targetDscp="unspecified">
<fvRsCustQosPol tnQosCustomPolName=""/>
</l3extInstP>
<bgpExtP descr=""/>
<ospfExtP areaCost="1"
areaCtrl="redistribute,summary"
areaId="0.0.0.1"
areaType="regular" descr=""/>
</l3extOut>
Step 3 The XML below configures the tenant consumer of the infra part of the GOLF service. Include this XML
structure in the body of the POST message.
Example:
<fvTenant descr="" dn="uni/tn-pep6" name="pep6" ownerKey="" ownerTag="">
<vzBrCP descr="" name="webCtrct"
ownerKey="" ownerTag="" prio="unspecified"
scope="global" targetDscp="unspecified">
<vzSubj consMatchT="AtleastOne" descr=""
name="http" prio="unspecified" provMatchT="AtleastOne"
revFltPorts="yes" targetDscp="unspecified">
<vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
</vzSubj>
</vzBrCP>
<vzBrCP descr="" name="webCtrct-pod2"
ownerKey="" ownerTag="" prio="unspecified"
scope="global" targetDscp="unspecified">
<vzSubj consMatchT="AtleastOne" descr=""
name="http" prio="unspecified"
provMatchT="AtleastOne" revFltPorts="yes"
targetDscp="unspecified">
<vzRsSubjFiltAtt directives=""
tnVzFilterName="default"/>
</vzSubj>
</vzBrCP>
<fvCtx descr="" knwMcastAct="permit"
name="ctx6" ownerKey="" ownerTag=""
pcEnfDir="ingress" pcEnfPref="enforced">
<bgpRtTargetP af="ipv6-ucast"
descr="" name="" ownerKey="" ownerTag="">
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="export"/>
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="import"/>
</bgpRtTargetP>
<bgpRtTargetP af="ipv4-ucast"
descr="" name="" ownerKey="" ownerTag="">
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="export"/>
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="import"/>
</bgpRtTargetP>
<fvRsCtxToExtRouteTagPol tnL3extRouteTagPolName=""/>
<fvRsBgpCtxPol tnBgpCtxPolName=""/>
<vzAny descr="" matchT="AtleastOne" name=""/>
<fvRsOspfCtxPol tnOspfCtxPolName=""/>
<fvRsCtxToEpRet tnFvEpRetPolName=""/>
<l3extGlobalCtxName descr="" name="dci-pep6"/>
</fvCtx>
<fvBD arpFlood="no" descr="" epMoveDetectMode=""
ipLearning="yes"
limitIpLearnToSubnets="no"
llAddr="::" mac="00:22:BD:F8:19:FF"
mcastAllow="no"
multiDstPktAct="bd-flood"
name="bd107" ownerKey="" ownerTag="" type="regular"
unicastRoute="yes"
unkMacUcastAct="proxy"
unkMcastAct="flood"
vmac="not-applicable">
<fvRsBDToNdP tnNdIfPolName=""/>
<fvRsBDToOut tnL3extOutName="routAccounting-pod2"/>
<fvRsCtx tnFvCtxName="ctx6"/>
<fvRsIgmpsn tnIgmpSnoopPolName=""/>
<fvSubnet ctrl="" descr="" ip="27.6.1.1/24"
name="" preferred="no"
scope="public"
virtual="no"/>
<fvSubnet ctrl="nd" descr="" ip="2001:27:6:1::1/64"
name="" preferred="no"
scope="public"
virtual="no">
<fvRsNdPfxPol tnNdPfxPolName=""/>
</fvSubnet>
<fvRsBdToEpRet resolveAct="resolve" tnFvEpRetPolName=""/>
</fvBD>
<fvBD arpFlood="no" descr="" epMoveDetectMode=""
ipLearning="yes"
limitIpLearnToSubnets="no"
llAddr="::" mac="00:22:BD:F8:19:FF"
mcastAllow="no"
multiDstPktAct="bd-flood"
name="bd103" ownerKey="" ownerTag="" type="regular"
unicastRoute="yes"
unkMacUcastAct="proxy"
unkMcastAct="flood"
vmac="not-applicable">
<fvRsBDToNdP tnNdIfPolName=""/>
<fvRsBDToOut tnL3extOutName="routAccounting"/>
<fvRsCtx tnFvCtxName="ctx6"/>
<fvRsIgmpsn tnIgmpSnoopPolName=""/>
<fvSubnet ctrl="" descr="" ip="23.6.1.1/24"
name="" preferred="no"
scope="public"
virtual="no"/>
<fvSubnet ctrl="nd" descr="" ip="2001:23:6:1::1/64"
name="" preferred="no"
scope="public" virtual="no">
<fvRsNdPfxPol tnNdPfxPolName=""/>
</fvSubnet>
scope="export-rtctrl,import-rtctrl,import-security"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
<fvRsProv matchT="AtleastOne"
prio="unspecified" tnVzBrCPName="webCtrct-pod2"/>
</l3extInstP>
<l3extConsLbl descr=""
name="golf2"
owner="infra"
ownerKey="" ownerTag="" tag="yellow-green"/>
</l3extOut>
<l3extOut descr=""
enforceRtctrl="export"
name="routAccounting"
ownerKey="" ownerTag="" targetDscp="unspecified">
<l3extRsEctx tnFvCtxName="ctx6"/>
<l3extInstP descr=""
matchT="AtleastOne"
name="accountingInst"
prio="unspecified" targetDscp="unspecified">
<l3extSubnet aggregate="export-rtctrl,import-rtctrl" descr=""
ip="0.0.0.0/0" name=""
scope="export-rtctrl,import-rtctrl,import-security"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
<fvRsProv matchT="AtleastOne" prio="unspecified" tnVzBrCPName="webCtrct"/>
</l3extInstP>
<l3extConsLbl descr=""
name="golf"
owner="infra"
ownerKey="" ownerTag="" tag="yellow-green"/>
</l3extOut>
</fvTenant>
Then, in order to leak the endpoint to BGP EVPN, a Fabric External Connection Policy must be
configured to provide the ETEP IP address. Otherwise, the host route will not leak to BGP EVPN.
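A hedged sketch of such a Fabric External Connection Policy follows. It reuses the fvFabricExtConnP, fvPodConnP, and fvIp structure that appears in the remote leaf example later in this guide; the name, route target, and ETEP addresses are illustrative only:
<fvFabricExtConnP dn="uni/tn-infra/fabricExtConnP-1" id="1" name="Fabric_Ext_Conn_Pol1"
    rt="extended:as2-nn4:5:16" siteId="0">
    <!-- ETEP address per pod (illustrative values) -->
    <fvPodConnP id="1">
        <fvIp addr="100.11.1.1/32"/>
    </fvPodConnP>
    <fvPodConnP id="2">
        <fvIp addr="200.11.1.1/32"/>
    </fvPodConnP>
    <fvPeeringP type="automatic_with_full_mesh"/>
</fvFabricExtConnP>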
Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the GUI
Enable distributing BGP EVPN type-2 host routes with the following steps:
Procedure
Enabling Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the NX-OS
Style CLI
Procedure
apic1(config-bgp-af)# exit
Enabling Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the REST
API
Enable distributing BGP EVPN type-2 host routes using the REST API, as follows:
Procedure
Step 1 Configure the Host Route Leak policy, with a POST containing XML such as in the following example:
Example:
<bgpCtxAfPol descr="" ctrl="host-rt-leak" name="bgpCtxPol_0" status=""/>
Step 2 Apply the policy to the VRF BGP Address Family Context Policy for one or both of the address families
using a POST containing XML such as in the following example:
Example:
<fvCtx name="vni-10001">
<fvRsCtxToBgpCtxAfPol af="ipv4-ucast" tnBgpCtxAfPolName="bgpCtxPol_0"/>
<fvRsCtxToBgpCtxAfPol af="ipv6-ucast" tnBgpCtxAfPolName="bgpCtxPol_0"/>
</fvCtx>
About Multipod
Multipod enables provisioning a more fault-tolerant fabric comprised of multiple pods with isolated control
plane protocols. Also, multipod provides more flexibility with regard to the full mesh cabling between leaf
and spine switches. For example, if leaf switches are spread across different floors or different buildings,
multipod enables provisioning multiple pods per floor or building and providing connectivity between pods
through spine switches.
Multipod uses MP-BGP EVPN as the control-plane communication protocol between the ACI spines in
different Pods.
WAN routers can be provisioned in the Inter-Pod Network (IPN), directly connected to spine switches, or
connected to border leaf switches. Spine switches connected to the IPN are connected to at least one leaf
switch in the pod.
Multipod uses a single APIC cluster for all the pods; all the pods act as a single fabric. Individual APIC
controllers are placed across the pods but they are all part of a single APIC cluster.
Multipod Provisioning
The IPN is not managed by the APIC. It must be preconfigured with the following information:
• Configure the interfaces connected to the spines of all pods. Use VLAN-4 or VLAN-5, an MTU of
9150, and the associated correct IP addresses. If remote leaf switches are included in any pods, use
VLAN-5 for the multipod interfaces/sub-interfaces.
• Enable OSPF on sub-interfaces with the correct area ID.
• Enable DHCP Relay on IPN interfaces connected to all spines.
• Enable PIM.
• Add bridge domain GIPO range as PIM Bidirectional (bidir) group range (default is 225.0.0.0/8).
A group in bidir mode has only shared tree forwarding capabilities.
• Add 239.255.255.240/28 as PIM bidir group range.
• Enable PIM and IGMP on the interfaces connected to all spines.
Note When deploying PIM bidir, at any given time it is only possible to have a single active RP (Rendezvous
Point) for a given multicast group range. RP redundancy is hence achieved by leveraging a Phantom RP
configuration. Because multicast source information is no longer available in Bidir, the Anycast or MSDP
mechanism used to provide redundancy in sparse-mode is not an option for bidir.
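A hedged sketch of a Phantom RP setup on an IPN router running Cisco NX-OS follows; it mirrors the IPN sample configuration shown later in this chapter. The VRF name, OSPF process, loopback number, RP address, and prefix lengths are illustrative. The RP address itself (12.1.1.1 here) is not configured on any device; each IPN router advertises the loopback subnet that contains it with a different prefix length, so the router advertising the longest prefix is preferred and the others provide backup:
vrf context fabric-mpod
  ip pim rp-address 12.1.1.1 group-list 225.0.0.0/8 bidir
  ip pim rp-address 12.1.1.1 group-list 239.255.255.240/28 bidir

interface loopback1
  vrf member fabric-mpod
  ip address 12.1.1.2/30
  ip router ospf a1 area 0.0.0.0
  ip pim sparse-mode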
• When downgrading to a release that does not support multipod, decommission all nodes from other pods
before performing the downgrade. Do not delete the TEP pool configuration. Note that with this
downgrade, all nodes in other pods will go down.
• Only OSPF regular area is supported under the infra Tenant.
• Up to APIC release 2.0(2), multipod is not supported with Cisco ACI GOLF. In APIC release 2.0 (2)
the two features are supported in the same fabric only over Cisco Nexus 9000 switches without “EX”
on the end of the switch name; for example, N9K-9312TX. Since the 2.1(1) release, the two features can
be deployed together over all switches used in the multipod topologies.
• In a multipod fabric, POD 1 must always exist, and the APIC TEP IP addresses must be from the POD
1 TEP pool.
• In a multipod fabric, if a spine in POD 1 uses the infra tenant L3extOut-1, the TORs of the other pods (
POD 2, POD 3) cannot use the same infra L3extOut (L3extOut-1) for Layer 3 EVPN control plane
connectivity. Each pod must use its own spine switch and infra L3extOut, because it is not supported to
use a pod as a transit for WAN connectivity of other pods.
• In a multipod fabric setup, if a new spine switch is added to a pod, it must first be connected to at least
one leaf switch in the pod. This enables the APIC to discover the spine switch and join it to the fabric.
• After a pod is created and nodes are added in the pod, deleting the pod results in stale entries from the
pod that are active in the fabric. This occurs because the APIC uses open source DHCP, which creates
some resources that the APIC cannot delete when a pod is deleted.
• Forward Error Correction (FEC) is enabled for all 100G transceivers by default. Do not use
QSFP-100G-LR4-S / QSFP-100G-LR4 transceivers for multipod configuration.
• The following is required when deploying a pair of Active/Standby Firewalls (FWs) across pods:
Scenario 1: Use of PBR to redirect traffic through the FW:
• Mandates the use of Service Graphs and enables connecting the FW inside/outside interfaces to the
ACI Fabric. This feature is fully supported from the 2.1(1) release.
• Flows from all the compute leaf nodes are always sent to the border leaf nodes connected to the
Active FW.
Scenario 2: Use of L3Out connections between the Fabric and the FW:
• Fully supported starting from 2.0(1) release.
• Only supported with dynamic routing (no static routing) and with Cisco ASA (not with FWs using
VRRP).
• Active FW only peers with the BL nodes in the local Pod. The leafs inject external routing information
into the fabric.
• Because dynamic peering sessions must be re-established in the new pod, longer traffic outages can
occur after FW failover.
• If you delete and recreate the Multipod L3out, for example to change the name of a policy, a clean reload
of some of the spine switches in the fabric must be performed. The deletion of the Multipod L3Out causes
one or more of the spine switches in the fabric to lose connectivity to the APICs and these spine switches
are unable to download the updated policy from the APIC. Which spine switches get into such a state
depends upon the deployed topology. To recover from this state, a clean reload must be performed on
these spine switches. The reload is performed using the setup-clean-config.sh command, followed by
the reload command on the spine switch.
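For reference, the recovery sequence on an affected spine switch looks like the following console commands (the prompt is illustrative); after the reload, the spine switch rejoins the fabric and downloads the current policy from the APIC:
spine1# setup-clean-config.sh
spine1# reload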
Note ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections
to external routers, or multipod connections through an Inter-Pod Network (IPN), it is critical that the MTU
is set appropriately on both sides. On some platforms, such as ACI, Cisco NX-OS, and Cisco IOS, the
configurable MTU value takes into account the IP headers (resulting in a max packet size to be set as 9216
bytes for ACI and 9000 for NX-OS and IOS). However, other platforms such as IOS-XR configure the MTU
value exclusive of packet headers (resulting in a max packet size of 8986 bytes).
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
You can set the global MTU for control plane (CP) packets sent by the nodes (APIC and the switches) in the
fabric at Fabric > Access Policies > Global Policies > CP MTU Policy.
In a multipod topology, the MTU set for the fabric external ports must be greater than or equal to the CP MTU
value set. Otherwise, the fabric external ports might drop the CP MTU packets.
If you change the IPN or CP MTU, Cisco recommends changing the CP MTU value first, then changing the
MTU value on the spine of the remote pod. This reduces the risk of losing connectivity between the pods due
to MTU mismatch.
To decommission a pod, decommission all the nodes in the pod. For instructions, see Decommissioning and
Recommissioning a Pod in Cisco APIC Troubleshooting Guide.
Note You can also use the GUI wizard to add a Cisco Application Centric Infrastructure (ACI) Virtual Pod (vPod)
as a remote extension of the Cisco ACI fabric. For information about Cisco ACI vPod, see the Cisco ACI
vPod documentation.
Procedure
Step 8 In the Configure Interpod Connectivity STEP 3 > Routing Protocols dialog box, in the BGP area, complete
the following steps:
What to do next
Take one of the following actions:
• You can proceed directly with adding a pod, continuing with the procedure Adding a Pod to Create a
Multipod Fabric, on page 268 in this guide.
• Close the Configure Interpod Connectivity dialog box and add the pod later, returning to the procedure
Adding a Pod to Create a Multipod Fabric, on page 268 in this guide.
Procedure
b) In the Pod TEP Pool field, enter the pool address and subnet.
The pod TEP pool represents a range of traffic encapsulation identifiers. It is a shared resource that can
be consumed by multiple domains.
c) With the Spine ID selector, choose the spine ID.
Choose more spine IDs by clicking the + (plus) icon.
d) In the Interfaces area, in the Interface field, enter the spine switch interface (slot and port) that is used
to connect to the interpod network (IPN).
e) In the IPv4 Address field, enter the IPv4 gateway address and network mask for the interface.
f) In the MTU (bytes) field, choose a value for the maximum transmit unit (MTU) of the external network.
You can configure another interface by clicking the + (plus) icon.
Step 7 In the Add Physical Pod STEP 3 > External TEP dialog box, complete the following steps:
a) Leave the Use Defaults check box checked or uncheck it to display the optional fields to configure an
external TEP pool.
b) Note the values in the Pod and Internal TEP Pool fields, which are already configured.
c) In the External TEP Pool field, enter the external TEP pool for the physical pod.
The external TEP pool must not overlap the internal TEP pool.
d) In the Dataplane TEP IP field, enter the address that is used to route traffic between pods.
e) (Optional) In the Unicast TEP IP field, enter the unicast TEP IP address.
Cisco APIC automatically configures the unicast TEP IP address when you enter the data plane TEP IP
address.
f) (Optional) Note the value in the nonconfigurable Node field.
g) (Optional) In the Router ID field, enter the IPN router IP address.
Cisco APIC automatically configures the router IP address when you enter the data plane TEP address.
h) In the Loopback Address field, enter the router loopback IP address.
Leave the Loopback Address blank if you use a router IP address.
i) Click Finish.
Procedure
Step 4 Configure the spine switch interface and OSPF configuration as in the following example:
Example:
# Command: show running-config spine
# Time: Mon Aug 1 21:34:41 2016
spine 201
vrf context tenant infra vrf overlay-1
router-id 201.201.201.201
exit
interface ethernet 1/1
exit
router ospf default
vrf member tenant infra vrf overlay-1
area 0.0.0.0 loopback 203.203.203.203
area 0.0.0.0 interpod peering
exit
exit
exit
spine 204
vrf context tenant infra vrf overlay-1
router-id 204.204.204.204
exit
interface ethernet 1/31
vlan-domain member l3Dom
exit
interface ethernet 1/31.4
vrf member tenant infra vrf overlay-1
ip address 204.1.1.1/30
ip router ospf default area 0.0.0.0
ip ospf cost 1
exit
router ospf default
vrf member tenant infra vrf overlay-1
area 0.0.0.0 loopback 204.204.204.204
area 0.0.0.0 interpod peering
exit
exit
exit
ifav4-ifc1#
<fabricSetupPol status=''>
<fabricSetupP podId="1" tepPool="10.0.0.0/16" />
<fabricSetupP podId="2" tepPool="10.1.0.0/16" status='' />
</fabricSetupPol>
http://<apic-name/ip>:80/api/node/mo/uni/controller.xml
<fabricNodeIdentPol>
<fabricNodeIdentP serial="SAL1819RXP4" name="ifav4-leaf1" nodeId="101" podId="1"/>
<fabricNodeIdentP serial="SAL1803L25H" name="ifav4-leaf2" nodeId="102" podId="1"/>
<fabricNodeIdentP serial="SAL1934MNY0" name="ifav4-leaf3" nodeId="103" podId="1"/>
<fabricNodeIdentP serial="SAL1934MNY3" name="ifav4-leaf4" nodeId="104" podId="1"/>
<fabricNodeIdentP serial="SAL1748H56D" name="ifav4-spine1" nodeId="201" podId="1"/>
<fabricNodeIdentP serial="SAL1938P7A6" name="ifav4-spine3" nodeId="202" podId="1"/>
<fabricNodeIdentP serial="SAL1938PHBB" name="ifav4-leaf5" nodeId="105" podId="2"/>
<fabricNodeIdentP serial="SAL1942R857" name="ifav4-leaf6" nodeId="106" podId="2"/>
<fabricNodeIdentP serial="SAL1931LA3B" name="ifav4-spine2" nodeId="203" podId="2"/>
<fabricNodeIdentP serial="FGE173400A9" name="ifav4-spine4" nodeId="204" podId="2"/>
</fabricNodeIdentPol>
<polUni>
<l3extLNodeP name="bSpine">
<l3extRsNodeL3OutAtt rtrId="201.201.201.201" rtrIdLoopBack="no"
tDn="topology/pod-1/node-201">
<l3extInfraNodeP descr="" fabricExtCtrlPeering="yes" name=""/>
<l3extLoopBackIfP addr="201::201/128" descr="" name=""/>
<l3extLoopBackIfP addr="201.201.201.201/32" descr="" name=""/>
</l3extRsNodeL3OutAtt>
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-1/paths-201/pathep-[eth1/1]"
encap='vlan-4' ifInstT='sub-interface' addr="201.1.1.1/30" />
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-1/paths-201/pathep-[eth1/2]"
encap='vlan-4' ifInstT='sub-interface' addr="201.2.1.1/30" />
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospfIfPol'/>
</ospfIfP>
</l3extLIfP>
</l3extLNodeP>
Sample configuration:
Example:
Sample IPN configuration for Cisco Nexus 9000 series switches:
=================================
feature dhcp
feature pim
system qos
service-policy type network-qos jumbo
service dhcp
ip dhcp relay
ip pim ssm range 232.0.0.0/8
interface Ethernet2/7
no switchport
mtu 9150
no shutdown
interface Ethernet2/7.4
description pod1-spine1
mtu 9150
encapsulation dot1q 4
vrf member fabric-mpod
ip address 201.1.2.2/30
ip router ospf a1 area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.0.0.1
ip dhcp relay address 10.0.0.2
ip dhcp relay address 10.0.0.3
no shutdown
interface Ethernet2/9
no switchport
mtu 9150
no shutdown
interface Ethernet2/9.4
description to pod2-spine1
mtu 9150
encapsulation dot1q 4
vrf member fabric-mpod
ip address 203.1.2.2/30
ip router ospf a1 area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.0.0.1
ip dhcp relay address 10.0.0.2
ip dhcp relay address 10.0.0.3
no shutdown
interface loopback29
vrf member fabric-mpod
ip address 12.1.1.1/32
router ospf a1
vrf fabric-mpod
router-id 29.29.29.29
Procedure
Step 4 Change the pod ID number to reflect the current pod of the APIC.
a) Log in to Cisco Integrated Management Controller (CIMC).
About Remote Leaf Switches in the ACI Fabric
The remote leaf switches are added to an existing pod in the fabric. All policies deployed in the main datacenter
are deployed in the remote switches, which behave like local leaf switches belonging to the pod. In this
topology, all unicast traffic is through VXLAN over Layer 3. Layer 2 Broadcast, Unknown Unicast, and
Multicast (BUM) messages are sent using Head End Replication (HER) tunnels without the use of Multicast.
All local traffic on the remote site is switched directly between endpoints, whether physical or virtual. Any
traffic that requires use of the spine switch proxy is forwarded to the main datacenter.
The APIC system discovers the remote leaf switches when they come up. From that time, they can be managed
through APIC, as part of the fabric.
Note • All inter-VRF traffic (pre-release 4.0(1)) goes to the spine switch before being forwarded.
• Before decommissioning a remote leaf, you must first delete the vPC.
Starting in release 4.0(1), Remote Leaf behavior takes on the following characteristics:
• Reduction of WAN bandwidth use by decoupling services from spine-proxy:
• PBR: For local PBR devices or PBR devices behind a vPC, local switching is used without going
to the spine proxy. For PBR devices on orphan ports on a peer remote leaf, a RL-vPC tunnel is used.
This is true whether or not the spine link to the main DC is functional.
• ERSPAN: For peer destination EPGs, a RL-vPC tunnel is used. EPGs on local orphan or vPC ports
use local switching to the destination EPG. This is true whether or not the spine link to the main DC
is functional.
• Shared Services: Packets do not use the spine-proxy path, reducing WAN bandwidth consumption.
• Inter-VRF traffic is forwarded via an upstream router and not placed on the spine.
• This enhancement is only applicable for a remote leaf vPC pair. For communication across remote
leaf pairs, a spine proxy is still used.
• Resolution of unknown L3 endpoints (through ToR glean process) in a remote leaf site when spine-proxy
is not reachable.
You can configure Remote Leaf in the APIC GUI, either with or without a wizard, or use the REST API or
the NX-OS style CLI.
Note In Cisco APIC Release 4.0(x), the following features are supported that were not supported in previous releases:
• Q-in-Q Encapsulation Mapping for EPGs
• PBR Tracking on remote leaf switches (with system-level global GIPo enabled)
• PBR Resilient Hashing
• Netflow
• MacSec Encryption
• Troubleshooting Wizard
• Atomic counters
Stretching of L3out SVI between local leaf switches (ACI main data center switches) and remote leaf switches
is not supported.
The following deployments and configurations are not supported with the remote leaf switch feature:
• Spanning Tree Protocol across remote leaf site and main data center
• APIC controllers directly connected to remote leaf switches
• Orphan port-channel or physical ports on remote leaf switches, with a vPC domain (this restriction applies
for releases 3.1 and earlier)
• With and without service node integration, local traffic forwarding within a remote location is only
supported if the consumer, provider, and services nodes are all connected to remote leaf switches in
vPC mode.
Full fabric and tenant policies are supported on remote leaf switches, in this release, except for the following
features:
• ACI Multi-Site
• Layer 2 Outside Connections (except Static EPGs)
• 802.1Q Tunnels
• Copy services with vzAny contract
• FCoE connections on remote leaf switches
• Flood in encapsulation for bridge domains or EPGs
• Fast Link Failover policies
• Managed Service Graph-attached devices at remote locations
• Traffic Storm Control
• Cloud Sec Encryption
Bandwidth in the WAN must be a minimum of 100 Mbps, and the maximum supported latency is 300 msec.
• It is recommended, but not required, to connect the pair of remote leaf switches with a vPC. The switches
on both ends of the vPC must be remote leaf switches at the same remote datacenter.
• Configure the northbound interfaces as Layer 3 sub-interfaces on VLAN-4, with unique IP addresses.
If you connect more than one interface from the remote leaf switch to the router, configure each interface
with a unique IP address.
• Enable OSPF on the interfaces.
• The IP addresses in the remote leaf switch TEP Pool subnet must not overlap with the pod TEP subnet
pool. The subnet used must be /24 or lower.
• Multipod is supported, but not required, with the Remote Leaf feature.
• When connecting a pod in a single-pod fabric with remote leaf switches, configure an L3Out from a
spine switch to the WAN router and an L3Out from a remote leaf switch to the WAN router, both using
VLAN-4 on the switch interfaces.
• When connecting a pod in a multipod fabric with remote leaf switches, configure an L3Out from a spine
switch to the WAN router and an L3Out from a remote leaf switch to the WAN router, both using VLAN-4
on the switch interfaces. Also configure a multipod-internal L3Out using VLAN-5 to support traffic that
crosses pods destined to a remote leaf switch. The regular multipod and multipod-internal connections
can be configured on the same physical interfaces, as long as they use VLAN-4 and VLAN-5.
• When configuring the Multipod-internal L3Out, use the same router ID as for the regular multipod L3Out,
but deselect the Use Router ID as Loopback Address option for the router-id and configure a different
loopback IP address. This enables ECMP to function.
Procedure
Step 1 To define the TEP pool for two remote leaf switches to be connected to a pod, send a post with XML such as
the following example:
Example:
<fabricSetupPol>
<fabricSetupP tepPool="10.0.0.0/16" podId="1" >
<fabricExtSetupP tepPool="30.0.128.0/20" extPoolId="1"/>
</fabricSetupP>
<fabricSetupP tepPool="10.1.0.0/16" podId="2" >
<fabricExtSetupP tepPool="30.1.128.0/20" extPoolId="1"/>
</fabricSetupP>
</fabricSetupPol>
Step 2 To define the node identity policy, send a post with XML, such as the following example:
Example:
<fabricNodeIdentPol>
<fabricNodeIdentP serial="SAL17267Z7W" name="leaf1" nodeId="101" podId="1"
extPoolId="1" nodeType="remote-leaf-wan"/>
<fabricNodeIdentP serial="SAL27267Z7W" name="leaf2" nodeId="102" podId="1"
extPoolId="1" nodeType="remote-leaf-wan"/>
Step 3 To configure the Fabric External Connection Profile, send a post with XML such as the following example:
Example:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
<fvFabricExtConnP dn="uni/tn-infra/fabricExtConnP-1" id="1" name="Fabric_Ext_Conn_Pol1"
rt="extended:as2-nn4:5:16" siteId="0">
<l3extFabricExtRoutingP name="test">
<l3extSubnet ip="150.1.0.0/16" scope="import-security"/>
</l3extFabricExtRoutingP>
<l3extFabricExtRoutingP name="ext_routing_prof_1">
<l3extSubnet ip="204.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="209.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="202.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="207.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="200.0.0.0/8" scope="import-security"/>
<l3extSubnet ip="201.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="210.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="209.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="203.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="208.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="207.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="100.0.0.0/8" scope="import-security"/>
<l3extSubnet ip="201.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="210.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="203.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="208.2.0.0/16" scope="import-security"/>
</l3extFabricExtRoutingP>
<fvPodConnP id="1">
<fvIp addr="100.11.1.1/32"/>
</fvPodConnP>
<fvPodConnP id="2">
<fvIp addr="200.11.1.1/32"/>
</fvPodConnP>
<fvPeeringP type="automatic_with_full_mesh"/>
</fvFabricExtConnP>
</imdata>
Step 4 To configure an L3Out on VLAN-4, that is required for both the remote leaf switches and the spine switch
connected to the WAN router, enter XML such as the following example:
Example:
<?xml version="1.0" encoding="UTF-8"?>
<polUni>
<l3extLoopBackIfP addr="102.102.102.112"/>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="portIf">
<ospfIfP authKeyId="1" authType="none">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol" />
</ospfIfP>
<l3extRsPathL3OutAtt addr="10.0.254.233/30" encap="vlan-5" ifInstT="sub-interface"
tDn="topology/pod-2/paths-202/pathep-[eth5/2]"/>
<l3extRsPathL3OutAtt addr="10.0.255.229/30" encap="vlan-5" ifInstT="sub-interface"
tDn="topology/pod-1/paths-102/pathep-[eth5/2]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP matchT="AtleastOne" name="ipnInstP" />
</l3extOut>
</fvTenant>
</polUni>
Step 5 To configure the multipod L3Out on VLAN-5, that is required for both multipod and the remote leaf topology,
send a post such as the following example:
Example:
<?xml version="1.0" encoding="UTF-8"?>
<polUni>
<fvTenant name="infra">
<l3extOut name="rleaf-wan-test">
<ospfExtP areaId='57' multipodinternal='yes'/>
<bgpExtP/>
<l3extRsEctx tnFvCtxName="overlay-1"/>
<l3extRsL3DomAtt tDn="uni/l3dom-l3extDom1"/>
<l3extProvLbl descr="" name="prov_mp1" ownerKey="" ownerTag="" tag="yellow-green"/>
<l3extLNodeP name="rleaf-101">
<l3extRsNodeL3OutAtt rtrId="202.202.202.202" tDn="topology/pod-1/node-101">
</l3extRsNodeL3OutAtt>
<l3extLIfP name="portIf">
<l3extRsPathL3OutAtt ifInstT="sub-interface"
tDn="topology/pod-1/paths-101/pathep-[eth1/49]" addr="202.1.1.2/30" mac="AA:11:22:33:44:66"
encap='vlan-4'/>
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospfIfPol'/>
</ospfIfP>
</l3extLIfP>
</l3extLNodeP>
<l3extLNodeP name="rlSpine-201">
<l3extRsNodeL3OutAtt rtrId="201.201.201.201" rtrIdLoopBack="no"
tDn="topology/pod-1/node-201">
<!--
<l3extLoopBackIfP addr="201::201/128" descr="" name=""/>
<l3extLoopBackIfP addr="201.201.201.201/32" descr="" name=""/>
-->
<l3extLoopBackIfP addr="::" />
</l3extRsNodeL3OutAtt>
<l3extLIfP name="portIf">
<l3extRsPathL3OutAtt ifInstT="sub-interface"
tDn="topology/pod-1/paths-201/pathep-[eth8/36]" addr="201.1.1.1/30" mac="00:11:22:33:77:55"
encap='vlan-4'/>
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospfIfPol'/>
</ospfIfP>
</l3extLIfP>
</l3extLNodeP>
Procedure
Step 4 Configure two L3Outs for the infra tenant, one for the remote leaf connections and one for the multipod IPN.
Example:
Step 5 Configure the spine switch interfaces and sub-interfaces to be used by the L3Outs.
Example:
Step 6 Configure the remote leaf switch interface and sub-interface used for communicating with the main fabric
pod.
Example:
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-leaf-vrf)# exit
apic1(config-leaf)#
apic1(config-leaf)# interface ethernet 1/49
apic1(config-leaf-if)# vlan-domain member ospfDom
apic1(config-leaf-if)# exit
apic1(config-leaf)# router ospf default
apic1(config-leaf-ospf)# vrf member tenant infra vrf overlay-1
Example
The following example provides a downloadable configuration:
apic1# configure
apic1(config)# system remote-leaf-site 5 pod 2 tep-pool 192.0.0.0/16
apic1(config)# system switch-id FDO210805SKD 109 ifav4-leaf9 pod 2
remote-leaf-site 5 node-type remote-leaf-wan
apic1(config)# vlan-domain ospfDom
apic1(config-vlan)# vlan 4-5
apic1(config-vlan)# exit
apic1(config)# tenant infra
apic1(config-tenant)# l3out rl-wan-test
apic1(config-tenant-l3out)# vrf member overlay-1
apic1(config-tenant-l3out)# exit
apic1(config-tenant)# l3out ipn-multipodInternal
apic1(config-tenant-l3out)# vrf member overlay-1
apic1(config-tenant-l3out)# exit
apic1(config-tenant)# exit
apic1(config)#
apic1(config)# spine 201
apic1(config-spine)# vrf context tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-spine-vrf)# exit
apic1(config-spine)# vrf context tenant infra vrf overlay-1 l3out ipn-multipodInternal
apic1(config-spine-vrf)# exit
apic1(config-spine)#
apic1(config-spine)# interface ethernet 8/36
apic1(config-spine-if)# vlan-domain member ospfDom
apic1(config-spine-if)# exit
apic1(config-spine)# router ospf default
apic1(config-spine-ospf)# vrf member tenant infra vrf overlay-1
apic1(config-spine-ospf-vrf)# area 5 l3out rl-wan-test
apic1(config-spine-ospf-vrf)# exit
apic1(config-spine-ospf)# exit
apic1(config-spine)#
apic1(config-spine)# interface ethernet 8/36.4
apic1(config-spine-if)# vrf member tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-spine-if)# ip router ospf default area 5
apic1(config-spine-if)# exit
apic1(config-spine)# router ospf multipod-internal
apic1(config-spine-ospf)# vrf member tenant infra vrf overlay-1
apic1(config-spine-ospf-vrf)# area 5 l3out ipn-multipodInternal
apic1(config-spine-ospf-vrf)# exit
apic1(config-spine-ospf)# exit
apic1(config-spine)#
apic1(config-spine)# interface ethernet 8/36.5
apic1(config-spine-if)# vrf member tenant infra vrf overlay-1 l3out ipn-multipodInternal
apic1(config-spine-if)# ip router ospf multipod-internal area 5
apic1(config-spine-if)# exit
apic1(config-spine)# exit
apic1(config)#
apic1(config)# leaf 101
Configure the Pod and Fabric Membership for Remote Leaf Switches Using a
Wizard
You can configure and enable Cisco APIC to discover and connect the IPN router and remote leaf switches by
using the wizard as described in this topic, or by using the alternative GUI method. See Configure the Pod and Fabric
Membership for Remote Leaf Switches Using the GUI (Without a Wizard), on page 291.
Procedure
Configure the Pod and Fabric Membership for Remote Leaf Switches Using
the GUI (Without a Wizard)
You can configure remote leaf switches using this GUI procedure, or you can use a wizard. For the wizard procedure,
see Configure the Pod and Fabric Membership for Remote Leaf Switches Using a Wizard, on page 290.
Procedure
Step 1 Configure the TEP pool for the remote leaf switches, with the following steps:
a) On the menu bar, click Fabric > Inventory.
d) Click OK.
e) Click the + on the OSPF Interface Profiles table.
f) Enter the name of the interface profile and click Next.
g) Under OSPF Profile, click OSPF Policy and choose a previously created policy or click Create OSPF
Interface Policy.
h) Click Next.
i) Click Routed Sub-Interface, click the + on the Routed Sub-Interfaces table, and enter the following
details:
• Node—Spine switch where the interface is located.
• Path—Interface connected to the IPN router
• Encap—Enter 4 for the VLAN
Step 4 To complete the fabric membership configuration for the remote leaf switches, perform the following steps:
a) Navigate to Fabric > Inventory > Fabric Membership.
At this point, the new remote leaf switches should appear in the list of switches registered in the fabric.
However, they are not recognized as remote leaf switches until you configure the Node Identity Policy,
with the following steps.
b) For each remote leaf switch, double-click on the node in the list, configure the following details, and click
Update:
• Node ID—Remote leaf switch ID
• RL TEP Pool—Identifier for the remote leaf TEP pool that you previously configured
• Node Name—Name of the remote leaf switch
After you configure the Node Identity Policy for each remote leaf switch, it is listed in the Fabric
Membership table with the role remote leaf.
Step 5 Configure the L3Out for the remote leaf location, with the following steps:
a) Navigate to Tenants > infra > Networking.
b) Right-click External Routed Networks, and choose Create Routed Outside.
c) Enter a name for the L3Out.
d) Click the OSPF checkbox to enable OSPF, and configure the OSPF details the same as on the IPN and
WAN router.
e) Check the Enable Remote Leaf check box only if the pod where you are adding the remote leaf switches
is part of a multipod fabric.
f) Choose the overlay-1 VRF.
Step 6 Configure the nodes and interfaces leading from the remote leaf switches to the WAN router, with the following
steps:
a) In the Create Routed Outside panel, click the + on the Nodes and Interfaces Protocol Profiles table.
b) Click the + on the Nodes table and enter the following details:
• Node ID—ID for the remote leaf that is connected to the WAN router
• Router ID—IP address for the WAN router
• External Control Peering—Enable only if the remote leaf switches are being added to a pod in a
multipod fabric
c) Click OK.
d) Click the + on OSPF Interface Profiles, and configure the following details for the routed sub-interface
that connects a remote leaf switch to the WAN router:
• Identity—Name of the OSPF interface profile
• Protocol Profiles—Choose a previously configured OSPF profile or create one
• Interfaces—On the Routed Sub-Interface tab, the path and IP address for the routed sub-interface
leading to the WAN router
Step 7 Configure the Fabric External Connection Profile, with the following steps:
a) Navigate to Tenants > infra > Policies > Protocol.
b) Right-click Fabric Ext Connection Policies and choose Create Intrasite/Intersite Profile.
c) Click the + on Fabric External Routing Profile.
d) Enter the name of the profile and add a subnet for one of the remote leaf switches.
e) Click Update and click Submit.
f) To add the subnet for the second remote leaf switch at the same location, click the Fabric Ext Connection
Profile you created and double-click the Fabric External Routing Profile.
g) Add the subnet for the other remote leaf switch and click Update and Close.
Step 8 To verify that the remote leaf switches are discovered by the APIC, navigate to Fabric > Inventory > Fabric
Membership, or Fabric > Inventory > Pod > Topology.
Step 9 To view the status of the links between the fabric and the remote leaf switches, enter the show ip ospf neighbors
vrf overlay-1 command on the spine switch that is connected to the IPN router.
Step 10 To view the status of the remote leaf switches in the fabric, enter the acidiag fnvread command in the APIC
NX-OS style CLI.
Note If you have remote leaf switches deployed and you downgrade the APIC software from Release 3.1(1) or later
to an earlier release that does not support the Remote Leaf feature, you must decommission the remote nodes
and remove the remote leaf-related policies (including the TEP pool) before downgrading. For more information
on decommissioning switches, see Decommissioning and Recommissioning Switches in the Cisco APIC
Troubleshooting Guide.
Before you downgrade remote leaf switches, verify that the following tasks are complete:
• Delete the vPC domain.
• Delete the vTEP - Virtual Network Adapter if using SCVMM.
• Decommission the remote leaf nodes, and wait 10 to 15 minutes after the decommission for the task to
complete.
• Delete the remote leaf to WAN L3Out in the infra tenant.
• Delete the infra L3Out with VLAN 5 if using multipod.
• Delete the remote TEP pools.
In transit routing, multiple L3Out connections within a single tenant and VRF are supported and the APIC
advertises the routes that are learned from one L3Out connection to another L3Out connection. The external
Layer 3 domains peer with the fabric on the border leaf switches. The fabric is a transit Multiprotocol
Border Gateway Protocol (MP-BGP) domain between the peers.
The configuration for external L3Out connections is done at the tenant and VRF level. The routes that are
learned from the external peers are imported into MP-BGP at the ingress leaf per VRF. The prefixes that are
learned from the L3Out connections are exported to the leaf switches only where the tenant VRF is present.
Note For cautions and guidelines for configuring transit routing, see Guidelines for Transit Routing, on page 303
In this topology, mainframes require the ACI fabric to be a transit domain for external connectivity through
a WAN router and for east-west traffic within the fabric. They push host routes to the fabric to be redistributed
within the fabric and out to external interfaces.
The VIP is the external-facing IP address for a particular site or service. A VIP is tied to one or more servers
or nodes behind a service node.
In such scenarios, the policies are administered at the demarcation points and ACI policies need not be imposed.
Layer 4 to Layer 7 route peering is a special use case of the fabric as a transit where the fabric serves as a
transit OSPF or BGP domain for multiple pods. You configure route peering to enable OSPF or BGP peering
on the Layer 4 to Layer 7 service device so that it can exchange routes with the leaf node to which it is
connected. A common use case for route peering is Route Health Injection where the SLB VIP is advertised
over OSPF or iBGP to clients outside the fabric. See L4-L7 Route Peering with Transit Fabric - Configuration
Walkthrough for a configuration walk-through of this scenario.
Figure 36: GOLF L3Outs and a Border Leaf L3Out in a Transit-Routed Configuration
OSPF: Yes, Yes*, Yes, Yes*, Yes, Yes, Yes, Yes, Yes*, Yes (combinations tested in APIC releases 1.3c and 1.2g)
eBGP over OSPF: Yes, Yes*, Yes*, Yes*, Yes, Yes*, Yes*, Yes, X, Yes* (combinations tested in APIC release 1.3c)
eBGP over Direct Connection: Yes, Yes, Yes*, Yes*, Yes*, Yes*, Yes, Yes, X, Yes (combinations tested in APIC release 1.3c)
EIGRPv4: Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, X, Yes (tested in APIC release 1.3c)
Static Route: Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes (combinations tested in APIC releases 1.3c and 1.2g)
• connec. = connection
• * = Not supported on the same leaf switch
• X = Unsupported/Untested combinations
Transit Routing with a Single L3Out Profile
Before APIC release 2.3(1f), transit routing was not supported within a single L3Out profile. In APIC release
2.3(1f) and later, you can configure transit routing with a single L3Out profile, with the following limitations:
• If the VRF is unenforced, an external subnet (l3extSubnet) of 0.0.0.0/0 can be used to allow traffic
between the routers sharing the same L3EPG.
• If the VRF is enforced, an external default subnet (0.0.0.0/0) cannot be used to match both source and
destination prefixes for traffic within the same Layer 3 EPG. To match all traffic within the same Layer 3
EPG, the following prefixes are supported:
  • IPv4
    • 0.0.0.0/1—with External Subnets for the External EPG
    • 128.0.0.0/1—with External Subnets for the External EPG
    • 0.0.0.0/0—with Import Route Control Subnet, Aggregate Import
  • IPv6
    • 0::0/1—with External Subnets for the External EPG
    • 8000::0/1—with External Subnets for the External EPG
    • 0::0/0—with Import Route Control Subnet, Aggregate Import
Shared Routes: Differences in Hardware Support
Routes shared between VRFs function correctly on Generation 2 switches (Cisco Nexus N9K switches with
"EX" or "FX" on the end of the switch model name, or later; for example, N9K-93108TC-EX). On Generation 1
switches, however, there may be dropped packets with this configuration, because the physical ternary
content-addressable memory (TCAM) tables that store routes do not have enough capacity to fully support
route parsing.
OSPF or EIGRP in Back-to-Back Configuration
Cisco APIC supports transit routing in export route control policies that are configured on the L3Out. These
policies control which transit routes (prefixes) are redistributed into the routing protocols in the L3Out. When
these transit routes are redistributed into OSPF or EIGRP, they are tagged 4294967295 to prevent routing
loops. The Cisco ACI fabric does not accept routes matching this tag when learned on an OSPF or EIGRP
L3Out. However, in the following cases it is necessary to override this behavior (a sketch follows this list):
• When connecting two Cisco ACI fabrics using OSPF or EIGRP.
• When connecting two different VRFs in the same Cisco ACI fabric using OSPF or EIGRP.
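One way to override the tag check is to assign a different route tag policy to the VRF. The fragment below is only a rough sketch under stated assumptions: the l3extRouteTagPol object, the fvRsCtxToExtRouteTagPol relation, the policy name myTagPol, and the tag value 100 are not taken from this guide and should be verified against your APIC object model before use.
<polUni>
  <fvTenant name="t1">
    <!-- Assumed route tag policy with a tag other than the default 4294967295 -->
    <l3extRouteTagPol name="myTagPol" tag="100"/>
    <fvCtx name="v1">
      <!-- Assumed relation that attaches the route tag policy to the VRF -->
      <fvRsCtxToExtRouteTagPol tnL3extRouteTagPolName="myTagPol"/>
    </fvCtx>
  </fvTenant>
</polUni>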
Advertising BD Subnets Outside the Fabric
The import and export route control policies apply only to the transit routes (the routes that are learned from
other external peers) and the static routes. The subnets internal to the fabric that are configured on the tenant
BD subnets are not advertised out using the export policy subnets. The tenant subnets are still permitted using
the IP prefix-lists and the route-maps, but they are implemented using different configuration steps. Use the
following configuration steps to advertise the tenant subnets outside the fabric (a sketch in REST API form
follows this guideline):
1. Configure the tenant subnet scope as Public Subnet in the subnet properties window.
2. Optional. Set the Subnet Control as ND RA Prefix in the subnet properties window.
3. Associate the tenant bridge domain (BD) with the external Layer 3 Outside (L3Out).
4. Create a contract (provider or consumer) association between the tenant EPG and the external EPG.
Setting the BD subnet to Public scope and associating the BD with the L3Out creates an IP prefix-list and a
route-map sequence entry on the border leaf for the BD subnet prefix.
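As a rough REST API sketch of steps 1 and 3 above (the tenant, bridge domain, subnet, and L3Out names are placeholders for this illustration, not values defined in this guide):
<polUni>
  <fvTenant name="t1">
    <fvBD name="bd1">
      <!-- Step 1: set the BD subnet scope to Public so it can be advertised externally -->
      <fvSubnet ip="10.1.1.1/24" scope="public"/>
      <!-- Step 3: associate the BD with the L3Out that advertises the subnet -->
      <fvRsBDToOut tnL3extOutName="l3out1"/>
    </fvBD>
  </fvTenant>
</polUni>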
Advertising a Default Route
For external connections to the fabric that only require a default route, there is support for originating a default
route for OSPF, EIGRP, and BGP L3Out connections. If a default route is received from an external peer, this
route can be redistributed out to another peer following the transit export route control as described earlier
in this article.
A default route can also be advertised out using a Default Route Leak policy. This policy can advertise a
default route only when it is present in the routing table, or it can always advertise a default route. The Default
Route Leak policy is configured in the L3Out connection.
When creating a Default Route Leak policy, follow these guidelines (an example fragment follows this list):
• For BGP, the Always property is not applicable.
• For BGP, when configuring the Scope property, choose Outside.
• For OSPF, the scope value Context creates a type-5 LSA, while the scope value Outside creates a type-7
LSA. Your choice depends on the area type configured in the L3Out. If the area type is Regular, set the
scope to Context. If the area type is NSSA, set the scope to Outside.
• For EIGRP, when choosing the Scope property, you must choose Context.
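For reference, the Default Route Leak policy is a child of the L3Out. The fragment below is a sketch only; the l3extDefaultRouteLeakP attribute values shown for scope and always are assumptions intended to correspond to the GUI Scope and Always options described above.
<l3extOut name="l3out1">
  <!-- Sketch: advertise a default route from this L3Out; scope "l3-out" is assumed to map to the Outside option -->
  <l3extDefaultRouteLeakP scope="l3-out" always="no"/>
</l3extOut>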
BGP
The ACI fabric supports BGP peering with external routers. BGP peers are associated with an l3extOut policy
and multiple BGP peers can be configured per l3extOut. BGP can be enabled at the l3extOut level by defining
the bgpExtP MO under an l3extOut.
Note Although the l3extOut policy contains the routing protocol (for example, BGP with its related VRF), the
L3Out interface profile contains the necessary BGP interface configuration details. Both are needed to enable
BGP.
BGP peer reachability can be through OSPF, EIGRP, a connected interface, static routes, or a loopback. iBGP
or eBGP can be used for peering with external routers. The BGP route attributes from the external router are
preserved since MP-BGP is used for distributing the external routes in the fabric. BGP enables IPv4 and/or
IPv6 address families for the VRF associated with an l3extOut. The address family to enable on a switch is
determined by the IP address type defined in bgpPeerP policies for the l3extOut. The policy is optional; if
not defined, the default will be used. Policies can be defined for a tenant and used by a VRF that is referenced
by name.
You must define at least one peer policy to enable the protocol on each border leaf (BL) switch. A peer policy
can be defined in two places, as illustrated in the sketch after this list:
• Under l3extRsPathL3OutAtt—a physical interface is used as the source interface.
• Under l3extLNodeP—a loopback interface is used as the source interface.
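The following fragment, built from the object names used elsewhere in this chapter, sketches both placements; the interface-level peer address (12.12.12.100/24) is a placeholder invented for this illustration:
<l3extOut name="l3out1">
  <bgpExtP/>
  <l3extLNodeP name="nodep1">
    <!-- Peer policy under the node profile: a loopback is used as the BGP source interface -->
    <bgpPeerP addr="15.15.15.2/24">
      <bgpAsP asn="100"/>
    </bgpPeerP>
    <l3extLIfP name="ifp1">
      <l3extRsPathL3OutAtt addr="12.12.12.3/24" ifInstT="l3-port"
          tDn="topology/pod-1/paths-101/pathep-[eth1/3]">
        <!-- Peer policy under the path attachment: the physical interface is used as the source -->
        <bgpPeerP addr="12.12.12.100/24">
          <bgpAsP asn="100"/>
        </bgpPeerP>
      </l3extRsPathL3OutAtt>
    </l3extLIfP>
  </l3extLNodeP>
</l3extOut>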
OSPF
Various host types require OSPF to enable connectivity and provide redundancy. These include mainframe
devices, external pods and service nodes that use the ACI fabric as a Layer 3 transit within the fabric and to
the WAN. Such external devices peer with the fabric through a nonborder leaf switch running OSPF. Configure
the OSPF area as an NSSA (stub) area to enable it to receive a default route and not participate in full-area
routing. Typically, existing routing deployments avoid configuration changes, so a stub area configuration is
not mandated.
You enable OSPF by configuring an ospfExtP managed object under an l3extOut. OSPF IP address family
versions configured on the BL switch are determined by the address family that is configured in the OSPF
interface IP address.
Note Although the l3extOut policy contains the routing protocol (for example, OSPF with its related VRF and
area ID), the Layer 3 external interface profile contains the necessary OSPF interface details. Both are needed
to enable OSPF.
You configure OSPF policies at the VRF level by using the fvRsCtxToOspfCtxPol relation, which you can
configure per address family. If you do not configure it, default parameters are used.
You configure the OSPF area in the ospfExtP managed object, which also exposes the required area
properties.
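Putting those pieces together, a minimal OSPF enablement under an L3Out, mirroring the objects already shown in this guide (names are illustrative), looks like the following:
<l3extOut name="l3out1">
  <!-- Protocol-level OSPF configuration: area ID and area type -->
  <ospfExtP areaId="0.0.0.0" areaType="regular"/>
  <l3extLNodeP name="nodep1">
    <l3extLIfP name="ifp1">
      <!-- Interface-level OSPF configuration referencing an OSPF interface policy -->
      <ospfIfP>
        <ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
      </ospfIfP>
    </l3extLIfP>
  </l3extLNodeP>
</l3extOut>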
EPG, but packets are dropped in the fabric. The drops occur because the APIC operates in a whitelist model
where the default behavior is to drop all data plane traffic between EPGs, unless it is explicitly permitted by
a contract. The whitelist model applies to external EPGs and application EPGs. When using security policies
that have this option configured, you must configure a contract and a security prefix.
Shared Route Control Subnet—Subnets that are learned from shared L3Outs in inter-VRF leaking must be
marked with this control before being advertised to other VRFs. Starting with APIC release 2.2(2e), shared
L3Outs in different VRFs can communicate with each other using a contract. For more about communication
between shared L3Outs in different VRFs, see the Cisco APIC Layer 3 Networking Configuration Guide.
Shared Security Import Subnet—This control is the same as External Subnets for the External EPG for Shared
L3Out learned routes. If you want traffic to flow from one external EPG to another external EPG or to another
internal EPG, the subnet must be marked with this control. If you do not mark the subnet with this control,
then routes learned from one EPG are advertised to the other external EPG, but packets are dropped in the
fabric.When using security policies that have this option configured, you must configure a contract and a
security prefix.
Aggregate Export, Aggregate Import, and Aggregate Shared Routes—This option adds 32 in front of the
0.0.0.0/0 prefix. Currently, you can only aggregate the 0.0.0.0/0 prefix for the import/export route control
subnet. If the 0.0.0.0/0 prefix is aggregated, no route control profile can be applied to the 0.0.0.0/0 network.
Aggregate Shared Route—This option is available for any prefix that is marked as Shared Route Control
Subnet.
Route Control Profile—The ACI fabric also supports the route-map set clauses for the routes that are advertised
into and out of the fabric. The route-map set rules are configured with the Route Control Profile policies and
the Action Rule Profiles.
The Route Profile Polices are created under the Layer 3 Outside connection. A Route Control Policy can be
referenced by the following objects:
• Tenant BD Subnet
• Tenant BD
• External EPG
• External EPG import/export subnet
Here is an example of using Import Route Control for BGP and setting the local preference for an external
route learned from two different Layer 3 Outsides. The Layer 3 Outside connection for the external connection
to AS300 is configured with the Import Route Control enforcement. An action rule profile is configured to
set the local preference to 200 in the Action Rule Profile for Local Preference window.
The Layer 3 Outside connection External EPG is configured with a 0.0.0.0/0 import aggregate policy to allow
all the routes. This is necessary because import route control is enforced but no prefixes should be
blocked. The import route control is enforced to allow setting the local preference. Another import subnet
151.0.1.0/24 is added with a Route Profile that references the Action Rule Profile in the External EPG settings
for Route Control Profile window.
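A sketch of the external EPG portion of such a configuration could look like the following; the EPG and profile names, and the use of l3extRsSubnetToProfile to attach the route profile that references the local-preference action rule, are assumptions for illustration rather than configuration taken from this guide:
<l3extInstP name="extEpg-AS300">
  <!-- Aggregate import of 0.0.0.0/0 so that no prefixes are blocked while import route control is enforced -->
  <l3extSubnet ip="0.0.0.0/0" scope="import-rtctrl" aggregate="import"/>
  <!-- Specific import subnet tied to the route profile that sets local preference 200 -->
  <l3extSubnet ip="151.0.1.0/24" scope="import-rtctrl">
    <l3extRsSubnetToProfile tnRtctrlProfileName="setLocalPref200" direction="import"/>
  </l3extSubnet>
</l3extInstP>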
Use the show ip bgp vrf overlay-1 command to display the MP-BGP table. The MP-BGP table on the spine
displays the prefix 151.0.1.0/24 with local preference 200 and a next hop of the border leaf for the BGP 300
Layer 3 Outside connection.
There are two special route control profiles—default-import and default-export. If you create route control
profiles named default-import and default-export, they are automatically applied at the Layer 3 Outside level
for both import and export. The default-import and default-export route control profiles cannot be configured
using the 0.0.0.0/0 aggregate.
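For example, a route control profile that is applied automatically to all exported routes on the L3Out simply uses the reserved name default-export; the match rule referenced here reuses the structure of the examples later in this chapter and is illustrative only:
<l3extOut name="l3out1">
  <!-- The reserved name default-export applies this profile at the Layer 3 Outside level for exports -->
  <rtctrlProfile name="default-export">
    <rtctrlCtxP name="ctxp1" action="permit" order="0">
      <rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule1"/>
    </rtctrlCtxP>
  </rtctrlProfile>
</l3extOut>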
A route control profile is applied in the following sequential order for fabric routes:
1. Tenant BD subnet
2. Tenant BD
3. Layer 3 Outside
The route control profile is applied in the following sequential order for transit routes:
1. External EPG prefix
2. External EPG
3. Layer 3 Outside
• Supported for tenant EPGs ←→ EPGs and tenant EPGs ←→ External EPGs.
If there are no contracts between the external prefix-based EPGs, the traffic is dropped. To allow traffic
between two external EPGs, you must configure a contract and a security prefix. As only prefix filtering is
supported, the default filter can be used in the contract.
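A minimal sketch of two prefix-based external EPGs tied together with a contract, reusing the extnw1, extnw2, and httpCtrct names from the CLI example later in this chapter (the security prefixes shown mirror the match statements in that example; treat the fragment as illustrative rather than a complete configuration):
<l3extInstP name="extnw1">
  <!-- Security prefix that classifies external traffic into this EPG -->
  <l3extSubnet ip="192.168.1.0/24" scope="import-security"/>
  <fvRsProv tnVzBrCPName="httpCtrct"/>
</l3extInstP>
<l3extInstP name="extnw2">
  <l3extSubnet ip="192.168.2.0/24" scope="import-security"/>
  <fvRsCons tnVzBrCPName="httpCtrct"/>
</l3extInstP>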
External L3Out Connection Contracts
The union of prefixes for L3Out connections is programmed on all the leaf nodes where the L3Out connections
are deployed. When more than two L3Out connections are deployed, the use of the aggregate rule 0.0.0.0/0
can allow traffic to flow between L3Out connections that do not have a contract.
You configure the provider and consumer contract associations and the security import subnets in the L3Out
Instance Profile (instP).
When security import subnets are configured and the aggregate rule, 0.0.0.0/0, is supported, the security
import subnets follow the ACL type rules. The security import subnet rule 10.0.0.0/8 matches all the addresses
from 10.0.0.0 to 10.255.255.255. It is not required to configure an exact prefix match for the prefixes to be
permitted by the route control subnets.
Be careful when configuring the security import subnets if more than two L3Out connections are configured
in the same VRF, due to the union of the rules.
Transit traffic flowing into and out of the same L3Out is dropped by policies when configured with the 0.0.0.0/0
security import subnet. This behavior is true for dynamic or static routing. To prevent this behavior, define
more specific subnets.
Figure 37:
In the examples in this chapter, the Cisco ACI fabric has two leaf switches and two spine switches that are
controlled by an APIC cluster. The border leaf switches 101 and 102 have L3Outs on them providing
connections to two routers and thus to the Internet. The goal of this example is to enable traffic to flow from
EP 1 to EP 2 on the Internet into and out of the fabric through the two L3Outs.
In this example, the tenant that is associated with both L3Outs is t1, with VRF v1.
Before configuring the L3Outs, configure the nodes, ports, functional profiles, AEPs, and a Layer 3 domain.
You must also configure the spine switches 104 and 105 as BGP route reflectors.
Configuring transit routing includes defining the following components:
1. Tenant and VRF
2. Node and interface on leaf 101 and leaf 102
3. Primary routing protocol on each L3Out (used to exchange routes between border leaf switch and external
routers; in this example, BGP)
4. Connectivity routing protocol on each L3Out (provides reachability information for the primary protocol;
in this example, OSPF)
5. Two external EPGs
6. One route map
7. At least one filter and one contract
8. Associate the contract with the external EPGs
Note For transit routing cautions and guidelines, see Guidelines for Transit Routing, on page 303.
The following table lists the names that are used in the examples in this chapter:
Property | Names for L3Out1 on Node 101 | Names for L3Out2 on Node 102
Tenant | t1 | t1
VRF | v1 | v1
Route map | rp1 with ctx1 and route destination 192.168.1.0/24 | rp2 with ctx2 and route destination 192.168.2.0/24
For an example of the XML for these prerequisites, see REST API Example: L3Out Prerequisites, on page
31.
Procedure
• The second L3Out is on node 102, which is named nodep2. Node 102 is configured with router ID
22.22.22.203. It has a routed interface ifp2 at eth1/3, with the IP address 23.23.23.3/24.
Example:
<l3extOut name="l3out1">
<l3extRsEctx tnFvCtxName="v1"/>
<l3extLNodeP name="nodep1">
<l3extRsNodeL3OutAtt rtrId="11.11.11.103" tDn="topology/pod-1/node-101"/>
<l3extLIfP name="ifp1"/>
<l3extRsPathL3OutAtt addr="12.12.12.3/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-101/pathep-[eth1/3]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
</l3extOut>
<l3extOut name="l3out2">
<l3extRsEctx tnFvCtxName="v1"/>
<l3extLNodeP name="nodep2">
<l3extRsNodeL3OutAtt rtrId="22.22.22.203" tDn="topology/pod-1/node-102"/>
<l3extLIfP name="ifp2"/>
<l3extRsPathL3OutAtt addr="23.23.23.3/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-102/pathep-[eth1/3]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
</l3extOut>
Step 3 Configure the routing protocol for both border leaf switches.
This example configures BGP as the primary routing protocol for both the border leaf switches, both with
ASN 100. It also configures Node 101 with BGP peer 15.15.15.2 and node 102 with BGP peer 25.25.25.2.
Example:
<l3extOut name="l3out1">
<l3extLNodeP name="nodep1">
<bgpPeerP addr="15.15.15.2/24"
<bgpAsP asn="100"/>
</bgpPeerP>
</l3extLNodeP>
</l3extOut>
<l3extOut name="l3out2">
<l3extLNodeP name="nodep2">
<bgpPeerP addr="25.25.25.2/24"
<bgpAsP asn="100"/>
</bgpPeerP>
</l3extLNodeP>
</l3extOut>
Example:
<l3extOut name="l3out1">
<ospfExtP areaId="0.0.0.0" areaType="regular"/>
<l3extLNodeP name="nodep1">
<l3extLIfP name="ifp1">
<ospfIfP/>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
<l3extOut name="l3out2">
<ospfExtP areaId="0.0.0.0" areaType="regular"/>
<l3extLNodeP name="nodep2">
<l3extLIfP name="ifp2">
<ospfIfP/>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
Step 7 Create the filter and contract to enable the EPGs to communicate.
This example configures the filter http-filter and the contract httpCtrct. The external EPGs and the
application EPGs are already associated with the contract httpCtrct as providers and consumers respectively.
Example:
<vzFilter name="http-filter">
<vzEntry name="http-e" etherT="ip" prot="tcp"/>
</vzFilter>
<vzBrCP name="httpCtrct" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
</rtctrlProfile>
<rtctrlProfile name="rp2">
<rtctrlCtxP name="ctxp1" action="permit" order="0">
<rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule2"/>
</rtctrlCtxP>
</rtctrlProfile>
</l3extOut>
<rtctrlSubjP name="match-rule1">
<rtctrlMatchRtDest ip="192.168.1.0/24"/>
</rtctrlSubjP>
<rtctrlSubjP name="match-rule2">
<rtctrlMatchRtDest ip="192.168.2.0/24"/>
</rtctrlSubjP>
<vzFilter name="http-filter">
<vzEntry name="http-e" etherT="ip" prot="tcp"/>
</vzFilter>
<vzBrCP name="httpCtrct" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
</fvTenant>
</polUni>
For an example of the commands for these prerequisites, see NX-OS Style CLI Example: L3Out Prerequisites,
on page 37.
Procedure
• The first L3Out is on node 101, which is named nodep1. Node 101 is configured with router ID
11.11.11.103. It has a routed interface ifp1 at eth1/3, with the IP address 12.12.12.3/24.
• The second L3Out is on node 102, which is named nodep2. Node 102 is configured with router ID
22.22.22.203. It has a routed interface ifp2 at eth1/3, with the IP address 23.23.23.3/24.
Example:
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# router-id 11.11.11.103
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# interface ethernet 1/3
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member tenant t1 vrf v1
apic1(config-leaf-if)# ip address 12.12.12.3/24
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 102
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# router-id 22.22.22.203
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# interface ethernet 1/3
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member tenant t1 vrf v1
apic1(config-leaf-if)# ip address 23.23.23.3/24
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
Step 7 Create filters (access lists) and contracts to enable the EPGs to communicate.
Example:
apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# exit
apic1(config-tenant)# contract httpCtrct
apic1(config-tenant-contract)# scope vrf
apic1(config-tenant-contract)# subject subj1
apic1(config-tenant-contract-subj)# access-group http-filter both
apic1(config-tenant-contract-subj)# exit
apic1(config-tenant-contract)# exit
apic1(config-tenant)# exit
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# exit
apic1(config)#
apic1(config)# tenant t1
apic1(config-tenant)# external-l3 epg extnw1
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# match ip 192.168.1.0/24
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# external-l3 epg extnw2
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# match ip 192.168.2.0/24
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# router bgp 100
apic1(config-leaf-bgp)# vrf member tenant t1 vrf v1
apic1(config-leaf-bgp-vrf)# neighbor 25.25.25.2
apic1(config-leaf-bgp-vrf-neighbor)# route-map rp2 in
apic1(config-leaf-bgp-vrf-neighbor)# route-map rp1 out
apic1(config-leaf-bgp-vrf-neighbor)# exit
apic1(config-leaf-bgp-vrf)# exit
apic1(config-leaf-bgp)# exit
apic1(config-leaf)# exit
apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# exit
apic1(config-tenant)# contract httpCtrct
apic1(config-tenant-contract)# scope vrf
apic1(config-tenant-contract)# subject http-subj
apic1(config-tenant-contract-subj)# access-group http-filter both
apic1(config-tenant-contract-subj)# exit
apic1(config-tenant-contract)# exit
apic1(config-tenant)# exit
apic1(config)# tenant t1
apic1(config-tenant)# external-l3 epg extnw1
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# contract provider httpCtrct
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# external-l3 epg extnw2
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# contract consumer httpCtrct
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# exit
apic1(config)#
Procedure
Step 1 To create the tenant and VRF, on the menu bar, choose Tenants > Add Tenant and in the Create Tenant
dialog box, perform the following tasks:
a) In the Name field, enter the tenant name.
b) In the VRF Name field, enter the VRF name.
c) Click Submit.
Note After this step, perform the steps twice to create two L3Outs in the same tenant and VRF for transit
routing.
Step 2 To start creating the L3Out, in the Navigation pane, expand Tenant and Networking and perform the following
steps:
a) Right-click External Routed Networks and choose Create Routed Outside.
b) In the Name field, enter a name for the L3Out.
c) From the VRF drop-down list, choose the VRF you previously created.
d) From the External Routed Domain drop-down list, choose the external routed domain that you
previously created.
e) In the area with the routing protocol check boxes, check the desired protocols (BGP, OSPF, or EIGRP).
For the example in this chapter, choose BGP and OSPF.
Depending on the protocols you choose, enter the properties that must be set.
f) Enter the OSPF details, if you enabled OSPF.
For the example in this chapter, use the OSPF area 0 and type Regular area.
g) Click the + icon to expand Nodes and Interfaces Protocol Profiles.
h) In the Name field, enter a name.
i) Click the + icon to expand Nodes.
j) From the Node ID field drop-down list, choose the node for the L3Out.
k) In the Router ID field, enter the router ID (IPv4 or IPv6 address for the router that is connected to the
L3Out).
l) (Optional) You can configure another IP address for a loopback address. Uncheck Use Router ID as
Loopback Address, expand Loopback Addresses, enter an IP address, and click Update.
m) In the Select Node dialog box, click OK.
Step 3 If you enabled BGP, click the + icon to expand BGP Peer Connectivity Profiles and perform the following
steps:
a) In the Peer Address field, enter the BGP peer address.
b) In the Local-AS Number field, enter the BGP AS number.
c) Click OK.
Step 4 Click the + icon to expand Interface Profiles (OSPF Interface Profiles if you enabled OSPF), and perform
the following actions:
a) In the Name field, enter a name for the interface profile.
b) Click Next.
c) In the Protocol Profiles dialog box, in the OSPF Policy field, choose an OSPF policy.
d) Click Next.
c) (Optional) In the Route Summarization Policy field, from the drop-down list, choose an existing route
summarization policy or create a new one as desired. Also check the Export Route Control Subnet
check box.
The type of route summarization policy depends on the routing protocols that are enabled for the L3Out.
Step 15 Choose Create Route Map/Profile, and in the Create Route Map/Profile dialog box, perform the following
actions:
a) From the drop-down list on the Name field, choose default-import.
b) In the Type field, click Match Routing Policy Only. Click Submit.
Step 16 (Optional) To enable extra communities for BGP, use the following steps:
a) Right-click Set Rules for Route Maps, and click Create Set Rules for a Route Map.
b) In the Create Set Rules for a Route Map dialog box, click the Add Communities field.
c) Follow the steps to assign multiple BGP communities per route prefix.
Step 17 To enable communications between the EPGs consuming the L3Out, create at least one filter and contract,
using the following steps:
a) In the Navigation pane, under the tenant consuming the L3Out, expand Contracts.
b) Right-click Filters and choose Create Filter.
c) In the Name field, enter a filter name.
A filter is essentially an Access Control List (ACL).
d) Click the + icon to expand Entries, to add a filter entry.
e) Add the Entry details.
For example, for a simple web filter, set criteria such as the following:
• EtherType—IP
• IP Protocol—tcp
• Destination Port Range From—Unspecified
• Destination Port Range To—https
f) Click Update.
g) In the Create Filter dialog box, click Submit.
Step 18 To add a contract, use the following steps:
a) Under Contracts, right-click Standard and choose Create Contract.
b) Enter the name of the contract.
c) Click the + icon to expand Subjects and add a subject to the contract.