Cisco 7600 Architecture
Engines:
• Supervisor 32
• Supervisor 720
• Route Switch Processor 720
Services Modules:
• Distributed Security: IPsec, Firewall, IDS, DoS Protection
Enhanced FlexWAN:
• 7500 Parity and PA Investment Protection
High-Density Ethernet Modules:
• High-Density GE and 10GE with Distributed, Line-rate Performance
The Multi-Layer Switch Feature Card (MSFC3) is a daughter card which is an IOS-based routing engine. The Supervisor itself contains the baseboard, backplane connections, GE ports, and Sup DRAM/Flash.
Roles of the SP & RP on the MSFC3
• SP
–VLAN Trunking Protocol (VTP)
–Spanning Tree (STP)
–Cisco Discovery Protocol (CDP)
–Chassis and Power Management
–Switched Port Analyzer (SPAN)
–Broadcast Suppression
–EtherChannel
• RP
–Layer 3 routing protocols like OSPF, EIGRP, BGP, etc.
–Other Layer 3 routed protocols like IPX and AppleTalk
–Manages the CLI user interface for console and telnet users in normal operating mode
Cisco 7600 Backplane
Crossbar Switch Fabric – Supervisor 720
Supervisor 720 incorporates an integrated switch fabric on the module that supports 18 fabric channels.
Each fabric channel is dual speed, running at either 20 Gbps or 8 Gbps depending on the linecard used in the slot.
Switch Fabric
Crossbar Switch Fabric
Provides multiple conflict-free paths between switching modules
Dedicated bandwidth per slot
In the 7613:
One fabric channel in slots 1–8
Two fabric channels in slots 9–13
“Dual-fabric” modules are not supported in slots 1–8 of the 7613
Note: the 7603 chassis does not support 20G fabric cards
Supervisor 720 Integrated Switch Fabric
[Figure: the Supervisor 720 integrated switch fabric connects to every slot in the chassis, with two fabric traces per slot: 8 traces in a 7604, 12 in a 7606, 18 in a 7609.]
Supervisor 720 Integrated Switch Fabric
[Figure: the 7613 is different to all other 7600 chassis. It does NOT have dual fabric channels per slot: its 18 fabric traces are split across 13 slots, so slots 1–8 each have a single fabric channel and slots 9–13 each have dual fabric channels.]
Monitoring Fabric Status and Utilization
• Cisco IOS: show fabric [active | channel-counters | errors | fpoe | medusa | status | switching-mode | utilization]
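For example, a quick fabric health check might use (output formats vary by release):
show fabric status – per-module, per-channel fabric status
show fabric utilization – ingress/egress load per fabric channel
show fabric errors – error counters per fabric channel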
[Supervisor faceplate: 2x GE uplink ports (1x SFP only, 1x SFP or RJ-45), 1x console port.]
7600 Engines Comparison
[Engine comparison table not reproduced; its footnotes follow.]
*) Since 2005, when ordering a 7600 with SUP720 and 12.2(33)SRA, a CF adapter and a 512-MByte CF card (bootdisk) are shipped instead of a Flash DIMM (bootflash)
**) Subscriber scalability is planned to be enhanced even further in future SW releases
***) Limited orderability
Supervisor 32-10GE/PFC3 Architecture
[Figure: dual port ASICs support the two 10GE uplinks and the 10/100/1000 uplink; the ports attach to the 16-Gbps shared bus (DBUS/RBUS), the EOBC, and the MET.]
PFC Comparison
Hardware Feature    3B     3BXL   3C     3CXL
FIB TCAM            256K   1M     256K   1M
Adjacency Table     1M     1M     1M     1M
ACL LOUs            64     64     64     64
Cisco ME 6524 Ethernet Switch
Overview
• Compact fixed-configuration Ethernet switch for Carrier Ethernet Access and Aggregation layer
• MEF 9 and MEF 14 certified
• 2 SKUs available:
–ME 6524GS-8S: 24x 1GE SFP downlink + 8x 1GE uplink
–ME 6524GT-8S: 24x 10/100/1000 TX downlink + 8x 1GE uplink
[Faceplate: RS232 (RJ-45) console port, flash slot, 2x USB ports.]
Cisco ME 6524 Ethernet Switch
Architecture
[Figure: 8x GE SFP uplinks and 24x GE SFP or TX downlinks feed a bus interface onto the DBUS/RBUS; the baseboard carries the SP and RP CPUs, a replication engine, and an L2/L3/L4 forwarding engine ASIC.]
Cisco ME 6524 Ethernet Switch
Baseboard Components
Feature               ME 6524
Switching Capacity    32 Gbps
Switch Processor
Route Processor
Booting Images on the Supervisor 720
[Figure: SP ROMMON starts on the SP CPU and loads the SP image from bootflash; the Supervisor 720 and MSFC3 are interconnected by the E-DBus, E-RBus and EOBC.]
SP Boot Process…
rommon 1 > b
Loading image, please wait ...
Self decompressing the image :   (SP = Supervisor Image)
############################################################################## [OK]
Restricted Rights Legend
Use, duplication, or disclosure by the Government is….
Cisco Internetwork Operating System Software
IOS (tm) s72033_sp Software (s72033_sp-PSV-M), Version 12.2(18)SXD, RELEASE SOFTWARE (fc)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2004 by cisco Systems, Inc.
Compiled Wed 28-Jul-04 22:40 by cmong
Image text-base: 0x4002100C, data-base: 0x40FC8000
00:00:10: %PFREDUN-6-ACTIVE: Initializing as ACTIVE processor   (First Supervisor)
RP Boot Process… (the RP ROMMON loads the RP image after the SP is up)
ASIC Name:
Superman / Tycho on PFC3B (XL or not)
Supertycho on PFC3C (XL or not)
Important concepts review
• Forwarding engine (EARL/PFC)
• Distributed Forwarding Engine (DFC)
• Shared Bus
• Switching Fabric
• Control channel (EOBC)
• Route processor, MSFC
• Switch processor, SP
• Inband channel (IBC)
• Replication
• Recirculation
• ASIC roles

Important concepts review – Shared Bus
The shared bus (16 Gbps FD) moves traffic between classic modules. When fabric and classic modules are in the chassis, fabric modules will either be in bus mode (not use the fabric) or send packets to classic modules via the supervisor (in truncated/compact modes)
Fabric-capable modules in truncated & compact modes (without DFC) still use the bus to send packet headers to the supervisor for the forwarding decision – up to 30M headers/sec
The bus can be stalled during fabric mode changes, OIRs and module power up/downs. DFC-equipped linecards are not affected by bus stalls
‘sh fabric switching-mode’ – fabric mode
‘sh catalyst6000 traffic’ – what the bus load is
‘sh platform hardware capacity fabric’
Important concepts review – Switching Fabric
Moves packets between fabric-capable modules (which are not in bus mode)
Integrated on the Sup720 (fabric switchover == supervisor switchover); no fabric on the Sup32
Each chassis slot has 2 fabric channels, except the 6513, in which only the lower 5 slots have 2 channels and the top 8 have 1 channel
WS-X65../WS-X68.. work at 8 Gbps per channel
WS-X67.., ES20 and ES+ work at 20 Gbps per channel
The fabric header inserted into the packet by an ingress linecard controls where the fabric will send the packet
Cards with DFC use only the fabric (always in compact mode) – not connected to the bus
‘sh fabric’
‘sh platform hardware capacity fabric’
Important concepts review – Switching Modes
Each fabric-capable module may be in a different mode (in regards to how packets flow):
- Compact: the header goes over the bus to the forwarding engine; the packet goes over the fabric
- Truncated: same as compact, but uses a longer header when bus cards are present in the chassis
- Bus: everything goes over the bus
Cards with DFC are always in compact mode and require the fabric to work
‘sh fabric switching-mode’ uses different nomenclature: compact, truncated, crossbar; cards with DFC show dCEF (example below)
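To check (and, if necessary, constrain) the mode – note that the ‘allow’ form is a config-mode command whose exact options vary by release:
sh fabric switching-mode – negotiated mode per module
fabric switching-mode allow truncated – (config) permit truncated mode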
Important concepts review – Control channel (EOBC)
Ethernet Out-of-Band Channel: the control plane connection between every linecard and the supervisor (shared 100 Mbps half duplex). Used for control communication between the Supervisor and linecards
Protocols used on the EOBC: SCP (Switch Control Protocol), IPC, ICC
When the supervisor needs to program an ASIC on a linecard (for example to enable a VLAN on a port), it sends a message to the linecard CPU over the EOBC. Linecards export statistics (forwarding, errors etc) to the sup over the EOBC
The FIB is downloaded to the PFC/DFCs via the EOBC; TCAM information is conveyed to the SP/DFCs via the EOBC
‘sh eobc’ (on each module), ‘sh scp status’
Important concepts review – Route processor, MSFC
Multilayer Switching Feature Card, known as the RP
Runs L3 protocols (OSPF/BGP/…) – computes the RIB/FIB, then downloads it to the SP/DFCs for programming into the forwarding TCAMs
Runs management (SNMP, telnet/SSH)
Does software forwarding (process/fast/CEF switching…) – punted packets go to the MSFC
Boots after the SP
sh ip interface brief
sh ip cef summary
sh ibc
sh ip traffic
sh process cpu sorted
Important concepts review – Switch processor, SP
Separate (from the MSFC) CPU and memory
Controls chassis, power, modules, OIR
Controls/monitors the fabric and PFC
Runs L2 protocols (Spanning Tree, UDLD, VTP, DTP)
Runs IOS; boots 1st, then boots the MSFC
Exchanges heartbeats and pings (over the EOBC) with every module to ensure the integrity of the system
Accessed via ‘remote login switch’ or ‘remote command switch’ (example below)
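An illustrative SP session (prompts and banner text vary by release; end the session with ^C^C^C as prompted):
Router# remote login switch
Router-sp# show version
Alternatively, ‘remote command switch <cmd>’ runs a single SP command without an interactive session.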
Important concepts review – Inband channel (IBC)
Allows the CPU to receive and send packets from/to the network
The MSFC and SP both have (separate) inband channels
IBC bandwidth is 1 Gbps
Punted/L3 protocol/management packets go over the IBC
sh ibc – RP IBC
‘test pm router counters’ – switch side of the RP IBC
remote command switch sh ibc – SP IBC
‘test pm switch counters’ – switch side of the SP IBC
Important concepts review – Replication
Replication is the process of making copies of packets (and rewriting the packet if needed)
Replication is done by replication engine ASICs
A replication engine ASIC is present on every fabric-capable card (except some service modules)
For classic modules and those in bus mode, the replication engine on the active supervisor is used
Features such as SPAN and multicast use replication
sh platform hardware central-rewrite performance (SRD + soft)
sh platform hardware central-rewrite drop (SRD + soft)
Important concepts review – Recirculation
Recirculation is the process of passing a packet several times through the forwarding engine
Recirculation is needed to implement complex features requiring multiple lookups with packet modification in between
Features such as MPLS, MVPN, GRE, NAT, … use recirculation
Between lookups the packet is buffered in (and modified by) the replication/rewrite engine
A recirculated packet consumes forwarding capacity according to the number of recirculations
sh mls statistics
sh platform hardware central-rewrite performance (SRD + soft)
sh platform hardware central-rewrite drop (SRD + soft)
Important concepts review – ASIC roles
Port ASIC
MAC function, RX/TX from the network, buffering, rewrite (in bus mode), internal header imposition/disposition
Rewrite/Replication ASIC
Packet rewrite (e.g. TTL), replication for SPAN and multicast
Bus/Forwarding engine/Fabric interface ASIC
Receives packets from the Port ASIC, interfaces with the forwarding engine to make the decision, sends the packet to the bus/fabric/port ASIC (often combined with the rewrite ASIC)
Fabric connectivity ASIC
Serializes/deserializes packets to send/receive over the backplane links to/from the fabric. Connects to the Fabric interface ASIC
Fabric ASIC
Moves packets from ingress fabric ports to egress fabric ports. Connects to the Fabric connectivity ASIC (on the linecard side) via backplane fabric links
Important concepts review – ASIC roles (diagram)
[Figure: on a linecard, port ASICs feed the replication ASIC and the bus/fabric interface ASIC, which consults the forwarding engine ASICs; a fabric connectivity ASIC links the card over the backplane fabric connection to the fabric ASIC. The bus connection is local (if DFC) or shared.]
Classic LC: WS-X6148-GE-TX
CEF256: WS-X6548-GE-TX, WS-X6582-2PA
dCEF256: WS-X6816-GBIC
CEF720: WS-X6748-GE-TX
Classic Module
Example: WS-X6148A-RJ-45 (48x 10/100)
[Figure: port ASICs provide physical connectivity, buffering, and queueing, attaching directly to the DBUS/RBUS.]
Classic Module
The classic module architecture is used in modules that attach only to the shared bus (16 Gbps). These modules always use the PFC on the supervisor to obtain a forwarding decision.
Architecture:
1. Several port ASICs provide front panel connectivity and also connect to the shared bus to transport packets to the rest of the system.
2. The port ASIC also implements packet buffers and supports packet queuing as packets are received and transmitted.
CEF256 Module
Example: WS-X6516-GBIC
[Figure: a fabric interface connects the module to the switch fabric; two replication engines (each with a MET) serve port ASICs of 4x GE each, interconnected by the local LCDBUS/LCRBUS.]
CEF256 / dCEF256 Module
CEF256 has a fabric interface which connects the module to the switch fabric, in addition to its bus connection.
The DFC replicates the Layer 2/3 forwarding logic of the PFC on the Sup by using the same ASICs. It supports local L2/L3 switching and also holds copies of the ACLs defined for QoS/security. This means that when switching a packet locally, the DFC can inspect the security and QoS policies defined on the switch and apply those policies to locally switched traffic.
CEF720 Module
Example: WS-X6748-SFP
[Figure: two 20-Gbps fabric channels terminate on a combined fabric interface and replication engine; a transparent bus interface attaches to the DBUS/RBUS; a Layer 3/4 engine performs FIB/adjacency, ACL, QoS and NetFlow lookups, and a Layer 2 engine performs L2 lookups.]
CEF720 / dCEF720 Modules
Similar to the CEF256 cards, these cards can work with or without a DFC3.
Most CEF720 cards are dual-fabric connected cards. On a dual-fabric connected CEF720 module without a DFC3, there is a connection to the classic system bus so that packet headers can be transmitted to the supervisor for the central forwarding lookup.
dCEF720 is divided into two halves (Complex A & Complex B). There is no internal data path between the complexes.
ES+ Product Family
ES+ Series 4-port 10GE, 2-port 10GE, 40-port GE, and 20-port GE line cards (example: 7600-ES+40C3CXL); dual-core CPU
ES+
• Each ES+ board consists of one baseboard, one link daughter card and one EARL daughter card.
Link Daughter Cards
[Figure: four link daughter card variants – 4x10GE (Longsword), 2x10GE (Gladius), 40x1GE (Urumi) and 20x1GE (Katar) – built from NP-3C network processors and Link FPGAs. 10GE ports use XFP optics over XFP-XFI; GE ports use SFP optics.]
7600-ES20 Hardware
[Figure: exploded view of the 7600-ES20 card – baseboard, link daughtercard, EARL 7.5 daughtercard, and supervisor subsystem.]
ES+ Modules
Hardware and Software Requirements
• Hardware requirements
– Supported by all the Cisco 7600 series routers: 7604, 7606, 7609, 7613 (not in slots 1–8) and 7606-S, 7609-S
– 7600-ES+xx is supported by all SUP720 models except PFC3A
– 7600-ES+xx is supported with the RSP720
– 7600-ES+xx is not supported by the SUP2 or SUP32
• Software requirements
– Supported from version 12.2(33)SRD
– Combo cards are supported from version 12.2(33)SRE
– CatOS and Hybrid images are not supported
Cisco 7600 TOI
[Figure: supervisor internals – the port ASIC and fabric ASIC attach to the DBUS/RBUS; the PFC’s L2 and L3 engines make the forwarding decision; the SP and RP CPUs sit on the supervisor, and the switch fabric carries the data path.]
Centralized Forwarding
[Figure: a packet from source S (blue VLAN) enters a port ASIC on CEF256 Module A; the packet header travels over the LCDBUS/DBUS to the supervisor for the forwarding decision; the entire packet then crosses the module’s fabric interface and 8-Gbps fabric channel and is delivered to destination D (red VLAN).]
Distributed Forwarding
[Figure: a packet from source S (blue VLAN) enters a port ASIC on CEF720 Module A with DFC3; the local Layer 2 engine and L3/4 engine on the DFC3 make the forwarding decision; the entire packet passes through the module’s fabric interface/replication engine, across the Supervisor Engine 720’s 720-Gbps switch fabric on 20-Gbps channels, to CEF720 Module B with DFC3 and out to destination D (red VLAN). No header has to go to the supervisor’s PFC3.]
The DBUS Header – What is it?
• Each packet that is sent to the PFC ASICs for a forwarding decision is prepended with a 32-byte DBus header, which describes the packet to be switched and carries signaling from the line cards.
[Figure: DBus header prepended ahead of the packet header and packet.]
• Some examples:
Scenario               Supervisor     Modules
Non-DFC system         RSP720-3C      61xx
Mixing w/ DFC-3B/XL    RSP720-3C      67xx w/ DFC3B
Mixing C w. CXL        RSP720-3CXL    7600-ES20-GE3C
Mixed, but CXL only    RSP720-3CXL    7600-ES20-GE3CXL
Hardware Fwd Tables
[Figure: the PFC3BXL holds the central FIB; CEF720-series cards with AFC3 or DFC3 and dCEF720-series cards with integrated DFC3 hold their own integrated FIBs; fabric channels run at 20 Gbps to the integrated switch fabric, while connections to the 16-Gbps switching bus run at 8 Gbps.]
7600(config)# control-plane
7600(config-cp)# service-policy input <name>
[Figure: the service policy is applied on the Control Plane Interface in front of the CPU; the PFC (data plane) and linecards sit below.]
Control Plane Protection
• Each PFC/DFC enforces the hardware CoPP policy independently
– Software CoPP becomes the only point of centralized enforcement
• CoPP support for multicast traffic is in software only
– Use hardware rate limiters in conjunction with CoPP software protection
• CoPP support for broadcast traffic is in software only
– Use ACLs, storm control, ARP policing (“mls qos police”), and ingress QoS policers in conjunction with CoPP software protection
[Figure: software control plane policing at the CPU; hardware control plane policing on each DFC3 and on the PFC3.]
PFC3/DFC3 Hardware Rate Limiters
CEF No Route – packet with no route in the FIB
IP Errors – IP checksum or length error
ICMP Redirect – packets requiring redirect
ICMP ACL Drop – unreachable for admin deny
RPF Failure – packets failing the RPF check
L3 Security – CBAC, IPsec and Auth-Proxy
ACL Input – NAT, TCP Intercept, ACL Log and reflexive ACLs
ACL Output
VACL Logging – CLI notice of denied VACL
Partial Shortcut – partial shortcut entries
Directly Connected – local multicast on a connected interface
IP Options – multicast packet with options
V6 *,G M Bridge – partial shortcut entries
V6 S,G M Bridge – partial shortcut entries
V6 Route Control – partial shortcut entries
V6 Default Route – multicast packet with options
V6 Second Drop – multicast packet with options
[Figure: special-case traffic to the CPU passes through the PFC3/DFC3 hardware rate limiters and the hardware “control-plane” policy; matching traffic is then policed by the software “control-plane” at the CPU.]
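As an illustration (values and exact syntax vary by release), one of the limiters above can be enabled and inspected with:
mls rate-limit unicast ip errors 100 10 – (config) police IP checksum/length-error punts to 100 pps, burst 10
show mls rate-limit – display the programmed hardware rate limiters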
Configuring CoPP
Four Required Steps:
1. Define ACLs
– Classify traffic
2. Define class-maps
– Set up classes of traffic
3. Define policy-map
– Assign a QoS policy action to each class of traffic (police, drop)
4. Apply the CoPP policy to the control plane “interface” (see the sketch below)
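A minimal end-to-end sketch of the four steps; class/policy names and the rate are illustrative, not a recommendation (the ACL is the “undesirable” example used later in this deck):
! QoS must be enabled globally or CoPP is not applied in hardware
mls qos
! Step 1: classify traffic
ip access-list extended coppacl-undesirable
 permit udp any any eq 1434
! Step 2: one match per class-map for hardware CoPP
class-map match-all copp-undesirable
 match access-group name coppacl-undesirable
! Step 3: each class needs a policing action
policy-map copp-policy
 class copp-undesirable
  police 32000 conform-action drop exceed-action drop
! Step 4: attach to the control plane “interface”
control-plane
 service-policy input copp-policy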
Monitoring CoPP
• “show access-list” displays hit counts on a per ACL entry
(ACE) basis
– The presence of hits indicates flows for that data type to the control plane as
expected
– Large numbers of packets or an unusually rapid rate increase in packets processed
may be suspicious and should be investigated
– Lack of packets may also indicate unusual behavior or that a rule may need to be
rewritten
• “show policy-map control-plane” is invaluable for reviewing and tuning
site-specific policies and troubleshooting CoPP
– Displays dynamic information about number of packets (and bytes) conforming or
exceeding each policy definition
– Useful for ensuring that appropriate traffic types and rates are reaching the route
processor
• Use SNMP queries to automate the process of reviewing service-policy
transmit and drop rates
– The Cisco QoS MIB (CISCO-CLASS-BASED-QOS-MIB) provides the primary
mechanisms for MQC-based policy monitoring via SNMP
Control Plane Protection
CoPP Support
• A new logical interface, the Control Plane Interface, has been introduced. A policer (service policy) can be applied on that interface, thus limiting the total volume of traffic destined to the control plane. This mechanism is used to protect the operational integrity of the control plane
Switch(config)# control-plane
Switch(config-cp)# service-policy input <name>
[Figure: the service policy sits on the Control Plane Interface in front of the CPU (control plane); the linecards form the forwarding plane (data plane).]
Control Plane Protection
CoPP Deployment – Step 1
• Step 1: Identify traffic of interest and classify it into multiple traffic classes:
– Routing (BGP, IGP (EIGRP, OSPF, ISIS))
– Management (telnet, TACACS, ssh, SNMP, NTP)
– Reporting (SAA) & Monitoring (ICMP)
– Critical applications and other traffic (HSRP, DHCP)
– Undesirable
– Default/Catch-All
ip access-list extended coppacl-bgp
 permit tcp host 192.168.1.1 host 10.1.1.1 eq bgp
 permit tcp host 192.168.1.1 eq bgp host 10.1.1.1
!
ip access-list extended coppacl-igp
 permit ospf any host 224.0.0.5
 permit ospf any host 224.0.0.6
 permit ospf any any
!
ip access-list extended coppacl-management
 permit tcp host 10.2.1.1 host 10.1.1.1 established
 permit tcp 10.2.1.0 0.0.0.255 host 10.1.1.1 eq 22
 permit tcp 10.86.183.0 0.0.0.255 any eq telnet
 permit udp host 10.2.2.2 host 10.1.1.1 eq snmp
 permit udp host 10.2.2.3 host 10.1.1.1 eq ntp
!
ip access-list extended coppacl-reporting
 permit icmp host 10.2.2.4 host 10.1.1.1 echo
!
ip access-list extended coppacl-monitoring
 permit icmp any any ttl-exceeded
 permit icmp any any port-unreachable
 permit icmp any any echo-reply
 permit icmp any any echo
!
ip access-list extended coppacl-critical-app
 permit ip any host 224.0.0.1
 permit udp host 0.0.0.0 host 255.255.255.255 eq bootps
 permit udp host 10.2.2.8 eq bootps any eq bootps
!
ip access-list extended coppacl-undesirable
 permit udp any any eq 1434
Control Plane Protection
CoPP Deployment – Step 2
• Step 2: Associate the identified traffic with a class, and police the traffic in each class
– QoS must be enabled globally, else CoPP will not be applied in hardware
– Always apply a policing action for each class, since the switch will ignore a class that does not have a corresponding policing action (for example "police 31500000 conform-action transmit exceed-action drop"). Alternatively, both conform-action and exceed-action could be set to transmit, but doing so will allocate a default policer as opposed to a dedicated policer with its own hardware counters
– HW CoPP classes are limited to one match per class-map
control-plane
 service-policy input copp-policy
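A sketch of the class-map/policy-map piece described above, reusing a Step 1 ACL (the rate matches the show output on the next slide; illustrative only):
class-map match-all copp-monitoring
 match access-group name coppacl-monitoring
!
policy-map copp-policy
 class copp-monitoring
  police 30000000 conform-action transmit exceed-action drop
! setting both actions to transmit would merely monitor, but allocates the shared default policer instead of a dedicated one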
Control Plane Protection
CoPP Deployment – Step 3
• Step 3: Adjust classification, and apply liberal CoPP policies for each class of traffic
– show policy-map control-plane displays dynamic information for monitoring control plane policy. Statistics include rate information and the number of packets/bytes conforming to or exceeding each traffic class
– CoPP rates on the Sup720 are in bps; pps is not possible. However, HWRL rates are in pps
Switch# show policy-map control-plane
 Control Plane Interface
 Service-policy input: copp-policy
 <snip>
 Hardware Counters:
 class-map: copp-monitoring (match-all)
   Match: access-group name coppacl-monitoring
   police :
     30000000 bps 937000 limit 937000 extended limit
   Earl in slot 5 :
     0 bytes
     5 minute offered rate 0 bps
     aggregate-forwarded 0 bytes action: transmit
     exceeded 0 bytes action: drop
     aggregate-forward 0 bps exceed 0 bps
   Earl in slot 7 :
     112512 bytes
     5 minute offered rate 3056 bps
     aggregate-forwarded 112512 bytes action: transmit
     exceeded 0 bytes action: drop
     aggregate-forward 90008 bps exceed 0 bps
 Software Counters:
 Class-map: copp-monitoring (match-all)
   1036 packets, 128464 bytes
   5 minute offered rate 4000 bps, drop rate 0 bps
   Match: access-group name coppacl-monitoring
   police:
     cir 30000000 bps, bc 937500 bytes
     conformed 1036 packets, 128464 bytes; action: transmit
     exceeded 0 packets, 0 bytes; action: drop
     conformed 4000 bps, exceed 0 bps
 <snip>
Control Plane Protection
CoPP Deployment – Step 3 (Cont.)
• Step 3: Adjust classification using the per-ACE hit counts
Switch# sh access-list
Extended IP access list coppacl-bgp
 10 permit tcp host 192.168.1.1 host 10.1.1.1 eq bgp
http://www.cisco.com/en/US/products/hw/switches/ps708/products_white_paper0900aecd802ca5d6.shtml
Control Plane Protection
CoPP Deployment Considerations - Summary
Other considerations:
• HW CoPP processing only for packets where HW FIB or HW ingress ACL determines
punting. HW egress ACL punts do not pass through CoPP.
• HW CoPP classes can only match what IP ACLs can handle in hardware
• HW CoPP supports only IPv4 and IPv6 unicast traffic
– No support for ARP ACLs, MAC ACLs…
• CoPP rates on the Sup720 are in bps; pps is not possible. However, HWRL rates are in pps
• CoPP is supported in ingress only
• Not supported today:
– SNMP support for CoPP
– ACL Log keyword support for CoPP
Additional Info and References
CoPP Best Practices
• http://www.cisco.com/web/about/security/intelligence/coppwp_gs.html
Protecting Cisco Catalyst 6500 Series Switches Using Control Plane Policing, Hardware Rate Limiting, and Access-Control Lists
• http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/white_paper_c11_553261.html
terminal length 0
show log
show clock
show tech
show tech platform
High CPU – Architecture – Performance
sh proc cpu sorted
sh ibc
sh msfc netint
sh interface
sh ip traffic
debug netdr capture
show netdr capture (see example below)
sh module
sh fabric switching-mode
sh run
sh ver
sh platform hardware capacity
sh mls statistics
sh tech-support platform
sh tech-support platform earl
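To see what is being punted during high CPU, the netdr capture listed above can be run briefly (options vary by release):
debug netdr capture rx – start capturing packets punted to the RP CPU
show netdr captured-packets – display the captured packets
undebug all – stop the capture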
IPv4 Unicast
sh platform tech unicast <dest-ip> <mask>
sh mls cef summ
sh ip cef summ
sh tech-support cef
sh tech-support ipc
IPv4 Multicast
sh platform tech ipmulticast <group> <source>
sh tech ipmulticast
Cisco 7600 TOI
Health-Monitoring
1) TestTransceiverIntegrity:
Port 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
----------------------------------------------------------------------------
U U . U . . U U . . U U . . U U U U U U U U U U
3) TestScratchRegister -------------> .
4) TestSynchedFabChannel -----------> .
<snip>
Generic Online Diagnostics
Recommendations
• Bootup diagnostics:
–Set level to complete
• On-demand diagnostics:
–Use as a pre-deployment tool: run complete diagnostics before putting hardware into the production environment
–Use as a troubleshooting tool when suspecting hardware failure
• Scheduled diagnostics:
–Schedule key diagnostics tests periodically
–Schedule all non-disruptive tests periodically
• Health-monitoring diagnostics:
–Key tests running by default
–Enable additional non-disruptive tests for specific functionalities enabled in your network: IPv6, MPLS, NAT (see the configuration sketch below)
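A configuration sketch matching these recommendations (module number and schedule are illustrative):
diagnostic bootup level complete
diagnostic start module 3 test all – on demand; complete tests can be disruptive, run them pre-deployment
diagnostic schedule module 3 test all weekly Sunday 03:00
show diagnostic result module 3 – review the results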
Q AND A