Cisco 7600 Architecture

Cisco 7600 Architecture Overview

7600 Components & Sup Architecture

The Flexible Cisco 7600

Supervisor Engines
  Supervisor 32
  Supervisor 720
  Route Switch Processor 720

Services Modules
  Distributed Security: IPsec, Firewall, IDS, DoS Protection

Enhanced FlexWAN
  7500 Parity and PA Investment Protection

High-Density Ethernet Modules
  High-Density GE and 10GE with Distributed, Line-rate Performance

High-Density Ethernet Service Modules
  High-Density GE and 10GE with Rich QoS, Distributed, Line-rate Performance

SPA Interface Processors
  Modular Carrier Cards for WAN and Metro Shared Port Adapters
Cisco 7600 Series
Chassis Form Factors

              4-slot          6-slot          9-slot          13-slot
# of Slots    4 (horizontal)  6 (horizontal)  9 (vertical)    13 (horizontal)
Height        8.75" (5RU)     12.25" (7RU)    33.5" (21RU)    30.15" (19RU)
Bandwidth     320 Gbps        480 Gbps        720 Gbps        720 Gbps
Performance   Up to 144 Mpps  Up to 240 Mpps  Up to 400 Mpps  Up to 400 Mpps
NEW!
Cisco 7600-S Series

 Improved Failover Mechanisms
   "Fast Fabric Sync" (<100 ms switchover)
 Redundant Internal Power Supplies
 Improved Internal Power Supply (7603S)
 Supports existing Power Supplies (7609S, 7606S)
 Ability to use external enhanced power (7609S)
 High-speed fan modules with 5 speeds
 Redundant fan modules (7609S)
 Up to 750W of power per slot for line cards
 Cooling capacity of 600W-750W* per slot
 ETSI Cabinet Specs compliant (7603S)
 Redundant EOBC (Ethernet Out-of-Band Channel)


Power Management
• Supervisors, switching modules, daughter cards, and Powered Devices (PDs) all require power
– Power allocation is predetermined based on part number
• Use the power calculator on cisco.com to determine power requirements and the minimum power supply
– http://www.cisco.com/go/powercalculator
• If insufficient power is available, the system powers down PDs first, then switching modules, then services modules
• Power supplies are hot-swappable
• Any mix of power supplies is supported (different wattages, or AC and DC)
Power Example
engine#sh power
system power redundancy mode = redundant
system power redundancy operationally = non-redundant
system power total = 4536.00 Watts (108.00 Amps @ 42V)
system power used = 3155.04 Watts (75.12 Amps @ 42V)
system power available = 1380.96 Watts (32.88 Amps @ 42V)
Power-Capacity PS-Fan Output Oper
PS Type Watts A @42V Status Status State
---- ------------------ ------- ------ ------ ------ -----
1 WS-CAC-2500W 2331.00 55.50 OK OK on
2 WS-CAC-6000W 5771.64 137.42 OK OK on
Pwr-Allocated Oper
Fan Type Watts A @42V State
---- ------------------ ------- ------ -----
1 FAN-MOD-09 241.50 5.75 OK
2 FAN-MOD-09 241.50 5.75 OK
Pwr-Requested Pwr-Allocated Admin Oper
Slot Card-Type Watts A @42V Watts A @42V State State
---- ------------------ ------- ------ ------- ------ ----- -----
1 7600-SIP-200 240.24 5.72 240.24 5.72 on on
2 7600-SIP-400 265.02 6.31 265.02 6.31 on on
3 7600-SIP-400 265.02 6.31 265.02 6.31 on on
4 WS-X6704-10GE 402.36 9.58 402.36 9.58 on on
5 WS-SUP720-3BXL 328.44 7.82 328.44 7.82 on on
6 unknown 328.44 7.82 328.44 7.82 on on
7 7600-SIP-200 240.24 5.72 240.24 5.72 on on
8 7600-SIP-200 240.24 5.72 240.24 5.72 on on
9 WS-X6748-SFP 362.04 8.62 362.04 8.62 on on
engine#
Command
To disable or enable redundancy (redundancy is enabled by default), from global configuration mode:

power redundancy-mode combined | redundant

Load sharing and redundancy are enabled automatically; no software configuration is required.
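For example, a minimal sketch of changing the mode and verifying it (the "engine" prompt is illustrative, matching the show power output above):

engine# configure terminal
engine(config)# power redundancy-mode combined
engine(config)# end
engine# show power | include redundancy
system power redundancy mode = combined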
Cisco 7600 Supervisor 720

The Policy Feature Card (PFC) is a daughter card containing the ASIC
complex for hardware-based forwarding and features.

The Multilayer Switch Feature Card (MSFC3) is a daughter card containing
the IOS-based routing engine.

The Supervisor baseboard itself carries the integrated crossbar switch
fabric, the GE uplink ports, the backplane connections, and the Sup
DRAM/Flash.
Roles of the SP & RP on the MSFC3
• SP
–VLAN Trunking Protocol (VTP)
–Spanning Tree (STP)
–Cisco Discovery Protocol (CDP)
–Chassis and Power Management
–Switched Port Analyzer (SPAN)
–Broadcast Suppression
–Etherchannel

• RP
–Layer 3 routing protocols like OSPF,
EIGRP, BGP, etc
–Other layer 3 routed protocols like IPX
and Appletalk
–Manages the CLI user interface for
console and telnet users in normal
operating mode
Cisco 7600 Backplane
Crossbar Switch Fabric – Supervisor 720

Supervisor 720 incorporates an integrated switch fabric on the module that supports 18
fabric channels.

Each fabric channel in this switch fabric is dual speed, supporting the channel at either
20 Gbps or 8 Gbps depending on the linecard that is used in the slot.

Crossbar Switch Fabric
 Provides multiple conflict-free paths between switching modules
 Dedicated bandwidth per slot
 18 fabric channels in total
 Two fabric channels per slot in 7603/7606/7609
 In 7613:
   One fabric channel for slots 1–8
   Two fabric channels for slots 9–13
   "Dual-fabric" modules are not supported in slots 1–8 of the 7613
Note: the 7603 chassis does not support 20G fabric cards
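To check the speed and health of each channel, use "show fabric status". The output below is an illustrative sketch (slot mix and column layout assumed), not a capture from a real system:

engine# show fabric status
 slot  channel  speed  module  fabric
                       status  status
    1      0      8G     OK      OK
    4      0     20G     OK      OK
    4      1     20G     OK      OK
    5      0     20G     OK      OK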
Supervisor 720 Integrated Switch Fabric

[Diagram: the integrated switch fabric connects to each slot over fabric
traces. Each chassis gets 2 fabric traces per slot:
  7604 – 8 traces (slots 1–4)
  7606 – 12 traces (slots 1–6)
  7609 – 18 traces (slots 1–9)]
Supervisor 720 Integrated Switch Fabric

[Diagram: the 7613 is different from all other 7600 chassis. It does NOT
have dual fabric channels on every slot; 18 traces are split across the
13 slots:
  Slots 1–8 each have a single fabric channel
  Slots 9–13 each have dual fabric channels]
Monitoring Fabric Status and Utilization
• Cisco IOS: show fabric [active | channel-counters | errors | fpoe | medusa | status | switching-mode | utilization]

NPE1#show fabric utilization
 slot    channel    speed    Ingress %    Egress %
    1          0       8G           22          23
    2          0       8G            4           9
    3          0      20G            0           1
    3          1      20G           11          12
    4          0      20G            0           1
    4          1      20G           10          13
    6          0      20G            0           1
The Backplanes
Two backplanes exist in the Cisco 7600, the "Classic BUS" and the "Switch Fabric"…

                             Classic BUS                 Switch Fabric
Backplane Type               BUS                         Crossbar
Supported by Sup720          Yes                         Yes
Supported by Sup32           Yes                         No
Speed                        16Gb                        8Gb or 20Gb
Full Duplex                  Yes                         Yes
Linecard Connection          Single Connection           Single or Dual Channel
Backplane is Shared Medium?  Yes - all modules connect   No - each module has
                             to the same BUS             discrete connection(s)
Supports Classic Linecard    Yes                         No
Supports CEF256 Linecard     Yes                         Yes
Supports CEF720 Linecard     No                          Yes
Supports Linecard with DFC   No                          Yes
Cisco 7600 Linecards
Linecard Slot Options with Supervisor 720

          7603-S*     7604        7606        7609        7613
Slot 1    Sup or LC   Sup or LC   LC Only     LC Only     LC Only
Slot 2    Sup or LC   Sup or LC   LC Only     LC Only     LC Only
Slot 3    LC Only     LC Only     LC Only     LC Only     LC Only
Slot 4                LC Only     LC Only     LC Only     LC Only
Slot 5                            Sup or LC   Sup or LC   LC Only
Slot 6                            Sup or LC   Sup or LC   LC Only
Slot 7                                        LC Only     Sup or LC
Slot 8                                        LC Only     Sup or LC
Slot 9                                        LC Only     LC Only
Slot 10                                                   LC Only
Slot 11                                                   LC Only
Slot 12                                                   LC Only
Slot 13                                                   LC Only

Different Supervisor slots for different chassis options…
In the 7613, slots 9–13 have dual fabric connectivity and slots 1–8 have
single fabric connectivity.
RSP720 Component Facts

• 720 Gbps non-blocking integrated crossbar switch fabric
• Forwarding plane daughter card: PFC-3C or PFC-3CXL
• Control plane daughter card: MSFC4
• 2x Compact Flash slots
• 2x GE ports (1x SFP only, 1x SFP and RJ45)
• 1x Console port
7600 Engines Comparison

                          SUP720            RSP720               RSP720-10GE***
Control Plane             MSFC3             MSFC4                MSFC4
Ctrl Plane CPU            600MHz MIPS       1.2GHz PowerPC       1.2GHz PowerPC
DRAM                      1GByte (DDR)      Up to 4GByte (DDR2)  Up to 4GByte (DDR2)
NVRAM                     2MByte            4MByte               4MByte
Bootflash/bootdisk*       64MByte           512MByte             512MByte
Forwarding Plane          PFC3B, PFC3BXL    PFC3C, PFC3CXL       PFC3C, PFC3CXL
MAC (CAM) Table Size
(pract./theor.)           32k/64k           80k/96k              80k/96k
IP Subscriber
Termination**             x                 32k                  32k
IP Forwarding             30Mpps            30Mpps               30Mpps
MPLS Forwarding           20Mpps            20Mpps               20Mpps

*) Since 2005, when ordering a 7600 with SUP720 and 12.2(33)SRA, a CF adapter and a 512MByte CF card (bootdisk) are shipped instead of a Flash DIMM (bootflash)
**) Subscriber scalability is planned to be enhanced even further in future SW releases
***) Limited orderability
Supervisor 32-10GE/PFC3 Architecture
Dual port ASICs support the two 10GE uplink interfaces.

[Diagram: Supervisor Engine 32 baseboard (WS-SUP32-GE-3B) with 10GE and
10/100/1000 uplinks. The baseboard carries the SP CPU and RP CPU (each
with its own DRAM and FPGA), port ASICs with 1 Gbps connections, and a
MUX onto the 16 Gbps bus (DBUS/RBUS) and EOBC. The MSFC2a daughter card
hosts the routing engine. The PFC3 daughter card hosts the counter, QoS,
FIB, adjacency and ACL TCAMs, the L3/4 NetFlow engine, the L2 engine
with its CAM, the MET, and the replication engine.]
PFC Comparison
Hardware Feature                  3B          3BXL        3C          3CXL
FIB TCAM                          256K        1M          256K        1M
Adjacency Table                   1M          1M          1M          1M
Netflow Table                     128K        256K        128K        256K
MAC (CAM) Table (theoretical)     64K         64K         96K         96K
IPv6                              H/W         H/W         H/W         H/W
Native MPLS                       Yes         Yes         Yes         Yes
EoMPLS                            Yes         Yes         Yes         Yes
Tunnels                           H/W + QoS   H/W + QoS   H/W + QoS   H/W + QoS
                                  Policies    Policies    Policies    Policies
ACE Counters                      Yes         Yes         Yes         Yes
ACL Labels                        4000        4000        4000        4000
ACL Masks (IPv4/IPv6)             4K/1K       4K/1K       4K/2K       4K/2K
ACL Entries (IPv4/IPv6)           32K/8K      32K/8K      32K/16K     32K/16K
ACL LOUs                          64          64          64          64
Hash of VLAN ID in EtherChannel   No          No          Yes         Yes


Cisco ME 6524 Ethernet Switch
ME 6524GT-8S and ME 6524GS-8S

• Compact fixed-configuration Ethernet switch for the Carrier Ethernet
access and aggregation layer
• MEF 9 and MEF 14 certified
• 2 SKUs available:
–ME 6524GS-8S: 24x 1GE SFP downlinks + 8x 1GE uplinks
–ME 6524GT-8S: 24x 10/100/1000TX downlinks + 8x 1GE uplinks
Cisco ME 6524 Ethernet Switch
Overview

• Performance: 15 Mpps (IPv4), 32-Gbps switching capacity
• Layer 2 switching (QinQ, CoS mapping)
• Up to 96,000 MAC address entries
• IPv6 and MPLS in hardware
• IP Multicast in hardware (IGMPv3, PIM-SM, PIM-SSM, bidir PIM)
• L3 routing: up to 256k routes
• Netflow: up to 128k entries
• Security: VACL, 802.1x
Cisco ME 6524 Ethernet Switch
Chassis and Main Components

• Size: 1.5 RU, 19" depth
• Redundant power supply*
• External Compact Flash slot
• 2 USB slots (1 host and 1 device)
• 24 downlink + 8 uplink ports
• 1 RS232 console port (RJ45)

[Diagram labels: MSFC2A daughterboard, PFC3C daughterboard, motherboard,
fan tray, fan tray connector bracket, power supplies, 24 downlink ports,
8 uplink ports]

*DC currently available, AC in October 2007

Cisco ME-C6524GS-8S Ethernet Switch
Front Panel

24 Downlink Ports: SFP Gigabit Ethernet, Gig 1/1-24
8 Uplink Ports: SFP Gigabit Ethernet, Gig 1/25-32
RS232 (RJ-45) console port, Compact Flash slot, 2x USB ports
Cisco ME 6524 Ethernet Switch
Architecture

[Diagram: 8x GE SFP uplinks feed an uplink port ASIC; 24x GE SFP or TX
downlinks feed stub ASICs, which feed two port ASICs. All port ASICs
connect to the DBUS/RBUS. The integrated MSFC2A (RP CPU, SP CPU, bus
interface, similar to the MSFC2A) provides multilayer switching and
routing; the integrated PFC3c (L2/L3/L4 engine, the same forwarding
engine as the PFC3C) and a replication ASIC complete the data path.]
Cisco ME 6524 Ethernet Switch
Architecture—Packet Walk-Through

[Diagram: numbered packet walk-through (steps 1–7) across the stub
ASICs, port ASICs, DBUS/RBUS, bus interface, L2/L3/L4 forwarding engine,
and replication ASIC shown in the previous architecture diagram.]
Cisco ME 6524 Ethernet Switch
Baseboard Components

Feature              ME 6524
Switching Capacity   32 Gbps
Removable Storage    Compact Flash Type 1 or 2 socket (up to 1 GB)
USB Ports            Yes (2x USB 1.1, host and device)

Feature     SP                           RP
DRAM        256 MB (default),            512 MB (default),
            upgrade to 512 MB or 1 GB    upgrade to 1 GB
NVRAM       2 MB                         512 KB
Bootflash   128 MB                       64 MB
IOS Upgrade
The Multilayer Switch Feature Card (MSFC3) on the Supervisor 720 contains both the Route
Processor (RP) and the Switch Processor (SP).

Both the RP and the SP have their own set of Flash memory (referred to as bootflash).
Booting Images on the Supervisor 720

1. SP ROMMON starts on the SP CPU.
2. SP ROMMON boots the bundled image from bootflash.
3. The SP bootloader puts the SP image in DRAM and boots the SP code.
4. The SP releases the RP ROMMON for CPU init.
5. RP ROMMON issues an SCP FindMaster() to find the SP.
6. The SP downloads the RP code across the EOBC to the RP for booting.
7. The RP code is loaded into RP DRAM, decompressed, and booted.
8. The SP inits the linecards and downloads the LC images.

[Diagram: the bundled IOS image in sup-bootflash: contains the SP code,
RP code, and LC images. The MSFC3 hosts the RP CPU, RP DRAM and packet
parsing logic; the SP side hosts the SP CPU, SP DRAM and packet parsing;
they interconnect over the E-DBUS, E-RBUS and EOBC.]
SP Boot Process…
rommon 1 > b
Loading image, please wait ...
Self decompressing the image :                 <-- SP = Supervisor image
###########################################################
################### [OK]
Restricted Rights Legend
Use, duplication, or disclosure by the Government is….
Cisco Internetwork Operating System Software
IOS (tm) s72033_sp Software (s72033_sp-PSV-M), Version
12.2(18)SXD, RELEASE SOFTWARE (fc)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2004 by cisco Systems, Inc.
Compiled Wed 28-Jul-04 22:40 by cmong
Image text-base: 0x4002100C, data-base: 0x40FC8000
00:00:10: %PFREDUN-6-ACTIVE: Initializing as ACTIVE processor   <-- first supervisor
RP Boot Process…
00:00:14: %OIR-SP-6-CONSOLE: Changing console ownership to route processor

System Bootstrap, Version 12.2(17r)S2, RELEASE SOFTWARE (fc1)   <-- RP rommon
Download Start
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Download Completed! Booting the image.
Self decompressing the image : ######## [OK]
IOS (tm) s72033_rp Software (s72033_rp-PSV-M), Version 12.2(18)SXD, RELEASE
SOFTWARE (fc)
cisco WS-C7609-NEB (R7000) processor (revision 1.0) with 983008K/65536K bytes of
memory.
SR71000 CPU at 600Mhz, Implementation 0x504, Rev 1.2, 512KB L2 Cache
Last reset from power-on
X.25 software, Version 3.0.0.
1 Enhanced FlexWAN controller (1 POS)(1 Channelized OC3/STM-1).
3 GE-WAN controllers (12 GE-WAN Ports).
Monitoring PFC3 Resources
engine#show platform hardware capacity forwarding
L2 Forwarding Resources
MAC Table usage: Module Collisions Total Used %Used
4 0 65536 34 1%
5 0 65536 34 1%
9 0 65536 34 1%

VPN CAM usage: Total Used %Used


512 0 0%
L3 Forwarding Resources
FIB TCAM usage: Total Used %Used
72 bits (IPv4, MPLS, EoM) 524288 63 1%
144 bits (IP mcast, IPv6) 262144 3 1%

detail: Protocol Used %Used


IPv4 57 1%
MPLS 6 1%
EoM 0 0%

IPv6 0 0%
IPv4 mcast 3 1%
IPv6 mcast 0 0%

Adjacency usage: Total Used %Used


1048576 174 1%

Forwarding engine load:


Module pps peak-pps peak-time
4 608767 2976194 23:40:09 UTC Wed Oct 18 2006
5 27 683 20:44:54 UTC Fri Sep 29 2006
9 0 1488294 23:49:31 UTC Wed Oct 18 2006
engine#
Important concepts review
• Forwarding engine (EARL/PFC)

A forwarding engine has 2 ASICs (or 1 in PFC3C) and makes forwarding
(FIB, Netflow), ACL and QoS decisions.

The ingress forwarding engine always makes the decision (including
egress ACL/QoS) – always look at the ingress forwarding engine.

If there is no DFC on the ingress LC, then the central PFC is used.

To see if there are DFCs:
'sh module'
'sh mls statistics'
'sh platform hardware capacity forwarding'

ASIC name:
Tycho on PFC3B (XL or not)
Supertycho on PFC3C (XL or not)
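An illustrative, trimmed sketch of the sub-module section of 'show module' showing a DFC-equipped linecard (models, serials and versions are placeholders, not a real capture):

engine# show module
<snip>
Mod  Sub-Module                  Model              Serial       Hw    Status
---- --------------------------- ------------------ ------------ ----- -------
  4  Distributed Forwarding Card WS-F6700-DFC3BXL   SAL00000000  5.3   Ok
  5  Policy Feature Card 3       WS-F6K-PFC3BXL     SAL00000000  1.8   Ok
  5  MSFC3 Daughterboard         WS-SUP720          SAL00000000  3.1   Ok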
Important concepts review
• Shared Bus

The shared BUS (16 Gbps FD) moves traffic between classic modules. When
fabric and classic modules are in the chassis, fabric modules will
either be in bus mode (not using the fabric) or send packets to classic
modules via the supervisor (in truncated/compact modes).

Fabric-capable modules in truncated & compact modes (without a DFC)
still use the BUS to send packet headers to the supervisor for the
forwarding decision – up to 30M headers/sec.

The BUS can be stalled during fabric mode changes, OIRs and module
power up/downs. DFC-equipped linecards are not affected by BUS stalls.

'sh fabric switching-mode' – fabric mode
'sh catalyst6000 traffic' – what is the BUS load
'sh platform hardware capacity fabric'
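A quick illustrative check of bus load (the traffic-meter output shape below is assumed, not a real capture):

engine# show catalyst6000 traffic-meter
  traffic meter =   0%   Never cleared
          peak  =  10%   reached at 09:57:58 UTC Mon Mar 1 2010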
Important concepts review
• Switching Fabric

Moves packets between fabric-capable modules (which are not in BUS mode).

Integrated on the Sup720 (fabric switchover == supervisor switchover);
there is no fabric on the Sup32.

Each chassis slot has 2 fabric channels, except the 6513, in which only
the lower 5 slots have 2 channels and the top 8 have 1 channel.

WS-X65../WS-X68.. work at 8 Gbps per channel.
WS-X67.., ES20 and ES+ work at 20 Gbps per channel.

A fabric header inserted into the packet by the ingress linecard
controls where the fabric will send the packet.

Cards with a DFC use only the fabric (always in compact mode) – they are
not connected to the BUS.

'sh fabric'
'sh platform hardware capacity fabric'
Important concepts review
• Switching Fabric (switching modes)

Each fabric-capable module may be in a different mode (with regard to
how packets flow):
- Compact: the header goes over the bus to the forwarding engine, the
  packet goes over the fabric
- Truncated: same as compact, but uses a longer header; used when bus
  cards are present in the chassis
- Bus: everything goes over the bus
- Cards with a DFC are always in compact mode and require the fabric to
  work

'sh fabric switching-mode' uses different nomenclature: compact and
truncated show as "Crossbar"; cards with a DFC show as "dCEF".
Important concepts review
• Control channel (EOBC)

Ethernet Out-of-Band Channel: the control plane connection between every
linecard and the supervisor (shared 100 Mbps, half duplex). Used for
control communication between the Supervisor and the linecards.

Protocols used on the EOBC: SCP (Switch Control Protocol), IPC, ICC.

When the supervisor needs to program an ASIC on a linecard (for example
to enable a VLAN on a port), it sends a message to the linecard CPU over
the EOBC. Linecards export statistics (forwarding, errors, etc.) to the
sup over the EOBC.

The FIB is downloaded to the PFC/DFCs via the EOBC; TCAM information is
conveyed to the SP/DFCs via the EOBC.

'sh eobc' (on each module), 'sh scp status'
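A minimal illustrative sequence for running the EOBC checks from the SP (commands are those listed above; output omitted, prompts assumed):

engine# remote login switch        ! attach to the SP console
engine-sp# show eobc               ! EOBC status on this module
engine-sp# show scp status         ! SCP communication status
engine-sp# exit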
Important concepts review
• Route processor, MSFC

The Multilayer Switching Feature Card, known as the RP.

Runs the L3 protocols (OSPF/BGP/…) – computes the RIB and FIB, then
downloads the FIB to the SP/DFCs for programming into the forwarding
TCAMs.

Runs management (SNMP, telnet/SSH).

Does software forwarding (process/fast/CEF switching…) – punted packets
go to the MSFC.

Boots after the SP.

sh ip interface brief
sh ip cef summary
sh ibc
sh ip traffic
sh process cpu sorted
Important concepts review
• Switch processor, SP

Separate (from the MSFC) CPU and memory.

Controls the chassis, power, modules, and OIR.

Controls/monitors the fabric and PFC.

Runs L2 protocols (Spanning Tree, UDLD, VTP, DTP).

Runs IOS; boots first, then boots the MSFC.

Exchanges heartbeats and pings (over the EOBC) with every module to
ensure the integrity of the system.

Accessed via 'remote login switch' or 'remote command switch'.
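For example (illustrative session; prompts assumed):

engine# remote command switch show version   ! run a single command on the SP
engine# remote login switch                  ! or open an interactive SP session
Trying Switch ...
Entering CONSOLE for Switch
Type "^C^C^C" to end this session
engine-sp# exit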
Important concepts review
• Inband channel (IBC)

Allows the CPU to receive and send packets from/to the network.

The MSFC and SP both have (separate) inband channels.

IBC bandwidth is 1 Gbps.

Punted, L3 protocol and management packets go over the IBC.

'sh ibc'  RP IBC
'test pm router counters'  switch side of the RP IBC
'remote command switch sh ibc'  SP IBC
'test pm switch counters'  switch side of the SP IBC
Important concepts review
• Replication

Replication is the process of making copies of packets (and rewriting
the packet if needed).

Replication is done by the replication engine ASICs.

A replication engine ASIC is present on every fabric-capable card
(except some service modules).

For classic modules and those in bus mode, the replication engine on the
active supervisor is used.

Features such as SPAN and multicast use replication.

sh platform hardware central-rewrite performance (SRD + soft)
sh platform hardware central-rewrite drop (SRD + soft)
Important concepts review
• Recirculation

Recirculation is the process of passing a packet several times through
the forwarding engine.

Recirculation is needed to implement complex features requiring multiple
lookups with packet modification in between.

Features such as MPLS, MVPN, GRE, NAT, … use recirculation.

Between lookups the packet is buffered in (and modified by) the
replication/rewrite engine.

A recirculated packet consumes forwarding capacity in proportion to the
number of recirculations.

sh mls statistics
sh platform hardware central-rewrite performance (SRD + soft)
sh platform hardware central-rewrite drop (SRD + soft)
Important concepts review
• ASIC roles

Port ASIC
  MAC function, RX/TX from the network, buffering, rewrite (in bus
  mode), internal header imposition/disposition

Rewrite/Replication ASIC
  Packet rewrite (e.g. TTL), replication for SPAN and multicast

Bus/Forwarding engine/Fabric interface ASIC
  Receives packets from the port ASIC, interfaces with the forwarding
  engine to make the decision, sends the packet to the bus/fabric/port
  ASIC (often combined with the rewrite ASIC)

Fabric connectivity ASIC
  Serializes/deserializes packets to send/receive over the backplane
  links to/from the fabric; connects to the fabric interface ASIC

Fabric ASIC
  Moves packets from ingress fabric ports to egress fabric ports;
  connects to the fabric connectivity ASIC (on the linecard side) via
  backplane fabric links
Important concepts review
• ASIC roles (continued)

[Diagram: front panel ports connect to the Port ASIC, which connects
through the Replication and BUS/Fabric interface ASICs to the forwarding
engine ASICs (local if a DFC is present, shared otherwise), then through
the Fabric connectivity ASIC to the Fabric ASIC over the backplane
fabric connection, with a separate BUS connection.]
7600 Line Cards

Line Cards and Modules – Quick Overview

The SUP720 supports:

Classic line cards
CEF256 / dCEF256 LC
CEF720 / dCEF720 LC

Examples:
Classic LC: WS-X6148-GE-TX
CEF256:  WS-X6548-GE-TX, WS-X6582-2PA
dCEF256: WS-X6816-GBIC
CEF720:  WS-X6748-GE-TX
dCEF720: a CEF720 LC with a DFC daughter card

Classic Module
Example: WS-X6416-GBIC

[Diagram: four port ASICs (4xGE each) attach directly to the DBUS and
RBUS. The port ASICs provide physical connectivity, buffering, and
queueing.]

Classic Module
Example: WS-X6148A-RJ-45

[Diagram: a single port ASIC serves 48x 10/100 ports and attaches to the
DBUS/RBUS.]

Classic Module
The classic module architecture is used in modules that attach only to
the shared bus (16 Gbps). These modules always use the PFC on the
supervisor to obtain a forwarding decision.

Classic modules are LCs without a fabric connection.

Architecture:

1. Several port ASICs provide front panel connectivity and also connect
to the shared bus to transport packets to the rest of the system.

2. The port ASIC also implements packet buffers and supports packet
queuing as packets are received and transmitted.
CEF256 Module
Example: WS-X6516-GBIC

[Diagram: a fabric interface connects the module to the DBUS/RBUS and to
a dedicated 8 Gbps fabric channel. A local linecard bus (LCDBUS/LCRBUS)
interconnects the ASICs. A replication engine with MET handles local
SPAN/multicast replication, feeding four port ASICs (4xGE each).]
CEF256 Module with DFC
Example: WS-X6516-GBIC with WS-F6K-DFC

[Diagram: same layout as the CEF256 module, plus a DFC daughter card
holding a Layer 2/4 engine for L2 and ACL/QoS lookups and a Layer 3
engine for FIB/adjacency and NetFlow lookups.]
CEF256 / dCEF256 Module
The CEF256 module has a fabric interface which connects:

 a local bus to the standard shared system bus (DBUS/RBUS)
 to the switch fabric through a dedicated 8 Gbps fabric channel

The dCEF256 architecture is similar to the CEF256 architecture, except
that it uses a DFC. A DFC is used to locally switch packets arriving at
any of the local module ports without having to forward the packet to
the supervisor for a switching decision.

The DFC replicates the Layer 2/3 forwarding logic of the PFC on the Sup
by using the same ASICs. It supports local L2/L3 switching and also
holds copies of the ACLs defined for QoS/security. This means that when
switching a packet locally, the DFC can inspect the security and QoS
policies defined on the switch and apply those policies to locally
switched traffic.
CEF720 Module
Example: WS-X6748-SFP

[Diagram: the module is split into two complexes (A and B), each with a
combined fabric interface/replication engine (with MET) on its own
20 Gbps fabric channel and two port ASICs (12xGE each). A transparent
bus interface (CFC) attaches the module to the DBUS/RBUS for control
data only.]
CEF720 Module with DFC3
Example: WS-X6748-SFP with WS-F6700-DFC3B

[Diagram: same two-complex layout, with the CFC replaced by a DFC3
daughter card holding two Layer 2 engines for L2 lookups and a Layer 3/4
engine for FIB/adjacency, ACL, QoS and NetFlow lookups.]
CEF720 / dCEF720 Modules
Similar to the CEF256 cards, these cards can work with or without a
DFC3. Most CEF720 cards are dual-fabric connected cards. On a dual-
fabric connected CEF720 module without a DFC3, there is a connection to
the classic system bus so that packet headers can be transmitted to the
supervisor for the central forwarding lookup.

dCEF720 is divided into two halves (Complex A & Complex B). There is no
internal data path between the complexes.

Packets switched within a complex are locally switched and do not leave
the line card. Packets that are switched from one complex to the other
complex on the same line card are forwarded through the crossbar switch
fabric. Packets destined for ports outside the line card are also
switched through the crossbar fabric.
SIP – SPA Interface Processors

7600-SIP-200: 1.1 Mpps; 622 Mbps with services; 98% 7500 feature parity;
4 SPA bays; 20 Gbps fabric channel and connectivity to the shared bus.
QoS/features: hierarchical shaping; dual-rate, 3-color policing; cRTP;
LFI (ATM, FR, MLPPP); classification, marking; CBWFQ/LLQ; WRED.

7600-SIP-400: 6.2 Mpps; 4 Gbps with services; 4 SPA bays; 20 Gbps fabric
channel and connectivity to the shared bus; dual-core CPU.
QoS/features: security ACLs; policing; classification, marking;
CBWFQ + LLQ with WRED; AToM functionality; VPLS and H-VPLS;
L3 MPLS VPN over GRE.

7600-SIP-600: 24 Mpps; 10 Gbps with services; 1 SPA per SIP; 20 Gbps
fabric channel; DFC3BXL + NP; 10G shaping & queuing ASIC.
QoS/features: classification, marking; CBWFQ/LLQ; WRED.
Supported SPAs
                        SIP200   SIP400      SIP600
8p E1/T1                yes      no          no
4p E3/T3                yes      no          no
4p ChDS3                yes      no          no
1p ChOC3/STM1           yes      no          no
2/4p OC3/STM1 POS       yes      yes         no
1p OC12/STM4 POS        no       yes         no
1p OC48/STM16 POS       no       yes         no
2/4p OC48/STM16 POS     no       no          yes
1p OC192/STM64 POS      no       no          yes
4p OC3/STM1 ATM         yes      yes         no
1p OC12/STM4 ATM        no       yes         no
1p OC48/STM16 ATM       no       SRB         no
24p E1/T1 CEM/ATM       no       SRB         no
1p OC3/STM1 CEM/ATM     no       SRB         no
4/8p FE                 yes      no          no
2p GE / 2+2p GEv2       no       yes / SRB   no
5p GE v2                no       SRB         SRB
5/10p GE                no       no          yes
1p 10GE / 10GEv2        no       no          yes / SRB
ES+ Product Family

ES+ Series 2-Port and 4-Port 10GE Line Cards
ES+ Series 20-Port and 40-Port GE Line Cards
ES+ Series 10-Port 1GE plus 1-Port 10GE Line Cards
ES+ Series 20-Port GE plus 2-Port 10GE Line Cards

ES+ Overview – Flavors & Terminology
Excalibur (ES+)

1G                  10G                 Ginsu [10G-OTN]     Combo (1G & 10G)
7600-ES+20G3C       7600-ES+2TG3C       7600-ES+ITU-2TG     7600-ES+20C3C
7600-ES+20G3CXL     7600-ES+2TG3CXL     7600-ES+ITU-4TG     7600-ES+20C3CXL
7600-ES+40G3C       7600-ES+4TG3C                           7600-ES+40C3C
7600-ES+40G3CXL     7600-ES+4TG3CXL                         7600-ES+40C3CXL
ES+
• Each ES+ board consists of one baseboard, one link daughter card and
one Earl daughter card.

• The baseboard has no flavors.

• Link card flavors

a. 4 ports of 10 Gigabit Ethernet (XFP form factor) - Longsword
b. 40 ports of 1 Gigabit Ethernet (SFP form factor) - Urumi
c. 2 ports of 10 Gigabit Ethernet (XFP form factor) - Gladius
d. 20 ports of 1 Gigabit Ethernet (SFP form factor) - Katar
e. 2 ports of 10 Gigabit Ethernet OTN (SFP form factor) - Ginsu20
f. 4 ports of 10 Gigabit Ethernet OTN (SFP form factor) - Ginsu40
g. 2 ports of 10 Gigabit Ethernet (XFP form factor) +
   20 ports of 1 Gigabit Ethernet (SFP form factor) - Spatha
h. 1 port of 10 Gigabit Ethernet (XFP form factor) +
   10 ports of 1 Gigabit Ethernet (SFP form factor) - Pugio

• Earl card flavors

a. 3C (Lite)
b. 3CXL
Board Details (Logical view)

[Diagram: the baseboard carries two SSA fabric interfaces, two
Metropolis and two Selene ASICs, and the LCP. The EARL7.5 daughter card
provides the forwarding engine. The link daughter cards carry NP-3C
network processors and link FPGAs feeding the front panel optics:
4x10GE (Longsword) and 2x10GE (Gladius) use XFP-XFI; 40x1GE (Urumi) and
20x1GE (Katar) use SFP.]
7600-ES20 Hardware

[Photo: baseboard with processor subsystem daughter card, link daughter
card, and EARL7.5 daughter card.]
ES+ Modules
Hardware and Software Requirements

• Hardware requirements
– Supported by all the Cisco 7600 series routers:
– 7604, 7606, 7609, 7613 (not in slots 1-8) and 7606-S, 7609-S
– 7600-ES+xx is supported by all SUP720 models except PFC3A
– 7600-ES+xx is supported with the RSP720
– 7600-ES+xx is not supported by SUP2 or SUP32
• Software requirements
– Supported from version 12.2(33)SRD
– Combo cards are supported from version 12.2(33)SRE
– CatOS and Hybrid images are not supported
Cisco 7600 TOI

Forwarding & Packet Flows

Classic Forwarding

[Diagram: (1) the packet enters a port ASIC on a classic linecard and is
placed on the DBUS; (2) the supervisor receives the header; (3) the
PFC's L2 engine and (4) L3 engine make the forwarding decision; (5) the
result is signalled on the RBUS; (6) the egress classic linecard's port
ASIC transmits the packet. The SP and RP CPUs and the switch fabric are
not in this path.]
Centralized Forwarding

[Diagram: source S on CEF256 module A (blue VLAN), destination D on
CEF256 module B (red VLAN). (1) The packet enters the port ASIC; (2) the
header is sent via the fabric interface over the DBUS to the Supervisor
720's PFC3; (3) the L2 and L3/4 engines make the decision; (4) the
result returns over the RBUS; (5) the entire packet crosses the 720 Gbps
switch fabric via the 8 Gbps fabric interfaces to module B, which
transmits it to D. Solid line = entire packet, dashed line = packet
header.]
Distributed Forwarding

[Diagram: source S on CEF720 module A with DFC3 (blue VLAN), destination
D on CEF720 module B with DFC3 (red VLAN). (1) The packet enters the
port ASIC; (2) the header goes to the local DFC3; (3) the DFC3's Layer 2
and L3/4 engines make the decision locally; (4) the entire packet
crosses the 720 Gbps switch fabric over the 20 Gbps fabric
interface/replication engines; (5) module B transmits the packet to D.
The supervisor's PFC3 is not involved in the lookup.]
The DBUS Header – What is it?
• Each packet that is sent to the PFC ASICs for a forwarding decision is prepended with a 32
byte DBUS header which describes the packet to be switched and carries signaling from the
line cards.

[DBUS header | Packet header | Packet]

Some of the relevant fields in the DBUS header include:

Control Bits - e.g. Index Direct, Trusted, Don't Learn…
Source VLAN  - VLAN of the port the packet arrived on
Source Index - LTL or VC of the source port
CoS          - Class of Service bits used for QoS
Bndl_Port    - if EtherChannel, which bundle this is in
Dest Index   - if the Dest Index bit is set, defines the output port
               (used to override the switching decision)
Verify Forwarding Mode
engine#sho platform hardware cap
System Resources
PFC operating mode: PFC3BXL
Supervisor redundancy mode: administratively sso, operationally sso
Switching resources: Module  Part number     Series      CEF mode
                          1  7600-SIP-200    CEF256      CEF
                          2  7600-SIP-400    CEF256      CEF
                          3  7600-SIP-400    CEF256      CEF
                          4  WS-X6704-10GE   CEF720      dCEF
                          5  WS-SUP720-3BXL  supervisor  CEF
                          6  WS-SUP720-3BXL  supervisor  CEF
                          7  7600-SIP-200    CEF256      CEF
                          8  7600-SIP-200    CEF256      CEF
                          9  WS-X6748-SFP    CEF720      dCEF

engine#show fabric switching-mode
Global switching mode is Compact
dCEF mode is not enforced for system to operate
Fabric module is not required for system to operate
Modules are allowed to operate in bus mode
Truncated mode is allowed, due to presence of DFC module

Module Slot    Switching Mode
     1         Crossbar
     2         Crossbar
     3         Crossbar
     4         dCEF
     5         dCEF
     7         Crossbar
     8         Crossbar
     9         dCEF
engine#
Verifying System Mode
• The 7600 selects a common mode that depends on the HW installed:
– RSP720-3C/CXL, DFC-3B/BXL, SIP600, WS-67xx cards with DFC3s

• Some examples

cylinder-rsp720#sh platform hardware pfc mode
PFC operating mode : PFC3C
. . .
cylinder-rsp720#

Non-DFC System   Mixing w/ DFC-3B/XL   Mixing C w/ CXL   Mixed but CXL only
RSP720-3C        RSP720-3C             RSP720-3CXL       RSP720-3CXL
61xx             67xx w/ DFC3B         7600-ES20-GE3C    7600-ES20-GE3CXL
67xx             SIP-600 (DFC3BXL)                       7600-ES20-10G3CXL
SIP-200                                                  any 67xx
SIP-400                                                  any SIP-200
eFlexWAN                                                 any SIP-400
Mode = PFC3C     Mode = PFC3B          Mode = PFC3C      Mode = PFC3CXL


Cisco 7600 Architecture
Understanding the different Card Types

[Diagram: the Supervisor Engine 720 carries the MSFC3 (routing table)
and the PFC3BXL (hardware forwarding tables, FIB). CEF720-series and
dCEF720-series cards connect to the integrated switch fabric over
20 Gbps channels (dCEF720 with an integrated DFC3 FIB; CEF720 with an
AFC3 or DFC3 option). Classic-series cards (e.g. FlexWAN) attach to the
16 Gbps switching bus only. CEF256-series cards (e.g. OSM, Enhanced
FlexWAN) attach to the bus and an 8 Gbps fabric channel; dCEF256-series
cards carry an integrated DFC3 FIB.]
Cisco 7600 TOI

Control Plane Policing

Hardware Accelerated Features
Control Plane Policing (CoPP)
A new logical interface - the Control Plane Interface - has been introduced on the Sup720 to allow
a policer (service policy) to be applied on that interface, thus limiting the TOTAL volume of traffic
destined to the control plane. This mechanism is used to protect the operational integrity of the
control plane…

7600(config)# control-plane
7600(config-cp)# service-policy input <name>

[Diagram: the service policy sits on the Control Plane Interface between
the PFC (data plane) / linecards and the MSFC (control plane).]
Control Plane Protection
• Each PFC/DFC enforces the hardware CoPP policy independently
– Software CoPP becomes the only point of centralized enforcement
• CoPP support for multicast traffic is in software only
– Use hardware rate limiters in conjunction with CoPP software protection
• CoPP support for broadcast traffic is in software only
– Use ACLs, storm control, ARP policing ("mls qos protocol arp police")
and ingress QoS policers in conjunction with CoPP software protection

[Diagram: traffic to the CPU passes HW control plane policing on each
DFC3 and on the PFC3, then software control plane policing at the CPU.]
Hardware Accelerated Features
CPU Rate Limiters
Ten special purpose hardware registers in the Sup720 are used to provide support for rate
limiting targeted unicast and multicast traffic destined to the Control Plane…

Unicast Rate Limiters
Rate Limiter     Description
CEF Receive      Traffic destined to the router
CEF Glean        ARP packets
CEF No Route     Packets with no route in the FIB
IP Errors        IP checksum or length errors
ICMP Redirect    Packets requiring redirects
ICMP No Route    Unreachable, no route to destination
ICMP ACL Drop    Unreachable for admin deny
RPF Failure      Packets failing the RPF check
L3 Security      CBAC, IPsec and Auth-Proxy
ACL Input        NAT, TCP Intercept, ACL log and reflexive ACLs
ACL Output       NAT, TCP Intercept, ACL log and reflexive ACLs
VACL Logging     CLI notice of denied VACL
IP Options       IP options set in packet
Capture          Optimized ACL logging

Multicast Rate Limiters
Rate Limiter         Description
IGMP                 IGMP packets
Partial Shortcut     Partial shortcut entries
Directly Connected   Local multicast on connected interface
IP Options           Multicast packets with options
V6 Direct Connected  No mroute in FIB
V6 *,G M Bridge      Partial shortcut entries
V6 S,G M Bridge      Partial shortcut entries
V6 Route Control     Partial shortcut entries
V6 Default Route     Multicast packets with options
V6 Second Drop       Multicast packets with options
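As an illustrative sketch (the rate and burst values are arbitrary assumptions, not recommendations), a unicast rate limiter is enabled from global configuration and then verified:

engine(config)# mls rate-limit unicast cef receive 10000 10   ! 10000 pps, burst of 10
engine(config)# end
engine# show mls rate-limit    ! lists each limiter, its status and configured rate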
Cisco Catalyst 6500 Control Plane Protection
CoPP and Hardware Rate Limiters
Special-case rate limiters override hardware control plane policing.

[Diagram: on the PFC3/DFC3, traffic to the CPU that matches a
special-case hardware rate limiter bypasses the hardware "control-plane"
policy; all other traffic is policed by the hardware "control-plane"
policy, and everything then passes the software "control-plane" policy
at the CPU.]
Configuring CoPP
Four Required Steps:

1. Define ACLs
– Classify traffic
2. Define class-maps
– Set up classes of traffic
3. Define a policy-map
– Assign a QoS policy action to each class of traffic (police, drop)
4. Apply the CoPP policy to the control plane "interface" (see the minimal sketch below)
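A minimal end-to-end sketch of the four steps (addresses, names and the rate are illustrative assumptions, not recommendations; the full deployment example follows on the next slides):

! Step 1: ACL classifying traffic (10.1.1.1 = router, 10.2.1.0/24 = NOC; assumed)
7600(config)# ip access-list extended coppacl-ssh
7600(config-ext-nacl)# permit tcp 10.2.1.0 0.0.0.255 host 10.1.1.1 eq 22
7600(config-ext-nacl)# exit
! Step 2: class-map matching the ACL (one match per class for HW CoPP)
7600(config)# class-map match-all copp-ssh
7600(config-cmap)# match access-group name coppacl-ssh
7600(config-cmap)# exit
! Step 3: policy-map with a policing action for the class
7600(config)# policy-map copp-policy
7600(config-pmap)# class copp-ssh
7600(config-pmap-c)# police 2560000 conform-action transmit exceed-action drop
7600(config-pmap-c)# exit
! Step 4: apply the policy to the control plane interface
7600(config)# control-plane
7600(config-cp)# service-policy input copp-policy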
Monitoring CoPP
• “show access-list” displays hit counts on a per ACL entry
(ACE) basis
– The presence of hits indicates flows for that data type to the control plane as
expected
– Large numbers of packets or an unusually rapid rate increase in packets processed
may be suspicious and should be investigated
– Lack of packets may also indicate unusual behavior or that a rule may need to be
rewritten
• “show policy-map control-plane” is invaluable for reviewing and tuning
site-specific policies and troubleshooting CoPP
– Displays dynamic information about number of packets (and bytes) conforming or
exceeding each policy definition
– Useful for ensuring that appropriate traffic types and rates are reaching the route
processor
• Use SNMP queries to automate the process of reviewing service-policy
transmit and drop rates
– The Cisco QoS MIB (CISCO-CLASS-BASED-QOS-MIB) provides the primary
mechanisms for MQC-based policy monitoring via SNMP
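For instance, a hedged sketch of polling class-map drop counters from a management host (community string, target address and object choice are illustrative; the exact instances returned depend on the policy indices on the box):

$ snmpwalk -v2c -c public 10.1.1.1 CISCO-CLASS-BASED-QOS-MIB::cbQosCMDropPkt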
Control Plane Protection
CoPP Support
• A new logical interface—the Control Plane Interface—has been introduced. A policer (service
policy) can be applied on that interface, thus limiting the total volume of traffic destined to
the control plane. This mechanism is used to protect the operational integrity of the control
plane.

Switch(config)#control-plane
Switch(config-cp)#service-policy input <name>

[Diagram: the service policy sits on the Control Plane Interface between
the forwarding plane (data plane) / linecards and the CPU (control
plane).]

CoPP is supported on the Cisco 7600 in hardware.
Control Plane Protection
CoPP Deployment—Step 1
• Step 1: Identify traffic of interest and classify it into multiple traffic classes:
– Routing (BGP, IGP (EIGRP, OSPF, ISIS))
– Management (telnet, TACACS, SSH, SNMP, NTP)
– Reporting (SAA) & Monitoring (ICMP)
– Critical applications and other traffic (HSRP, DHCP)
– Undesirable
– Default/Catch-All

ip access-list extended coppacl-bgp
 permit tcp host 192.168.1.1 host 10.1.1.1 eq bgp
 permit tcp host 192.168.1.1 eq bgp host 10.1.1.1
!
ip access-list extended coppacl-igp
 permit ospf any host 224.0.0.5
 permit ospf any host 224.0.0.6
 permit ospf any any
!
ip access-list extended coppacl-management
 permit tcp host 10.2.1.1 host 10.1.1.1 established
 permit tcp 10.2.1.0 0.0.0.255 host 10.1.1.1 eq 22
 permit tcp 10.86.183.0 0.0.0.255 any eq telnet
 permit udp host 10.2.2.2 host 10.1.1.1 eq snmp
 permit udp host 10.2.2.3 host 10.1.1.1 eq ntp
!
ip access-list extended coppacl-reporting
 permit icmp host 10.2.2.4 host 10.1.1.1 echo
!
ip access-list extended coppacl-monitoring
 permit icmp any any ttl-exceeded
 permit icmp any any port-unreachable
 permit icmp any any echo-reply
 permit icmp any any echo
!
ip access-list extended coppacl-critical-app
 permit ip any host 224.0.0.1
 permit udp host 0.0.0.0 host 255.255.255.255 eq bootps
 permit udp host 10.2.2.8 eq bootps any eq bootps
!
ip access-list extended coppacl-undesirable
 permit udp any any eq 1434
Control Plane Protection
CoPP Deployment—Step 2
• Step 2: Associate the identified traffic with a class, and police the traffic in each class
– Must enable QoS globally ("mls qos"), else CoPP will not be applied in hardware
– Always apply a policing action for each class, since the switch will ignore a class that does not have a corresponding policing
action (for example "police 31500000 conform-action transmit exceed-action drop"). Alternatively, both conform-action and
exceed-action could be set to transmit, but doing so will allocate a default policer as opposed to a dedicated policer with its
own hardware counters.
– HW CoPP classes are limited to one match per class-map

mls qos
!
class-map match-all copp-bgp
 match access-group name coppacl-bgp
class-map match-all copp-igp
 match access-group name coppacl-igp
class-map match-all copp-management
 match access-group name coppacl-management
class-map match-all copp-reporting
 match access-group name coppacl-reporting
class-map match-all copp-monitoring
 match access-group name coppacl-monitoring
class-map match-all copp-critical-app
 match access-group name coppacl-critical-app
class-map match-all copp-undesirable
 match access-group name coppacl-undesirable
!
policy-map copp-policy
 class copp-bgp
  police 30000000 conform-action transmit exceed-action drop
 class copp-igp
  police 30000000 conform-action transmit exceed-action drop
 class copp-management
  police 30000000 conform-action transmit exceed-action drop
 class copp-reporting
  police 30000000 conform-action transmit exceed-action drop
 class copp-monitoring
  police 30000000 conform-action transmit exceed-action drop
 class copp-critical-app
  police 30000000 conform-action transmit exceed-action drop
 class copp-undesirable
  police 30000000 conform-action transmit exceed-action drop
 class class-default
  police 30000000 conform-action transmit exceed-action drop
!
control-plane
 service-policy input copp-policy
Control Plane Protection
CoPP Deployment—Step 3
• Step 3: Adjust classification, and apply liberal CoPP policies for each class of traffic
– "show policy-map control-plane" displays dynamic information for monitoring the
control plane policy. Statistics include rate information and the number of
packets/bytes conforming to or exceeding each traffic class
– CoPP rates on the Sup720 are in bps—pps is not possible. However, HWRL rates are in pps

Switch# show policy-map control-plane
 Service-policy input: copp-policy
 <snip>
 Hardware Counters:
  class-map: copp-monitoring (match-all)
   Match: access-group name coppacl-monitoring
   police :
    30000000 bps 937000 limit 937000 extended limit
   Earl in slot 5 :
    0 bytes
    5 minute offered rate 0 bps
    aggregate-forwarded 0 bytes action: transmit
    exceeded 0 bytes action: drop
    aggregate-forward 0 bps exceed 0 bps
   Earl in slot 7 :
    112512 bytes
    5 minute offered rate 3056 bps
    aggregate-forwarded 112512 bytes action: transmit
    exceeded 0 bytes action: drop
    aggregate-forward 90008 bps exceed 0 bps

 Software Counters:
  Class-map: copp-monitoring (match-all)
   1036 packets, 128464 bytes
   5 minute offered rate 4000 bps, drop rate 0 bps
   Match: access-group name coppacl-monitoring
   police:
    cir 30000000 bps, bc 937500 bytes
    conformed 1036 packets, 128464 bytes; action: transmit
    exceeded 0 packets, 0 bytes; action: drop
    conformed 4000 bps, exceed 0 bps
 <snip>
Control Plane Protection
CoPP Deployment—Step 3 (Cont.)
• "show ip access-lists" provides packet count statistics per ACE. The absence of any hits on
an entry indicates a lack of traffic matching the ACE criteria—the rule might be rewritten
• Hardware ACL hit counters are available in PFC3B/BXL for the security ACL TCAM only (not
the QoS ACL TCAM)

Switch#sh access-list
Extended IP access list coppacl-bgp
 10 permit tcp host 192.168.1.1 host 10.1.1.1 eq bgp
 20 permit tcp host 192.168.1.1 eq bgp host 10.1.1.1
Extended IP access list coppacl-critical-app
 10 permit ip any host 224.0.0.1
 20 permit udp host 0.0.0.0 host 255.255.255.255 eq bootps
 30 permit udp host 10.2.2.8 eq bootps any eq bootps
Extended IP access list coppacl-igp
 10 permit ospf any host 224.0.0.5 (64062 matches)
 20 permit ospf any host 224.0.0.6
 30 permit ospf any any (17239 matches)
Extended IP access list coppacl-management
 10 permit tcp host 10.2.1.1 host 10.1.1.1 established
 20 permit tcp 10.2.1.0 0.0.0.255 host 10.1.1.1 eq 22
 30 permit tcp 10.86.183.0 0.0.0.255 any eq telnet
 40 permit udp host 10.2.2.2 host 10.1.1.1 eq snmp
 50 permit udp host 10.2.2.3 host 10.1.1.1 eq ntp
Extended IP access list coppacl-monitoring
 10 permit icmp any any ttl-exceeded (120 matches)
 20 permit icmp any any port-unreachable
 30 permit icmp any any echo-reply (17273 matches)
 40 permit icmp any any echo (5 matches)
Extended IP access list coppacl-reporting
 10 permit icmp host 10.2.2.4 host 10.1.1.1 echo
Extended IP access list coppacl-undesirable
 10 permit udp any any eq 1434
Control Plane Protection
CoPP Deployment—Step 4
• Step 4: Fine-tune the control plane policy
– Narrow the ACL permit statements to only allow known authorized source addresses and,
depending on the class defined, apply an appropriate policy:
• Routing protocol traffic—no rate limit or a very conservative rate limit
• Management traffic—conservative rate limit
• Reporting traffic—conservative rate limit
• Monitoring traffic—conservative rate limit
• Critical traffic—conservative rate limit
• Default traffic—low rate limit
• Undesirable traffic—drop

policy-map copp-policy
 class coppclass-bgp
  police 15000000 conform-action transmit exceed-action drop
 class coppclass-igp
  police 15000000 conform-action transmit exceed-action drop
 class coppclass-management
  police 2560000 conform-action transmit exceed-action drop
 class coppclass-reporting
  police 1000000 conform-action transmit exceed-action drop
 class coppclass-monitoring
  police 1000000 conform-action transmit exceed-action drop
 class coppclass-critical-app
  police 7500000 conform-action transmit exceed-action drop
 class coppclass-undesirable
  police 32000 conform-action transmit exceed-action drop
 class class-default
  police 1000000 conform-action transmit exceed-action drop
Sample ACL’s
• Routing – ACL 120
• ! -- ACL for CoPP Routing class-map
• access-list 120 permit tcp any gt 1024 <router receive block> eq bgp
• access-list 120 permit tcp any eq bgp <router receive block> gt 1024 established
• access-list 120 permit tcp any gt 1024 <router receive block> eq 639
• access-list 120 permit tcp any eq 639 <router receive block> gt 1024 established
• access-list 120 permit tcp any <router receive block> eq 646
• access-list 120 permit udp any <router receive block> eq 646
• access-list 120 permit ospf any <router receive block>
• access-list 120 permit ospf any host 224.0.0.5
• access-list 120 permit ospf any host 224.0.0.6
• access-list 120 permit eigrp any <router receive block>
• access-list 120 permit eigrp any host 224.0.0.10
• access-list 120 permit udp any any eq pim-auto-rp
• ---etc--- for other routing protocol traffic...
• Management – ACL 121
• ! -- ACL for CoPP Management class
• access-list 121 permit tcp <NOC block> <router receive block> eq telnet
• access-list 121 permit tcp <NOC block> eq telnet <router receive block> established
• access-list 121 permit tcp <NOC block> <router receive block> eq 22
• access-list 121 permit tcp <NOC block> eq 22 <router receive block> established
• access-list 121 permit udp <NOC block> <router receive block> eq snmp
• access-list 121 permit tcp <NOC block> <router receive block> eq www
• access-list 121 permit udp <NOC block> <router receive block> eq 443
• access-list 121 permit tcp <NOC block> <router receive block> eq ftp
• access-list 121 permit tcp <NOC block> <router receive block> eq ftp-data
• access-list 121 permit udp <NOC block> <router receive block> eq syslog
• access-list 121 permit udp <DNS block> eq domain <router receive block>
• access-list 121 permit udp <NTP block> <router receive block> eq ntp
• ---etc--- for known good management traffic...

• Catch-All IP – ACL 124


• ! -- ACL for CoPP Catch-All class-map
• access-list 124 permit tcp any any
• access-list 124 permit udp any any
• access-list 124 permit icmp any any
• access-list 124 permit ip any any
• Normal – ACL 122
• !-- ACL for CoPP Normal class-map
• access-list 122 permit icmp any <router receive block> echo
• access-list 122 permit icmp any <router receive block> echo-reply
• access-list 122 permit icmp any <router receive block> ttl-exceeded
• access-list 122 permit icmp any <router receive block> packet-too-big
• access-list 122 permit icmp any <router receive block> port-unreachable
• access-list 122 permit icmp any <router receive block> unreachable
• access-list 122 permit pim any any
• access-list 122 permit igmp any any
• access-list 122 permit gre any any
• ---etc--- for other known good traffic...

• Undesirable – ACL 123


• ! -- ACL for CoPP Undesirable class-map
• access-list 123 permit icmp any any fragments
• access-list 123 permit udp any any fragments
• access-list 123 permit tcp any any fragments
• access-list 123 permit ip any any fragments
• access-list 123 permit udp any any eq 1434
• access-list 123 permit tcp any any eq 639 rst
• access-list 123 permit tcp any any eq bgp rst
• --- etc. all other known bad things here–
Control Plane Protection
CoPP Deployment Considerations - Summary
Top deployment considerations:
• No HW CoPP processing unless "mls qos" is enabled: this also enables port-level
QoS mechanisms
• HW CoPP will ignore a class that does not have a corresponding policing action
• HW CoPP decisions are per forwarding engine
– SW CoPP polices the aggregate traffic
• HW CoPP does not support IP/ARP broadcast/multicast traffic
– Use multicast HWRL/Dynamic ARP Inspection or "mls qos protocol arp"/storm control in conjunction
– Remember, software CoPP will still match multicast and broadcast traffic, so you MUST classify these
packets in CoPP policies

http://www.cisco.com/en/US/products/hw/switches/ps708/products_white_paper0900aecd802ca5d6.shtml
Control Plane Protection
CoPP Deployment Considerations - Summary

Other considerations:
• HW CoPP processing only for packets where HW FIB or HW ingress ACL determines
punting. HW egress ACL punts do not pass through CoPP.
• HW CoPP classes can only match what IP ACLs can handle in hardware
• HW CoPP supports only IPv4 and IPv6 unicast traffic
– No support for ARP ACLs, MAC ACLs…
• CoPP rates on Sup720 are bps—pps is not possible. However, HWRL rates are in pps
• CoPP is supported in ingress only
• Not supported today:
– SNMP support for CoPP
– ACL Log keyword support for CoPP
Additional Info and References
CoPP Best Practices
• http://www.cisco.com/web/about/security/intelligence/coppwp_gs.html

Protecting Cisco Catalyst 6500 Series Switches Using Control Plane Policing, Hardware
Rate Limiting, and Access-Control Lists
• http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/white_paper_c11_553261.html

Configuring Denial of Service Protection


• http://www.cisco.com/en/US/docs/routers/7600/ios/12.2SR/configuration/guide/dos.html
Cisco 7600 TOI

What logs should be captured?


Troubleshooting Flow

Start with (on the RP):
Log the session to your desktop
conf t
 service timestamps debug datetime msec
 service timestamps log datetime msec
 service internal
 no logging console
 line con 0
  exec prompt timestamp
 line vty 0 4
  exec prompt timestamp
end
terminal length 0
show log
show clock
show tech
show tech platform

Determine the problem type:
- CPU
- packet loss
- unicast forwarding
- multicast forwarding
- QoS
- performance
- MPLS forwarding
--> Move to the next slides

High CPU:                   Architecture / performance:
sh proc cpu sorted          sh module
sh ibc                      sh fabric switching-mode
sh msfc netint              sh run
sh interface                sh ver
sh ip traffic               sh platform hardware capacity
debug netdr capture         sh mls statistics
show netdr capture          sh tech-support platform
                            sh tech-support platform earl

IPv4 Unicast:
sh platform tech unicast <dest-ip> <mask>
sh mls cef summary
sh ip cef summary
sh tech-support cef
sh tech-support ipc

IPv4 Multicast:
sh platform tech ipmulticast <group> <source>
sh tech ipmulticast
Cisco 7600 TOI

GOLD DIAGNOSTIC TEST & SCH OVERVIEW

Generic Online Diagnostics
How Does GOLD Work?
• Diagnostic packet switching tests verify that the system is operating correctly:
 •Is the supervisor control plane and forwarding plane functioning properly?
 •Is the standby supervisor ready to take over?
 •Are line cards forwarding packets properly?
 •Are all ports working?
 •Is the backplane connection working?
• Other types of diagnostic tests, including memory and error
correlation tests, are also available

[Diagram: diagnostic packets loop from the active supervisor's CPU
through its forwarding engine, the fabric, and the linecard forwarding
engines, with a standby supervisor present.]
Generic Online Diagnostics
What Type of Failure Does GOLD Detect?
• Diagnostics capabilities built into hardware
• Depending on the hardware, GOLD can catch:
–Port failure
–Bent backplane connector
–Bad fabric connection
–Malfunctioning forwarding engines
–Stuck control plane
–Bad memory
Generic Online Diagnostics
Diagnostic Operation

Boot-Up Diagnostics: run during system bootup, line card OIR or
supervisor switchover; makes sure faulty hardware is taken out of service
Switch(config)# diagnostic bootup level complete

Runtime Diagnostics

Health-Monitoring: non-disruptive tests run in the background; serves as
an HA trigger
Switch(config)# diagnostic monitor module 5 test 2
Switch(config)# diagnostic monitor interval module 5 test 2 00:00:15

On-Demand: all diagnostic tests can be run on demand for troubleshooting
purposes; can also be used as a pre-deployment tool
Switch# diagnostic start module 4 test 8
Module 4: Running test(s) 8 may disrupt normal system operation
Do you want to continue? [no]: y
Switch# diagnostic stop module 4

Scheduled: schedule diagnostic tests for verification and
troubleshooting purposes
Switch(config)# diagnostic schedule module 4 test 1 port 3 on Jan 3 2005 23:32
Switch(config)# diagnostic schedule module 4 test 2 daily 14:45
Generic Online Diagnostics
An Example: Supervisor Datapath Coverage
• Monitors the forwarding path between the switch processor, route processor,
  and forwarding engine
• Runs periodically every 15 seconds after the system is online (configurable)
• Ten consecutive failures are treated as fatal and will result in a supervisor
  switchover or supervisor reset
[Diagram: MSFC (RP CPU) and PFC3 (L3/4 engine, L2 engine, SP CPU) connected to
the port ASIC, the fabric interface/replication engine, and the switch fabric
over the 16-Gbps DBUS/RBUS and the EOBC]

Switch(config)# diagnostic monitor module 5 test 2
Switch(config)# diagnostic monitor interval module 5 test 2 00:00:15
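On releases that support the diagnostic monitor threshold form, the consecutive-failure count that triggers the switchover can be tuned per test. A sketch, assuming that form is available:

Switch(config)# diagnostic monitor threshold module 5 test 2 failure count 10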
Generic Online Diagnostics
Using Diagnostics as a Pre-Deployment Tool
The Order in Which Tests Are Run Matters
• Run diagnostics first on line cards, then on supervisors
• Run packet switching tests first, run memory tests after
Switch# diagnostic start module 6 test all
Module 6: Running test(s) 8 will require resetting the line card after the test has completed
Module 6: Running test(s) 1-2,5-9 may disrupt normal system operation
Do you want to continue? [no]: yes
*Mar 25 22:43:16: %DIAG-SP-6-TEST_RUNNING: Module 6: Running TestTransceiverIntegrity{ID=1} ...
*Mar 25 22:43:16: %DIAG-SP-3-TEST_SKIPPED: Module 6: TestTransceiverIntegrity{ID=1} is skipped
*Mar 25 22:43:16: %LINK-5-CHANGED: Interface GigabitEthernet6/1, changed state to administratively down
*Mar 25 22:43:16: %DIAG-SP-6-TEST_RUNNING: Module 6: Running TestLoopback{ID=2} ...
*Mar 25 22:43:16: %DIAG-SP-6-TEST_RUNNING: Module 6: Running TestAsicMemory{ID=8} ...
*Mar 25 22:43:16: SP: ******************************************************************
*Mar 25 22:43:16: SP: * WARNING:
*Mar 25 22:43:16: SP: * ASIC Memory test on module 6 may take up to 2hr 30min.
*Mar 25 22:43:16: SP: * During this time, please DO NOT perform any packet switching.
*Mar 25 22:43:16: SP: ******************************************************************
<snip>
Switch# diagnostic start module 5 test all
Module 5: Running test(s) 27-30 will power-down line cards and standby supervisor should be power-down
manually and supervisor should be reset after the test
Module 5: Running test(s) 26 will shut down the ports of all linecards and supervisor should be reset
after the test
Module 5: Running test(s) 3,5,8-10,19,22-23,26-31 may disrupt normal system operation
Do you want to continue? [no]: yes
<snip>
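For burn-in runs, the on-demand behavior itself can usually be tuned before starting. A sketch (exec-mode commands; keywords may vary by release):

Switch# diagnostic ondemand iteration 3             ! repeat each test 3 times
Switch# diagnostic ondemand action-on-failure stop  ! abort the run on first failure
Switch# diagnostic start module 6 test all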
Generic Online Diagnostics
GOLD Operation Example
7600# show diagnostic content mod 5
Module 5: Supervisor Engine 720 (Active)
<snip>
                                                        Testing Interval
  ID   Test Name                          Attributes    (day hh:mm:ss.ms)
  ==== ================================== ============  =================
   1) TestScratchRegister -------------> ***N****A***   000 00:00:30.00
   2) TestSPRPInbandPing --------------> ***N****A***   000 00:00:15.00
   3) TestTransceiverIntegrity --------> **PD****I***   not configured
   4) TestActiveToStandbyLoopback -----> M*PDS***I***   not configured
   5) TestLoopback --------------------> M*PD****I***   not configured
   6) TestNewIndexLearn ---------------> M**N****I***   not configured
   7) TestDontConditionalLearn --------> M**N****I***   not configured
   8) TestBadBpduTrap -----------------> M**D****I***   not configured
   9) TestMatchCapture ----------------> M**D****I***   not configured
  10) TestProtocolMatchChannel --------> M**D****I***   not configured
  11) TestFibDevices ------------------> M**N****I***   not configured
  12) TestIPv4FibShortcut -------------> M**N****I***   not configured
  13) TestL3Capture2 ------------------> M**N****I***   not configured
  14) TestIPv6FibShortcut -------------> M**N****I***   not configured
  15) TestMPLSFibShortcut -------------> M**N****I***   not configured
  16) TestNATFibShortcut --------------> M**N****I***   not configured
  17) TestAclPermit -------------------> M**N****I***   not configured
  18) TestAclDeny ---------------------> M**N****A***   000 00:00:05.00
  19) TestQoSTcam ---------------------> M**D****I***   not configured
<snip>

Diagnostics test suite attributes:
  M/C/* - Minimal bootup level test / Complete bootup level test / NA
  B/*   - Basic ondemand test / NA
  P/V/* - Per port test / Per device test / NA
  D/N/* - Disruptive test / Non-disruptive test / NA
  S/*   - Only applicable to standby unit / NA
  X/*   - Not a health monitoring test / NA
  F/*   - Fixed monitoring interval test / NA
  E/*   - Always enabled monitoring test / NA
  A/I   - Monitoring is active / Monitoring is inactive
  R/*   - Power-down line cards and need reset supervisor / NA
  K/*   - Require resetting the line card after the test has completed / NA
  T/*   - Shut down all ports and need reset supervisor / NA
Generic Online Diagnostics
Catalyst GOLD Operation Example (Cont.)
  20) TestL3VlanMet -------------------> M**N****I***   not configured    n/a
  21) TestIngressSpan -----------------> M**N****I***   not configured    n/a
  22) TestEgressSpan ------------------> M**D****I***   not configured    n/a
  23) TestNetflowInlineRewrite --------> C*PD****I***   not configured    n/a
  24) TestFabricSnakeForward ----------> M**N****I***   not configured    n/a
  25) TestFabricSnakeBackward ---------> M**N****I***   not configured    n/a
  26) TestTrafficStress ---------------> ***D****I**T   not configured    n/a
  27) TestFibTcamSSRAM ----------------> ***D*X**IR**   not configured    n/a
  28) TestAsicMemory ------------------> ***D*X**IR**   not configured    n/a
  29) TestNetflowTcam -----------------> ***D*X**IR**   not configured    n/a
  30) ScheduleSwitchover --------------> ***D****I***   not configured    n/a
  31) TestFirmwareDiagStatus ----------> M**N****I***   not configured    n/a
  32) TestAsicSync --------------------> ***N****A***   000 00:00:15.00   10

Diagnostics test suite attributes: (same legend as on the previous slide)

Pay extra attention to memory tests: memory tests can take hours to complete,
and a reset is required after running these tests!
Generic Online Diagnostics
Catalyst GOLD Operation Example
7600# show diagnostic result mod 7
Current bootup diagnostic level: complete
Module 7: CEF720 24 port 1000mb SFP

Overall Diagnostic Result for Module 7 : MINOR ERROR


Diagnostic level at card bootup: complete

Test results: (. = Pass, F = Fail, U = Untested)

1) TestTransceiverIntegrity:

Port 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
----------------------------------------------------------------------------
U U . U . . U U . . U U . . U U U U U U U U U U

2) TestLoopback:
Port 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
----------------------------------------------------------------------------
. . . . . . . . . . . . F . . . . . . . . . . .

3) TestScratchRegister -------------> .
4) TestSynchedFabChannel -----------> .
<snip>
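To chase the port 13 loopback failure above, the per-test detail and the GOLD event log are the usual next steps. A sketch (command availability varies by release):

7600# show diagnostic result module 7 test 2 detail      ! per-test detail, incl. failure counts
7600# show diagnostic events module 7 event-type error   ! GOLD error events for this module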
Generic Online Diagnostics
Recommendations
• Bootup diagnostics:
–Set level to complete
• On demand diagnostics:
–Use as a pre-deployment tool: run complete diagnostics
before putting hardware into production environment
–Use as a troubleshooting tool when suspecting
hardware failure
• Scheduled diagnostics:
–Schedule key diagnostics tests periodically
–Schedule all non-disruptive tests periodically
• Health-monitoring diagnostics:
–Key tests run by default
–Enable additional non-disruptive tests for the features enabled in your
network: IPv6, MPLS, NAT
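Taken together, the recommendations above map to a short configuration. A minimal sketch: module numbers, test selections, and the schedule time are illustrative only.

Switch(config)# diagnostic bootup level complete                ! complete bootup diagnostics
Switch(config)# diagnostic monitor module 5 test 2              ! extra health-monitoring test
Switch(config)# diagnostic monitor interval module 5 test 2 00:00:15
Switch(config)# diagnostic schedule module 6 test non-disruptive weekly Sunday 03:00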
Q AND A
