
Data center -1

Data Center Physical Infrastructure


Data Center Protocols
What is a Data Center
• A data center is a physical facility that organizations use to
house their critical applications and data.
• A data center's design is based on a network of computing
and storage resources that enable the delivery of shared
applications and data.
What is a Data Center

• The data center is home to the


computational power, storage, and
applications necessary to support an
enterprise business.
• The data center infrastructure is central
to the IT architecture, from which all
content is sourced or passes through.
• Proper planning of the data center
infrastructure design is critical, and
performance, resiliency, and scalability
need to be carefully considered.
What is a Data Center

• Another important aspect of the data


center design is flexibility in quickly
deploying and supporting new services.
• The data center network design is
based on a proven layered approach.
• layered approach is the basic
foundation of the data center
design that seeks to improve
scalability, performance, flexibility,
resiliency, and maintenance
What defines a modern data center?

• Modern data centers are very different than they were just a short time ago.
• Infrastructure has shifted from traditional on-premises physical servers to virtualized infrastructure.
• In this era, the modern data center is wherever its data and applications are.
• It stretches across multiple public and private clouds to the edge of the network via mobile devices and embedded computing.
• The data center must reflect the intentions of users and applications.
Why are data centers important to business?

• Email and file sharing
• Productivity applications
• Customer relationship management (CRM) and enterprise resource planning (ERP)
• Big data, artificial intelligence, and machine learning
• Communications and collaboration services
What Are The Core Components Of A Data Center?

• Network infrastructure: This connects servers (physical and virtualized), data


center services, storage, and external connectivity to end-user locations.
• Storage infrastructure: Data is the fuel of the modern data center. Storage
systems are used to hold this valuable commodity.
• Computing resources: Applications are the engines of a data center. These
servers provide the processing, memory, local storage, and network
connectivity that drive applications.
What Is In A Data Center Facility?
What Are The Standards For Data Center Infrastructure?

There are four categories of data center tiers rated for levels of
redundancy and fault tolerance.

❑ Tier 1: Basic site infrastructure
• Offers limited protection against physical events.
• It has single-capacity components and a single,
nonredundant distribution path.

❑ Tier 2: Redundant-capacity component site infrastructure
• Offers improved protection against physical events.
• It has redundant-capacity components and a single,
nonredundant distribution path.
What Are The Standards For Data Center
Infrastructure?
❑ Tier 3: Concurrently maintainable site infrastructure
• Protects against virtually all physical events.
• Provides redundant-capacity components and multiple
independent distribution paths.
• Each component can be removed or replaced without
disrupting services to end users.

❑ Tier 4: Fault-tolerant site infrastructure
• Provides the highest levels of fault tolerance and redundancy.
• Redundant-capacity components and multiple independent
distribution paths enable concurrent maintainability and one
fault anywhere in the installation without causing downtime.
Types Of Data Centers
• Enterprise data centers
✓ These are built, owned, and operated by companies and
optimized for their end users.
✓ Most often they are housed on the corporate campus.
• Managed services data centers
✓ Managed by a third party.
✓ The company leases the equipment and infrastructure
instead of buying it.
• Colocation data centers
✓ A company rents space within a data center owned by others
and located off company premises.
✓ The colocation data center hosts the infrastructure: building,
cooling, bandwidth, security.
✓ The company provides and manages the components,
including servers, storage, and firewalls.
Types Of Data Centers
• Cloud data centers
✓off-premises form of data center,
✓data and applications are hosted by a cloud services provider
such as Amazon Web Services (AWS), Microsoft (Azure), or
IBM Cloud.
Basic Layered Design
▪ The layers of the data center design are the
core, aggregation, and access layers.

▪ Core layer: Provides the high-speed packet


switching backplane for all flows going in
and out of the data center.
▪ provides connectivity to multiple
aggregation modules.
▪ provides a resilient Layer 3 routed
fabric with no single point of failure.
▪ runs an interior routing protocol, such
as OSPF or EIGRP, and load balances
traffic between the campus core and
aggregation.
Basic Layered Design
▪ Aggregation layer: provide important functions,
such as service module integration, Layer 2 domain
definitions, spanning tree processing, and default
gateway redundancy.
▪ provide services, such as:
▪ content switching, firewall, SSL offload,
intrusion detection, network analysis,
and more
▪ Access layer: where the servers physically attach
to the network.
▪ Switches provide both Layer 2 and Layer 3
topologies, fulfilling the various server
broadcast domain or administrative
requirements.
Data Center Design Models

▪ Single tier: data centers operate with non-


redundant capacity components and a single, non-
redundant distribution path.
▪ no default backup system in place
▪ Inexpensive.
▪ usually suit small businesses that have
minimal data hosting needs.
Data Center Design Models

▪ Multi-Tier Model: is the most common model used in the


enterprise today. Data centers operate with redundant
capacity components and redundant distribution path.
▪ Scalable: just add more edge switches when
required.
▪ Resilient design: multiple paths.
▪ Efficient and well-structured design: More user
ports; fewer ISL ports.
CABLING FOR DATA CENTERS
• AC/DC power, ground, copper and fiber optic are the main types of
network cabling used in data centers.
• The interface that is available on the equipment used in the data
center is the primary means for determining which type of cabling
should be used.
• The network data cabling may also be selected based upon the
bandwidth requirements of the equipment being used in the data
center.
• Cabling within a data center may be either:
✓ structured
✓ unstructured.
CABLING FOR DATA CENTERS
❖ Structured Cabling:
✓ uses predefined standards based design
with predefined connection points and
pathways.
✓ specified by the bandwidth requirements of
the system & tested to ensure proper
performance.
✓ The cables will be well organized and
labeled.
✓ may take longer to install and have a higher
initial cost.
✓ operational cost will ultimately be lower and
the life cycle of the system will be longer.
CABLING FOR DATA CENTERS
❖ Unstructured Cabling:
✓ do not use predefined standards,
connection points, or pathways.
✓ Lead to cooling issues because the
airflow is typically restricted
✓ Lead to higher energy cost.
✓ managing system becomes difficult (no
plan to change cable locations or run
new cabling)
✓ may result in extended down time
✓ less time to install and have a lower
initial cost
✓ operational cost will be higher and the
life cycle will be shorter
Overview of Data Center Cabling Types

Application | Communication Standard | Cable Type | Connector Type
10/100 Mbps (100Base-TX) | Ethernet | Cat 5e, Cat 6, Cat 6a, Cat 7, Cat 7a | RJ45
1000 Mbps (Gigabit, 1000Base-T) | Gigabit Ethernet | Cat 5e, Cat 6, Cat 6a, Cat 7, Cat 7a | RJ45
10 Gbps (10GBase-T) | 10Gig Ethernet | Cat 6a, Cat 7, Cat 7a | RJ45, GG45, TERA
40 or 100 Gbps | 40 or 100Gig Ethernet | Cat 7a | GG45, TERA
InfiniBand, Fiber Channel | High-Speed Ethernet | Twinaxial or Fiber | QSFP, SFP+, 10G-CX4, LC, SC, ST
Fiber Optic | High-Speed Ethernet | Multimode (high-bandwidth, short distance), Single-mode (high-speed, long distance) | LC, SC, ST, FDDI, MTP, MTRJ, FC, etc.
Overview of Data Center Cabling Types

Connector Renderings
Questions To Ask Your Data Center Cable
Supplier
Asking these questions will allow you to have a meaningful
conversation with your provider about cable products and directions:
• Cable types: On what basis are you recommending various cable
solutions to us? What are your recommendations about choices
between passive copper, active copper and active optical cabling?
• Power consumption and cooling: How can we minimize the cooling
obstacles caused by cabling?
• Electromagnetic interference: What steps do you take to control
EMI from your copper cable products?
• Cable manufacturers: Can you tell us who manufactures the cables
you recommend to us?
Data Center Protocols
Spanning Tree Protocol
Data Center Protocols
Spanning Tree Protocol (STP)

• Spanning Tree Protocol (STP) is a Layer 2 link management protocol that


provides path redundancy while preventing loops in the network.
• For a Layer 2 Ethernet network to function properly, only one active path can
exist between any two stations.
• Multiple active paths among end stations cause loops in the network.
• End stations might receive duplicate messages.
• Switches might also learn end-station MAC addresses on multiple Layer
2 interfaces. These conditions result in an unstable network.
Data Center Protocols
Spanning Tree Protocol (STP)

• The STP uses a spanning-tree algorithm to select one switch of


a redundantly connected network as the root of the spanning
tree.
• The algorithm calculates the best loop-free path through a switched
Layer 2 network by assigning a role to each port based on the role of
the port in the active topology:
• Root—A forwarding port elected for the spanning-tree topology
• Designated—A forwarding port elected for every switched LAN segment
• Alternate—A blocked port providing an alternate path to the root bridge in the
spanning tree
• The switch that has all of its ports as the designated role or as the backup role is the root
switch. The switch that has at least one of its ports in the designated role is called the
designated switch.
Spanning Tree
Redundancy at OSI Layers 1 and 2
• Switched networks commonly have redundant paths and
even redundant links between the same two devices.
• Redundant paths eliminate a single point of failure in order
to improve reliability and availability.
• Redundant paths can cause physical and logical Layer 2
loops.
• Spanning Tree Protocol (STP) is a Layer 2 protocol that
helps especially when there are redundant links.
• Layer 2 loop issues
• Mac database instability – copies of the same frame being received on different ports.
• Broadcast storms – broadcasts are flooded endlessly causing network disruption.
• Multiple frame transmission – multiple copies of unicast frames delivered to the same destination.
STP Operation
Spanning Tree Algorithm: Port Roles
• Root bridge – one Layer 2 device in a switched network.
• Root port – one port on a switch that has the lowest
cost to reach the root bridge.
• Designated port – selected on a per-segment (each
link) basis, based on the cost to get back to root bridge
for either side of the link.
• Alternate port – (RSTP only) a blocked port that provides an alternate
path toward the root bridge (a backup for the root port).
• Backup port – (RSTP only) a blocked port that acts as a backup for the
designated port on the same segment.
STP Operation

Spanning Tree Algorithm: Root Bridge


• Lowest bridge ID (BID) becomes root bridge
• Originally BID had two fields: bridge priority and
MAC address
• Bridge priority default is 32,768 (can change)
• Lowest MAC address (if bridge priority is not
changed) becomes determinant for root bridge.
STP
Spanning Tree Algorithm:
Root Path Cost
• Root path cost is used to determine the role of the port
and whether or not traffic is blocked.
• Can be modified with the spanning-tree cost
interface command.
STP Operation
Port Role Decisions for RSTP
• S1 is root
bridge
STP Operation
Port Role Decisions for RSTP (Cont.)

Which switch (S3 or S2) has


the lowest BID?

• After S3 and S2 exchange BPDUs, STP determines that the F0/2 port on S2
becomes the designated port and the S3 F0/2 port becomes the alternate port,
thus going into the blocking state so there is only one path through the switched
network.
STP Operation
Determine Designated and Alternate Ports

Remember port states are based on path cost back to root


bridge.
STP Operation
802.1D BPDU Frame Format

Field | Description
Protocol ID | Type of protocol being used; set to 0
Version | Protocol version; set to 0
Message type | Type of message; set to 0
Flags | Topology change (TC) bit signals a topology change; the topology change acknowledgment (TCA) bit is used when a configuration message with the TC bit set has been received
Root ID | Root bridge information
Root path cost | Cost of the path from the switch sending the configuration message to the root bridge
Bridge ID | ID of the bridge sending the message; includes priority, extended system ID, and MAC address
Port ID | Port number from which the BPDU was sent
Message age | Amount of time since the root bridge sent the configuration message
Max age | When the current configuration message will be deleted
Hello time | Time between root bridge messages
Forward delay | Time the bridges should wait before going to a new state
STP Operation
802.1D BPDU Propagation and
Process
1. When a switch is powered on, it assumes it is the root
bridge until BPDUs are sent and STP calculations are
performed. S2 sends out BPDUs.
2. S3 compares its root ID with the BPDU from S2. S2 is
lower so S3 updates its root ID.
STP Operation
802.1D BPDU Propagation and
Process (Cont.)
3. S1 receives the same information from S2 and
because S1 has a lower BID, it ignores the information
from S2.
4. S3 sends BPDUs out all ports indicating that S2 is root
bridge.
STP Operation
802.1D BPDU Propagation and
Process (Cont.)
5. S2 compares the info from S3 so S2 still thinks it is
root bridge.
6. S1 gets the same information from S3 (that S2 is root
bridge), but because S1 has a lower BID, the switch
ignores the information in the BPDU.
STP Operation
802.1D BPDU Propagation and
Process (Cont.)
7. S1 now sends out BPDUs out all ports. The
BPDU contains information designated S1 as
root bridge.
STP Operation
802.1D BPDU Propagation and
Process (Cont.)
8. S3 compares the info from S1 so S3 now sees that the BID from S1 is lower
than its stored root bridge information which is currently showing that S2 is
root bridge. S3 changes the root ID to the information received from S1.
9. S2 compares the info from S1 so S2 now sees the BID from S1 is lower than its
own BID. S2 now updates its own information showing S1 as root bridge.

Remember that after root bridge has been determined, the


other port roles can be determined because those roles are
determined by total path cost back to root bridge.
STP Operation
Extended System ID (Remember: lowest BID becomes root)

• If priorities are all set to the default, lowest MAC address is the
determining factor in lowest BID.
• The priority value can be modified to influence root bridge elections.
3.2 Types of Spanning Tree Protocols

Varieties of Spanning Tree Protocols
Types of Spanning Tree Protocols
STP Type Description
802.1D 1998 - Original STP standard
CST One spanning-tree instance
PVST+ Cisco update to 802.1D; each VLAN has its own
spanning-tree instance
802.1D 2004 – Updated bridging and STP standard

802.1w (RSTP) Improves convergence by adding new roles to ports and


enhancing BPDU exchange
Rapid PVST+ Cisco enhancement of RSTP using PVST+
802.1s (MSTP) Multiple VLANs can have the same spanning-tree
instance
Varieties of Spanning Tree Protocols
Characteristics of Spanning Tree Protocols
STP Type Standard Resources Needed Convergence Tree Calculation

STP 802.1D Low Slow All VLANs


PVST+ Cisco High Slow Per VLAN
RSTP 802.1w Medium Fast All VLANs
Rapid PVST+ Cisco Very high Fast Per VLAN

MSTP 802.1s Medium or high Fast Per instance


Varieties of Spanning Tree Protocols
Overview of PVST+
• Original 802.1D defines a common
spanning tree
• One spanning tree instance for the
switched network (no matter how many
VLANs)
• No load sharing
• One uplink must block for all VLANs
• Low CPU utilization because only one
instance of STP is used/calculated
• Cisco PVST+ - each VLAN has its own
spanning tree instance
• One port can be blocking for one VLAN
and forwarding for another VLAN
• Can load balance
• Can stress the CPU if a large number of
VLANs are used
Varieties of Spanning Tree Protocols
Port States and PVST+ Operation
Operation allowed | Blocking | Listening | Learning | Forwarding | Disabled
Can receive/process BPDUs | Yes | Yes | Yes | Yes | No
Can forward data frames received on an interface | No | No | No | Yes | No
Can forward data frames switched from another interface | No | No | No | Yes | No
Can learn MAC addresses | No | No | Yes | Yes | No
Varieties of Spanning Tree Protocols
Extended System ID and PVST+ Operation (Remember that the BID is a unique ID)

• The extended system ID field ensures each


switch has a unique BID for each VLAN.
• The VLAN number is added to the priority
value.
• Example – VLAN 2 priority is 32770 (default
value of 32768 plus the VLAN number of 2
equals 32770)
• Can modify the priority number to influence
the root bridge decision process
• Reasons to select a particular switch as root
bridge
• Switch is positioned such that most traffic
patterns flow toward this particular switch
• Switch has more processing power (better CPU)
• Switch is easier to access and manage remotely
Varieties of Spanning Tree Protocols
Overview of Rapid PVST+
• Rapid PVST+ speeds up STP recalculations
and converges quicker
• Cisco version of RSTP
• Two new port types
• Alternate port (DIS)
• Backup port
• Independent instance of RSTP runs for each
VLAN
• Cisco features such as UplinkFast and
BackboneFast are not compatible with
switches that run RSTP
Varieties of Spanning Tree Protocols
RSTP BPDUs
• RSTP uses type 2, version 2 BPDUs
• Original version was type 0, version 0
• A switch using RSTP can work with and
communicate with a switch running the
original 802.1D version
• BPDUs are used as a keepalive mechanism
• 3 missed BPDUs indicates lost
connectivity
Varieties of Spanning Tree Protocols
Edge Ports
• Has an end device connected – NEVER another switch
• Immediately goes to the forwarding state
• Functions similar to a port configured with Cisco PortFast
• Use the spanning-tree portfast command
Varieties of Spanning Tree Protocols
Link Types
• Point-to-Point – a port in full-duplex mode connecting from one switch to
another switch or from a device to a switch
• Shared – a port in half-duplex mode connecting a hub to a switch

Point-to-Point
Shared
PVST+ Configuration
Catalyst 2960 Default Configuration
Feature | Default Setting
Enable state | Enabled on VLAN 1
Spanning-tree mode | PVST+ (Rapid PVST+ and MSTP are disabled)
Switch priority | 32768
Spanning-tree port priority (configurable on a per-interface basis) | 128
Spanning-tree port cost (configurable on a per-interface basis) | 1000 Mb/s: 4; 100 Mb/s: 19; 10 Mb/s: 100
Spanning-tree VLAN port priority (configurable on a per-VLAN basis) | 128
Spanning-tree VLAN port cost (configurable on a per-VLAN basis) | 1000 Mb/s: 4; 100 Mb/s: 19; 10 Mb/s: 100
Spanning-tree timers | Hello time: 2 seconds; Forward-delay time: 15 seconds; Maximum-aging time: 20 seconds
Transmit hold count | 6 BPDUs
PVST+ Configuration
Configuring and Verifying the Bridge ID
• Two ways to influence the root bridge election
process
• Use the spanning-tree vlan x root primary or
secondary command.
• Change the priority value by using the spanning-
tree vlan x priority x command.
• Verify the bridge ID and root bridge election by
using the show spanning-tree command.
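
A minimal IOS sketch of both approaches (the VLAN number and priority value here are hypothetical examples, not taken from the slides):

    S1(config)# spanning-tree vlan 10 root primary
    ! or set an explicit priority (must be a multiple of 4096)
    S1(config)# spanning-tree vlan 10 priority 24576
    ! verify the local bridge ID and the elected root bridge
    S1# show spanning-tree vlan 10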
PVST+ Configuration
PortFast and BPDU Guard
• PortFast is used on ports that
have end devices attached.
• Puts a port in the forwarding state
• Allows DHCP to work properly
• BPDU Guard disables a port that
has PortFast configured on it if a
BPDU is received
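
A minimal IOS sketch for an access port facing an end device (the interface number is a hypothetical example):

    S1(config)# interface FastEthernet0/5
    S1(config-if)# switchport mode access
    S1(config-if)# spanning-tree portfast
    S1(config-if)# spanning-tree bpduguard enable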
PVST+ Configuration
PVST+ Load Balancing

PVST+ Configuration
Packet Tracer – Configuring PVST+
Rapid PVST+ Configuration
Spanning Tree Mode
• Rapid PVST+ supports RSTP on a per-VLAN
basis.
• Default on a 2960 is PVST+.
• The spanning-tree mode rapid-pvst puts a
switch into Rapid PVST+ mode.
• The spanning-tree link-type point-to-point
interface command designates a particular port
as a point-to-point link (does not have a hub
attached).
• The clear spanning-tree detected-protocols
privileged mode command is used to clear STP.
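
A minimal IOS sketch tying these commands together (the interface number is a hypothetical example):

    S1(config)# spanning-tree mode rapid-pvst
    S1(config)# interface FastEthernet0/1
    S1(config-if)# spanning-tree link-type point-to-point
    S1(config-if)# end
    S1# clear spanning-tree detected-protocols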
Data Center Protocols
Port Channel
Port Channel
Overview

• Data center network topology is built using multiple switches.


• These switches are connected to each other using physical network links.
• When you connect two devices using multiple physical links, you can group these
links to form a logical bundle.
• The two devices connected via this logical group of ports see it as a single, big
network pipe between the two devices.
• This logical group is called EtherChannel, or port channel, and physical interfaces
participating in this group are called member ports of the group.
• A port channel is an aggregation of multiple physical interfaces that creates a
logical interface.
Port Channel
Overview

• On the Nexus 5000 Series


switch you can bundle up
to 16 links into a port
channel.
• On the Nexus 7000 Series
switch, you can bundle
eight active ports on an M-
Series module, and up to
16 ports on the F-Series
module.
Port Channel
Overview

• When a port channel is created, you will see a new interface in the switch
configuration.
• This new interface is a logical representation of all the member ports of the
port channel.
• The port channel interface can be configured with its own speed,
bandwidth, delay, IP address, duplex, flow control, maximum transmission
unit (MTU), and interface description.
• You can also shut down the port channel interface, which will result in
shutting down all member ports.
Port Channel
Advantages

There are several benefits of using port channels, and because of these benefits
you will find it commonly used within data center networks. Some of these
benefits are as follows:

• Increased capacity: By combining multiple Ethernet links into one logical link, the capacity of the link
can be increased.
• High availability: In case of a physical link failure, the port channel continues to operate as long as at
least one member link is alive. Therefore, it automatically increases availability of the network.
• Load balancing: The switch distributes traffic across all operational interfaces in the port channel. This
enables you to distribute traffic across multiple physical interfaces, increasing the efficiency of your
network.
• Simplified network topology: it simplifies the network topology by avoiding the STP calculation and
reducing network complexity by reducing the number of links between switches.
Port Channel
Port Channel Compatibility Requirements

• To bundle multiple switch interfaces into a port channel, these interfaces must
meet the compatibility requirements.
• Speed,
• Duplex,
• Flow-control,
• Port mode,
• VLANs,
• MTU,
• Media type
• In case of an incompatible attribute, port channel creation will fail
Port Channel
Overview

• On the Nexus platform, you can use the show port-channel compatibility-
parameters command to see the full list of compatibility checks.
• If you configure a member port with an incompatible attribute, the software
suspends that port in the port channel.
• You can force ports with incompatible parameters to join the port channel if
the following parameters are the same: Speed, Duplex and Flow-control.
EtherChannel Operation
Port Aggregation Protocol
• EtherChannels can be formed by using PAgP or LACP protocol
• PAgP (“Pag-P”) Cisco-proprietary protocol
Port Channel
Port Aggregation Protocol
• EtherChannels can be formed by using PAgP or LACP protocol
• PAgP (“Pag-P”) Cisco-proprietary protocol

Nexus switches do not support Cisco proprietary


Port Aggregation Protocol (PAgP) to create port
channels.
Port Channel
Link Aggregation Control Protocol

• LACP multivendor environment

• On Nexus 7000 switches, LACP enables


you to configure up to 16 interfaces into a
port channel.
• On M Series modules, a maximum of 8
interfaces can be in active state, and a
maximum of 8 interfaces can be placed in
a standby state.
• Starting from NX-OS Release 5.1, you can
bundle up to 16 active links into a port
channel on the F Series module.
Port Channel
Link Aggregation Control Protocol
Port Channel
Configuration

Port channel configuration on the Cisco Nexus switches includes the following
steps:
1. Enable the LACP feature. This step is required only if you are using active mode or passive
mode.
2. Configure the physical interface of the switch with the channel-group command and specify
the channel number. You can also specify the channel mode on, active, or passive within
the channel-group command. This command automatically creates an interface port
channel with the number that you specified in the command.
3. Configure the newly created interface port channel with the appropriate configuration, such
as description, trunk configuration, allowed VLANs, and so on.
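
A minimal NX-OS sketch of these steps using LACP active mode (the interface range, channel number, and VLAN list are hypothetical examples):

    feature lacp
    interface ethernet 1/1-2
      switchport mode trunk
      channel-group 10 mode active
    interface port-channel 10
      description Uplink to aggregation switch
      switchport mode trunk
      switchport trunk allowed vlan 10,20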
Port Channel
Configuration
Port Channel
Load Balance

• Nexus switches distribute the traffic across all operational ports in a port
channel.
• The load balancing is done using a hashing algorithm that takes addresses in
the frame as input and generates a value that selects one of the links in the
channel.
• This provides load balancing across all member ports of a port channel.
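
A sketch of inspecting and changing the hash inputs; the exact keywords of the load-balance command vary by Nexus platform, so treat the configuration line as an assumed example:

    switch# show port-channel load-balance
    switch(config)# port-channel load-balance ethernet source-dest-ip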
Port Channel
Verifying The Port Channel Configuration

• Several show commands are available on the Nexus switch to check port channel configuration. These
commands are helpful to verify port channel configuration and troubleshoot the port channel issue
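
For example, the following show commands are commonly used (the port-channel number is a hypothetical example):

    switch# show port-channel summary
    switch# show port-channel traffic
    switch# show port-channel compatibility-parameters
    switch# show interface port-channel 10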
Port Channel
Virtual Port Channel vPC
• A port channel bundles multiple physical links into a logical link. All the member ports
of a port channel belong to the same network switch.
• A vPC enables the extension of a port channel across two physical switches. These two
switches work together to create a virtual domain, so a port channel can be extended
across the two devices within this virtual domain.
• It means that member ports in a virtual port channel can be from two different
network switches.
• In Layer 2 network design, Cisco vPC technology allows dual-homing of a downstream
device to two upstream switches.
• The upstream switches present themselves to the downstream device as one switch
from the port channel and Spanning Tree Protocol (STP) perspective.
Port Channel
Virtual Port Channel vPC
Port Channel
Virtual Port Channel vPC
• The limitation of the classic port channel is that it
operates between only two devices.
• In large networks with redundant devices, the
alternative path is often connected to a
different network switch in a topology that
would cause a loop.
• Virtual Port Channel (vPC) addresses this
limitation by allowing a pair of switches acting as a
virtual endpoint, so it looks like a single logical
entity to port channel–attached devices.

• The two switches that act as the logical port channel


endpoint are still two separate devices.
• These switches collaborate only for the purpose of
Figure 1-9 Virtual Port Channel Bi-Section Bandwidth
creating a vPC. This environment combines the
benefits of hardware redundancy with the benefits of
port channel.
Port Channel
Benefits Virtual Port Channel vPC

Virtual Port Channel provides the following benefits:


• Enables a single device to use a port channel across two upstream switches
• Eliminates STP blocked ports
• Provides a loop-free topology
• Uses all available uplink bandwidth
• Provides fast convergence in the case of link or device failure
• Provides link-level resiliency
• Helps ensure high availability
Port Channel
Components of Virtual Port Channel vPC

Figure 1-10 Virtual Port Channel Components


• In vPC configuration, a downstream network device
that connects to a pair of upstream Nexus switches sees
them as a single Layer 2 switch.
• These two upstream Nexus switches operate
independently with their own control, data, and
management planes.
• The data planes of these switches are modified to
ensure optimal packet forwarding in vPC configuration.
• The control planes of these switches also exchange
information so it appears as a single logical Layer 2
switch to a downstream device.
Reading

• Virtual Port Channel VPC (Continue ..)


• Fabric Path
• Dynamic Fabric Automation DFA
• Overlay Transport Virtualization OTV

All this material will be available on Blackboard (Course
Files > Readings > Chapter-1).
Best Wishes

Data Center -1
Layer 3 Switching Features in Data Center

Eng. Sami Althagafi
salthaqafi@tvtc.gov.sa
Fall 2020/2021
What is a Layer 3 Switch?
• A layer 3 switch combines the functionality of a switch
and a router.
• It acts as a switch to connect devices that are on the
same subnet or virtual LAN at lightning speeds.
• It has IP routing intelligence built into it to double up as
a router.
• It can support routing protocols, inspect incoming
packets, and can even make routing decisions based on
the source and destination addresses. This is how a
layer 3 switch acts as both a switch and a router.
• Often referred to as a multilayer switch, a layer 3 switch
adds a ton of flexibility to a network.
Features Of A Layer 3 Switch
The features of a layer 3 switch are:

• Typically comes with 24 or more Ethernet ports, but no WAN interface.


• Acts as a switch to connect devices within the same
subnet.
• Switching algorithm is simple and is the same for most
routed protocols.
• Performs on two OSI layers — layer 2 and layer 3.
Purpose Of A Layer 3 Switch
• Layer 2 switches work well when there is low to medium traffic in VLANs. But these switches struggle
as traffic increases. So, it became necessary to augment layer 2’s functionality.
• One option was to use a router instead of a switch, but then routers are slower than switches, so this
could lead to slower performance.
• What about implementing a router within a switch to overcome this downside?

✓ Not the ideal option, because layer 2 switches operate only on
the Ethernet MAC frame while layer 3 handles multiple routing
protocols.
✓ Because that was too complicated, the idea of a layer 3 switch
came up: a switch that acts as a router, with fast forwarding
done in the underlying hardware.
Purpose Of A Layer 3 Switch
✓ This is why the main difference between layer 3
switches and routers lies in the hardware.
o A layer 3 switch’s hardware has a mix of
traditional switches and routers, except that
the routers’ software logic is replaced with
integrated circuit hardware to improve
performance.
o Also, a layer 3 switch’s router will not have WAN
ports and other WAN features you’ll typically see in
a traditional router.
Benefits Of A Layer 3 Switch
o Support routing between virtual LANs (VLANs).
o Improve fault isolation.
o Simplify security management.
o Reduce broadcast traffic volumes.
o Ease the configuration process for VLANs, as a separate router isn’t required
between each VLAN.
o Separate routing tables, and as a result, segregate traffic better.
o Simplify troubleshooting, as fixing problems at Layer 2 is tedious and time-
consuming.
o Support flow accounting and high-speed scalability.
o Lower network latency as a packet doesn’t have to make extra hops to go through a
router.
Disadvantages Of A Layer 3 Switch
• Cost
✓ It costs much more than a traditional switch and configuring and administering these switches also requires
more effort.
✓ An organization should be ready to spend extra resources to set up layer 3 switches.
• Limited application
✓ for large intranet environments with many device subnets and traffic
• Lack of WAN functionality
✓ Both routers and layer 3 switches are used for routing traffic within and outside your organization.
• Multiple tenants and virtualization
✓ Layer 3 routing is relatively slower than layer 2; this can be an issue when you want to span VLAN over
multiple switches for supporting multiple tenants and virtualization.
• Lack of flexibility
✓ One VLAN will be associated with one switch and can’t be used on other switches.
✓ This limitation means you have to plan well to avoid one LAN from using multiple switches.
First Hop Redundancy
Protocol
What is a Default Gateway
• The default gateway (DG) is a critical component in networking because it provides the function of
forwarding packets to different subnets.
✓ It is important to understand that the DG is not leveraged when two hosts are in the same subnet and want
to communicate with each other.
✓ The configuration of the default gateway is a key component in designing a network, just like it is with IP
allocation.
o because it allows the hosts in the network to only have to configure the default gateway instead of the
entire routing table or network topology.
• It is important that every network engineer understands how the hosts in the network know whether
or not to send packets to the default gateway. This is based on the IP addresses and subnets
configured in the host.
✓ In order to understand how this function is performed, you need to understand some binary operations.
What is a Default Gateway

• The binary AND and XOR operations taking 2 bits and returning 1 bit.
▪ The AND operation will return 1 only if both inputs are 1;
otherwise, the operation will return 0.
▪ In the case of the XOR operation, the result will be 1 if and only
if one of the inputs is 1 and the other is 0; otherwise, if both inputs
are the same, the operation returns 0.
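
A worked example with hypothetical addresses: assume Host A is 172.16.1.9 with mask 255.255.255.0 and the destination, Host B, is 172.16.1.20. Host A XORs the two IP addresses and then ANDs the result with its subnet mask:

    Host A       172.16.1.9    = 10101100.00010000.00000001.00001001
    Host B       172.16.1.20   = 10101100.00010000.00000001.00010100
    XOR result                   00000000.00000000.00000000.00011101
    Subnet mask  255.255.255.0 = 11111111.11111111.11111111.00000000
    AND result                   00000000.00000000.00000000.00000000

Because the final AND result is all zeros, Host B is in the same subnet, so Host A sends the frame directly to Host B rather than to the default gateway.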
What is a Default Gateway

• The network stack on Host A manipulates the bits to decide
whether the destination is in the same subnet or not.
• Because the final AND result is 0, Host B (the destination) is
local, so there is no need to send the packets to the DG.
• Host A will then build the frame with the MAC address of Host B
as the destination. Host A learns that MAC address via ARP.
What is a Default Gateway
What is a Default Gateway
▪ The default gateway becomes critical when
designing networks because every host in the
network must be configured with a default
gateway in order to be able to communicate
outside its own LAN.
▪ There is only a single default gateway.
Therefore, the question you should be asking
yourself is,
✓ what happens if the default gateway
fails?
✓ How can we introduce redundancy into
the environment and maintain the default
gateway?
First-Hop Redundancy Protocol
▪ Because of the importance of the default gateway in the network, it is equally important to provide a
level of redundancy for the hosts in case of failures.
▪ First-Hop Redundancy Protocol (FHRP) is the networking protocol that provides the level of
redundancy needed in the network in order to provide redundancy for the default gateway by allowing
two or more routers to provide backup in case of failures.
▪ list of the FHRPs
o Hot Standby Routing Protocol (HSRP) for IPv4
o Virtual Router Redundancy Protocol (VRRP)
o Gateway Load Balancing Protocol (GLBP)
Hot Standby Routing Protocol (HSRP)
▪ HSRP was developed by Cisco in the middle of the 1990s to provide the level of redundancy needed for
high-demand networks.
▪ The idea behind HSRP is to provide a redundant mechanism for the default gateway in case of any
failures by ensuring the traffic from the host is immediately switched over to the redundant default
gateway.
▪ HSRP works by sharing the virtual IP (VIP) address and the MAC address between two or more
routers in order to act as a “single virtual default gateway.”
▪ An election process occurs whereby the router with the highest priority will be the “active default
gateway.”
✓ By default, this priority is set to 100.
✓ It is important to know that the host must configure its default gateway to the VIP address via
static configuration or DHCP.
Hot Standby Routing Protocol (HSRP)
▪ The routers or members participating in the same HSRP
group have a mechanism to exchange their status in
order to guarantee that there is always a default gateway
available for the hosts in the local subnet.
✓ The HSRP hello packets are sent to a multicast address over UDP (port 1985).
✓ The active router continuously sends these hello packets; if the active
router stops sending them, the standby router will become active after a
configurable period of time (the hold time).
✓ It is important to mention that the hosts will never notice a change in the
network and will continue forwarding traffic as they were previously doing.
Figure 5: HSRP Topology
Hot Standby Routing Protocol (HSRP)
▪ To enable HSRP in Cisco NX-OS, feature hsrp must be
enabled globally

▪ HSRP Versions

Figure 5: HSRP Topology
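
A minimal NX-OS sketch for the switch intended to be the active gateway, assuming hosts on VLAN 10 use 10.1.1.1 as their default gateway (VLAN, addresses, group number, and priority are hypothetical examples):

    feature hsrp
    interface vlan 10
      ip address 10.1.1.2/24
      hsrp version 2
      hsrp 10
        ip 10.1.1.1
        priority 110
        preempt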


Hot Standby Routing Protocol (HSRP)
• Based on the HSRP version configured, the
routers will start sending and receiving HSRP
hello packets
✓ The active router will send the HSRP hello packets
to 224.0.0.2 or 224.0.0.102 (depending on the
HSRP version) with the HSRP virtual MAC address,
while the standby router(s) will source the HSRP
hellos with the interface MAC address.
✓ By default, Cisco NX-OS supports HSRP version 1,
and the packet formats are different between the
two versions.
✓ For networks that are running IPv4, the user must
decide which version of HSRP they want to use.
Figure 5: HSRP Topology
Hot Standby Routing Protocol (HSRP)
• HSRP Messages: Op Code
o The HSRP protocol defines three types of op-codes in the type of
message that is contained in the packet:
✓ Hello: This is op-code 0; hello messages are sent by the routers
participating in HSRP to indicate that they are capable of becoming the
active or standby router.
✓ Coup: This is op-code 1; coup messages are sent by the standby router
when the router wishes to become the active router.
✓ Resign: This is op-code 2; this particular message is sent by the active
router when it no longer wants to be the active router.
o Here are some cases when a router will send these types of messages:
✓ The active router is about to shut down
✓ When the active router receives a hello packet with a higher priority
✓ When the active router receives a coup message
Hot Standby Routing Protocol (HSRP)
• HSRP Authentication:
o In order to provide security from HSRP spoofing, HSRP version 2
introduces HSRP Message Digest 5 (MD5) algorithm authentication
and plain-text authentication.
o HSRP includes the IPv4 or IPv6 address in the authentication TLVs.

• HSRP Object Tracking:


o What happens if the active router’s interface Eth1/1 goes down?
o HSRP state has not changed because the active and the standby routers
are still exchanging hellos.
o The hosts in the LAN will still continue sending packets to the active
router, and then the active router will send the packets to the standby
router to go outside the LAN.
o Because of this failure, the packets from the hosts to the outside will have an
extra hop penalty, which is the reason why object tracking was introduced.

Figure 6: HSRP Object Tracking
Hot Standby Routing Protocol (HSRP)
• HSRP Object Tracking:
o The primary function of this feature is to track the interface of the
active routers in case of any failures.
o In case the active router’s interface Eth1/1 goes down:
o the router will detect this failure
o and decrement its own priority so that it drops below the standby
router’s priority, which (with preemption enabled on the standby)
will cause the standby router to become the active router.

• HSRP Preempt:
o What happens if the router (10.1.1.2) interface Eth1/1 comes back up again?
✓ Object tracking restores the priority of router 10.1.1.2; with the
preempt command configured, router 10.1.1.2 takes over from
router 10.1.1.3 and becomes the active router again.

Figure 6: HSRP Object Tracking
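
A minimal NX-OS sketch of interface tracking in the spirit of Figure 6 (the track object number, tracked interface, and decrement value are hypothetical examples):

    track 1 interface ethernet 1/1 line-protocol
    interface vlan 10
      hsrp 10
        priority 110
        preempt
        track 1 decrement 30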


Virtual Router Redundancy Protocol (VRRP)
▪ Virtual Router Redundancy Protocol (VRRP) is an open-standard alternative to Cisco’s HSRP,
providing almost identical functionality.
▪ VRRP specifies an election protocol that dynamically assigns responsibility for a virtual router to one of
the VRRP routers on a LAN
▪ The VRRP router controlling the IP address(es) associated with a virtual router is called the Master.
▪ The election process provides dynamic failover in the forwarding responsibility should the Master
become unavailable.
▪ The advantage gained from using VRRP is a higher availability default path without requiring
configuration of dynamic routing or router discovery protocols on every end-host.
▪ The other routers participating in the VRRP process will become the backups.
▪ VRRP master sends periodic advertisements using a multicast address of 224.0.0.18 with an IP
protocol of 112.
✓ The advertisement announcements communicate the priority and state of the master.
▪ HSRP and VRRP are very similar. The main idea is to provide an appropriate redundancy level to the
default gateway
Virtual Router Redundancy Protocol (VRRP)
▪ The hosts send their traffic to the virtual router based on a
configuration obtained statically or via DHCP.
▪ To enable VRRP in Cisco NX-OS, you must enable feature
VRRP globally
▪ The virtual MAC address for VRRP is derived from the range
0000.5e00.0000 to 0000.5e00.00ff

• VRRP Authentication:
▪ VRRP supports only plain-text authentication

• VRRP Object Tracking:


▪ Similar to HSRP, VRRP supports object tracking.
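
A minimal NX-OS sketch (VLAN, addresses, group number, and priority are hypothetical examples; note that a VRRP group on NX-OS must be explicitly enabled with no shutdown):

    feature vrrp
    interface vlan 10
      ip address 10.1.1.2/24
      vrrp 10
        address 10.1.1.1
        priority 110
        no shutdown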
Gateway Load Balancing Protocol (GLBP)
▪ Gateway Load Balancing Protocol (GLBP) is a Cisco
technology that improves the functionality of HSRP
and VRRP.
▪ GLBP introduces a load-balancing mechanism over
multiple routers.
▪ GLBP takes advantage of every router in the virtual
group in order to share the forwarding of the traffic,
which is different from HSRP and VRRP.
▪ The GLBP members communicate by using a hello
message, which uses a multicast address of
224.0.0.102 with UDP port of 3222.
Gateway Load Balancing Protocol (GLBP)
▪ GLBP elects an active virtual gateway (AVG): the member with the
highest priority becomes the AVG; if multiple gateways have the same
priority, the router with the highest real IP address becomes the AVG.
✓ The rest of the members are put into a listen state.
▪ The function of the AVG is:
1. Assign the virtual MAC address to each member of the
GLBP group. Then, each router in the virtual group will
become an active virtual forwarder (AVF) for its
assigned virtual MAC address;
✓ it will be responsible for forwarding packets sent to its
assigned virtual MAC address.
✓ This means that each member of the GLBP group becomes
the virtual forwarder (VF) for the assigned MAC address; this
is called the primary virtual forwarder (PVF).
2. Another function of the AVG is to answer the ARP
requests sent to the virtual IP address.
Gateway Load Balancing Protocol (GLBP)
▪ In Figure 20-12:
✓ the Ethernet 1/2 interface on Router 1 is the gateway for Host
1 (the AVF for virtual MAC address, vMAC1), whereas
Ethernet 2/2 on Router 2 acts as a secondary virtual
forwarder for Host 1.
✓ Ethernet 1/2 tracks Ethernet 3/1, which is the network
connection for Router 1.
✓ If Ethernet 3/1 goes down, the weighting for Ethernet 1/2
drops to 90, and Ethernet 2/2 on Router 2 preempts
Ethernet 1/2 and takes over as AVF because it has the default
weighting of 100 and is configured to preempt the AVF.

▪ To enable GLBP in Cisco NX-OS, enable feature glbp globally


▪ As with HSRP and VRRP, GLBP also supports the object-
tracking function in order to track line-protocol or IP
routing. The configuration is identical to HSRP and VRRP.
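
A minimal sketch, assuming an NX-OS release and platform that support GLBP (VLAN, addresses, group number, priority, and load-balancing method are hypothetical examples):

    feature glbp
    interface vlan 10
      ip address 10.1.1.2/24
      glbp 10
        ip 10.1.1.1
        priority 110
        preempt
        load-balancing round-robin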
IP Protocols on Nexus
Switch
IPv4 Routing Concepts

▪ IP routing delivers IP packets from the sending host to the destination


host.
✓ The complete end-to-end routing process relies on network layer logic on
hosts and on routers.
✓ The sending host uses Layer 3 concepts to create an IP packet, forwarding
the IP packet to the host’s default gateway (default router), while the
routers compare the destination address in the packet to their routing
tables, to decide where to forward the IP packet next.
✓ The routing process also relies on the data link and physical details at each
link.
▪ To move the IP packets around the TCP/IP network by encapsulating
and transmitting the packets inside data link layer frames.
Routing Process
▪ The routing process starts with the host that creates the IP
packet
✓ Step 1. If the destination is local, send directly:
A. Find the destination host’s MAC address. Use the already known
Address Resolution Protocol (ARP) table entry, or use ARP
messages to learn the information.
B. Encapsulate the IP packet in a data-link frame, with the
destination data-link address of the destination host.

✓ Step 2. If the destination is not local, send to the default gateway:


A. Find the default gateway’s MAC address. Use the already
known ARP table entry, or use ARP messages to learn the
information.
B. Encapsulate the IP packet in a data-link frame, with the
destination data-link address of the default gateway
Routing Process
▪ Routers have a little more routing work to do as compared to hosts.
Routing Process

▪ Now on to the example, with host A (172.16.1.9) sending a packet to host B (172.16.2.9).
Routing Process
▪ Router R1 processes the frame and packet, as shown with the numbers in the figure matching the same five-step
process described just before the figure, as follows:
Cisco Nexus Switch Operations with Routing

▪ The two major differences between multilayer and


Layer 2 switches are:
1. Multilayer switches traditionally have been built to
give you the ability to have multiple VLANs on a
given switch and route between them if needed
without the need for an external dedicated router.
2. Layer 3 switch can implement a switched virtual
interface (SVI) or a Layer 3 routed interface.
▪ Cisco Nexus switches have both Layer 2 and Layer 3
functionality and are considered to be multilayer
switches
Routing Protocols on Nexus Device
▪ Cisco Nexus IPv4 Routing
o To make the router ready to route packets on a particular interface:
o The router must be configured with an IP address
o and the interface must be configured such that it comes up, reaching a “line status up, line protocol up”
state.
Only at that point can routers route IP packets in and out a particular interface.
Routing Protocols on Nexus Device
▪ Cisco Nexus IPv4 Routing
o After a router can route IP packets out one or more interfaces, the router needs some routes. Routers can add
routes to their routing tables through three methods:
✓ Connected routes: Added because of the configuration of the IP address interface subcommand on the
local router.
✓ Static routes: Added because of the configuration of the IP route global command on the local router.
✓ Routing protocols: Added as a function by configuration on all routers, resulting in a process by which
routers dynamically tell each other about the network so that they all learn routes.
o A Cisco Nexus L3 switch automatically adds two routes to its routing table based on the IPv4 address configured
for an interface, assuming that the following two facts are true:
✓ The interface is in a working state.
✓ The interface has an IP address assigned through the IP address interface subcommand.
The two routes, called a direct route and a local route, route packets to the subnet directly connected to that
interface
Routing Protocols on Nexus Device
▪ Cisco Nexus IPv4 Routing
o There are two ways to configure routing on a
Cisco Nexus switch:
1) A routed interface: This is enabled by using the
no switchport command. Remember that when
using this command, we are disabling any Layer 2
functionality on an interface.
2) A switched virtual interface (SVI): You use
this when you route between VLANs and support
Layer 2 with Layer 3 simultaneously.
Routing Protocols on Nexus Device
▪ Routing Between Subnets on VLANs
o Three options exist for connecting a router to each
subnet on a VLAN:
1) Use a router, with one router LAN interface and cable
connected to the switch for each and every VLAN (typically
not used).
2) Use a router, with a VLAN trunk connecting to a LAN switch.
3) Use a Layer 3 switch.
✓ The first option requires too many interfaces and links.
✓ The third option uses a Layer 3 switch which is one
device that performs two primary functions: Layer 2
LAN switching and Layer 3 IP routing.
Routing Protocols on Nexus Device

▪ Routing Between Subnets on VLANs


o The following steps show how to configure Cisco Nexus
Layer 3 switching:
1) Enable the feature for configuring interface VLANs (feature
interface-vlan).
2) Create a VLAN interface for each VLAN for which the Layer 3
switch is routing packets (interface vlan vlan_id).
3) Configure an IP address and mask on the VLAN interface (in
interface configuration mode for that interface), enabling IPv4
on that VLAN interface (ip address address mask).
4) If the switch defaults to placing the VLAN interface in a
disabled (shutdown) state, enable the interface (no shutdown)
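
A minimal NX-OS sketch of the steps above for two VLANs (VLAN numbers and addresses are hypothetical examples):

    feature interface-vlan
    interface vlan 10
      ip address 10.1.10.1/24
      no shutdown
    interface vlan 20
      ip address 10.1.20.1/24
      no shutdown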
Routing Protocols on Nexus Device
▪ Static Route Configuration
o NX-OS allows the definition of individual static routes using the ip route global configuration command.
o Every ip route command defines a destination that can be matched, usually with a subnet ID and mask.
o The command also lists the forwarding instructions, typically listing either the outgoing interface or the next-
hop router’s IP address.
o NX-OS then takes that information and adds that route to the IP routing table
o To list the static route details only, use show ip route static command.
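
A minimal NX-OS sketch (the destination prefix and next-hop address are hypothetical examples):

    ip route 172.16.2.0/24 10.1.1.2
    switch# show ip route static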
Routing Protocols on Nexus Device
▪ Static Route Configuration
Routing Protocols on Nexus Device
▪ Static Default Route Configuration
o When a router tries to route a packet, the router
might not match the packet’s destination IP
address with any route. When that happens, the
router normally just discards the packet.
o The default route matches all packets, so that if a
packet does not match any other more specific
route in the routing table, the router can at least
forward the packet based on the default route.
o NX-OS allows the configuration of a static default
route by using special values for the subnet and
mask fields in the ip route command: 0.0.0.0 and
0.0.0.0.
✓ For example, the command ip route 0.0.0.0 0.0.0.0
vlan 16 creates a static default route on a Cisco
Nexus switch—a route that matches all IP packets—
and sends those packets out SVI VLAN 16
Routing Protocols on Nexus Device
▪ Routing Protocols
o Each routing protocol causes routers (and Layer 3 switches) to do the following:
1. Learn routing information about IP subnets from other neighboring routers.
2. Advertise routing information about IP subnets to other neighboring routers.
3. Choose the best route among multiple possible routes to reach one subnet, based on that routing protocol’s
concept of a metric
4. React and converge to use a new choice of best route for each destination subnet when the network topology
changes—for example, when a link fails

o Interior Gateway Protocols (IGPs):


✓ RIP
✓ EIGRP
✓ OSPF
Routing Protocols on Nexus Device
▪ Routing Protocols
o Comparing IGPs:
Here are a few key comparison points:
✓ The underlying routing protocol algorithm:
DV or LS.
✓ The usefulness of the metric: which route is
best based on its metric.
✓ The speed of convergence: How long does it
take all the routers to learn about a change in
the network and update their IPv4 routing
tables?
✓ Whether the protocol is a public standard or
a vendor-proprietary function.
Routing Protocols on Nexus Device
▪ Routing Protocols
o Comparing IGPs:
Routing Protocols on Nexus Device
▪ Routing Protocols
o Comparing IGPs:

▪ Reading Task:
▪ Read Chapter 18 for more information about Routing Protocols.
▪ Pages 477 to 493
▪ This Chapter will be available on Blackboard.
Routing Protocols on Nexus Device
▪ RIPv2 Configuration On Nexus
1) RIPv2 enabled via routed interfaces
2) Based on SVI being enabled as the Layer 3
interface for routing protocol participation.

▪ RIPv2 Configuration via Routed Interface


1) Enable RIP globally using feature rip.
2) Put the physical interface into a strict L3-only
mode using the no switchport command.
3) Assign an IP address to that interface.
4) Enable the routing process for RIP on the interface using the ip
router rip enterprise command (enterprise is the instance tag).
5) Make sure the interface is up using no shutdown.
6) Create the RIP instance globally using the router rip enterprise
command; the instance tag identifies which RIP instance the
interfaces use (see the sketch below).
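
A minimal NX-OS sketch of these steps (the enterprise instance tag follows the slide’s example; the interface and address are hypothetical):

    feature rip
    router rip enterprise
    interface ethernet 1/1
      no switchport
      ip address 10.1.1.1/30
      ip router rip enterprise
      no shutdown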
Routing Protocols on Nexus Device
▪ RIPv2 Configuration On Nexus
Routing Protocols on Nexus Device
▪ RIPv2 Configuration via SVI
Routing Protocols on Nexus Device
▪ RIPv2 verification

o uses show commands to validate that the neighboring Layer 3


Nexus switches are communicating with each other and
learning routing via the RIPv2 routing protocol.
o show running-config rip is an important command for ensuring
that you globally configured the instance-tag correctly and on
the interfaces you want enabled for RIP.
o show ip rip neighbor command enables you to see that the
neighbors have come up, the IP addresses with which you are
peering, and how many you have.
o All these command can be used with both ways : routed
interface and SVI.
Routing Protocols on Nexus Device
▪ EIGRP Configuration On Nexus
1) EIGRP enabled via routed interfaces OR
2) Based on SVI being enabled as the Layer 3
interface for routing protocol participation.

▪ EIGRP Configuration via Routed Interface


1) Enable EIGRP globally using feature eigrp.
2) Enable the routing process for EIGRP using the router
eigrp process_no command.
3) Optionally configure the router-id x.x.x.x under the router eigrp process.
4) Put the physical interface into a strict L3-only mode
using the no switchport command.
5) Assign an IP address to that interface.
6) Make sure the interface is up using no shutdown.
7) Enable the routing process for EIGRP under the
physical interface using ip router eigrp process_no (see the sketch below).

This figure will be used as an example for EIGRP config.
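
A minimal NX-OS sketch of these steps (process number, router ID, interface, and address are hypothetical examples):

    feature eigrp
    router eigrp 100
      router-id 1.1.1.1
    interface ethernet 1/1
      no switchport
      ip address 10.1.1.1/30
      ip router eigrp 100
      no shutdown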
Routing Protocols on Nexus Device

• Router IDs are important for identifying the router with a unique
identifier in the routing protocol.
• Each router will have a unique router ID that can be manually
configured or dynamically assigned based on a selection process.
• In NX-OS, the process for selecting the router ID is as follows:
This figure will be used as an example for EIGRP config.
Routing Protocols on Nexus Device
▪ EIGRP Configuration via SVI

This figure will be used as an example for EIGRP config.


Routing Protocols on Nexus Device
▪ EIGRP Verification using show command
• Three different show commands for validating and
troubleshooting in EIGRP:
✓ show running-config eigrp
✓ show ip eigrp
✓ sh ip eigrp neighbors 10.1.1.2
Routing Protocols on Nexus Device
▪ OSPF Configuration On Nexus
OSPF enabled via routed interfaces

This figure will be used as an example for OSPF config.

The area tag at the end of the ip router ospf 1 area 0 command
shows one difference from the RIP and EIGRP examples: each OSPF-enabled
interface must be assigned to an area. A minimal sketch follows.
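
A minimal NX-OS sketch via a routed interface (process number, router ID, interface, and address are hypothetical examples; the area is taken from the slide’s ip router ospf 1 area 0 command):

    feature ospf
    router ospf 1
      router-id 1.1.1.1
    interface ethernet 1/1
      no switchport
      ip address 10.1.1.1/30
      ip router ospf 1 area 0
      no shutdown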
Routing Protocols on Nexus Device
▪ OSPF Configuration On Nexus
OSPF enabled via SVI

This figure will be used as an example for OSPF config.


Routing Protocols on Nexus Device
▪ OSPF Verification using show commands
• Show commands for validating and
troubleshooting OSPF:
✓ show running-config ospf
✓ show ip ospf process_id
✓ sh ip ospf neighbor vlan vlan_Id
✓ sh ip ospf interface vlan vlan_Id
Routing Protocols on Nexus Device
▪ IP Multicast

✓ RIPv2 : 224.0.0.9
✓ EIGRP: 224.0.0.10
✓ OSPF: 224.0.0.5

• Multicast addresses: Frames sent to a multicast Ethernet address will be copied and
forwarded to the subset of devices on the LAN that volunteer to receive frames sent to that
specific multicast address.
Best Wishes
