Introducing Cisco
Data Center
Technologies
Volume 1
Version 1.0
Student Guide
Text Part Number: 97-3181-01
Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Europe Headquarters
Cisco Systems International BV Amsterdam,
The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this
URL: www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a
partnership relationship between Cisco and any other company. (1110R)
DISCLAIMER WARRANTY: THIS CONTENT IS BEING PROVIDED AS IS. CISCO MAKES AND YOU RECEIVE NO WARRANTIES
IN CONNECTION WITH THE CONTENT PROVIDED HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER
PROVISION OF THIS CONTENT OR COMMUNICATION BETWEEN CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL
IMPLIED WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A
PARTICULAR PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE. This learning product
may contain early release content, and while Cisco believes it to be accurate, it falls subject to the disclaimer above.
Student Guide
Welcome to Cisco Systems Learning. Through the Cisco Learning Partner Program,
Cisco Systems is committed to bringing you the highest-quality training in the industry.
Cisco learning products are designed to advance your professional goals and give you
the expertise you need to build and maintain strategic networks.
Cisco relies on customer feedback to guide business decisions; therefore, your valuable
input will help shape future Cisco course curricula, products, and training offerings.
We would appreciate a few minutes of your time to complete a brief Cisco online
course evaluation of your instructor and the course materials in this student kit. On the
final day of class, your instructor will provide you with a URL directing you to a short
post-course evaluation. If there is no Internet access in the classroom, please complete
the evaluation within the next 48 hours or as soon as you can access the web.
On behalf of Cisco, thank you for choosing Cisco Learning Partners for your
Internet technology training.
Sincerely,
Cisco Systems Learning
Table of Contents
Volume 1
Course Introduction
Overview
Learner Skills and Knowledge
Course Goal and Objectives
Course Flow
Additional References
Cisco Glossary of Terms
Cisco Online Education Resources
Cisco NetPro Forums
Cisco Learning Network
Introductions
Overview
Objectives
Traditional Isolated LAN and SAN Networks
Cisco Data Center Infrastructure
Cisco Data Center Infrastructure Topology Layout
LAN Core, Aggregation, and Access Layers
Core and Access Layers in a LAN Collapsed-Core Design
Example: Collapsed Core in Traditional Network
Example: Collapsed Core in Routed Network
Core and Edge Layers in a Data Center SAN Design
Collapsed-Core SAN Design
Summary
Module Self-Check
Module Self-Check Answer Key
Overview
Module Objectives
Overview
Objectives
Describing VDCs on the Cisco Nexus 7000 Series Switch
Verifying VDCs on the Cisco Nexus 7000 Series Switch
Navigating Between VDCs on the Cisco Nexus 7000 Series Switch
Describing NIV on Cisco Nexus 7000 and 5000 Series Switches
Summary
Virtualizing Storage
Overview
Objectives
LUN Storage Virtualization
Storage System Virtualization
Host-Based Virtualization
Array-Based Virtualization
Network-Based Virtualization
Storage Virtualization
Summary
Overview
Objectives
Benefits of Server Virtualization
VM Partitioning
VM Isolation
VM Encapsulation
VM Hardware Abstraction
Available Data Center Server Virtualization Solutions
Overview
Virtual Partitions
Cluster Shared Volumes
Live Migration Feature
Summary
Overview
Objectives
Limitations of VMware vSwitch
Advantages of VMware vDS
How the Cisco Nexus 1000V Series Switch Brings Network Visibility to the VM Level
How the VSM and VEM Integrate with VMware ESX or ESXi and vCenter
Summary
Verifying Setup and Operation of the Cisco Nexus 1000V Series Switch
Overview
Objectives
Verifying the Initial Configuration and Module Status on the Cisco Nexus 1000V Series Switch
Verifying the VEM Status on the ESX or ESXi Host
Validating VM Port Groups
Summary
Module Summary
References
Module Self-Check
Module Self-Check Answer Key
Course Introduction
Overview
Introducing Cisco Data Center Technologies (DCICT) v1.0 is designed to provide students
with foundational knowledge and a broad overview of Cisco data center products and their
operation.
The course covers the architecture, components, connectivity, and features of a Cisco data
center network.
The student will gain practical experience configuring the initial setup of the Cisco Nexus 7000 9-Slot Switch, Cisco Nexus 5548UP Switch, Cisco Unified Computing System (UCS) 6120XP
20-Port Fabric Interconnect, and Cisco MDS 9124 Multilayer Fabric Switch. Students will also
learn to verify the proper operation of a variety of features such as Overlay Transport
Virtualization (OTV), Cisco FabricPath, port channels, virtual port channels (vPC), and Cisco
Nexus 1000V Distributed Virtual Switch for VMware ESX.
Course Goal
Upon completing this course, you will be able to meet these objectives:
Course Flow
This topic presents the suggested flow of the course materials.
The figure shows the suggested schedule: each of the five days is divided into a morning and an afternoon session, separated by lunch, and combines lecture topics such as the Course Introduction, Cisco Data Center Network Services, and Cisco UCS with the related activities and labs.
The schedule reflects the recommended structure for this course. This structure allows enough
time for the instructor to present the course information and for you to work through the lab
activities. The exact timing of the subject materials and labs depends on the pace of your
specific class.
Additional References
This topic presents the Cisco icons and symbols that are used in this course, as well as
information on where to find additional technical references.
The figures show the Cisco icons and symbols that are used in this course, such as the workstation, Fibre Channel tape subsystem, and application server icons.
Cisco Online Education Resources
http://www.cisco.com/go/pec
Cisco Partner Education Connection (PEC) provides training on products, tools, and solutions
to help you keep ahead of the competition as a Cisco Partner. Achieve and advance your
partnership status for your organization by following the training curriculum that is required for
career certifications and technology specializations. Access is easy. Any employee of an
authorized Cisco Channel Partner company can request a personalized Cisco.com login ID.
Most courses on PEC are free. Fees for instructor-led classes, proctored exams, and
certification exams are noted on the site.
Partners report that PEC helps decrease travel expenses while increasing productivity and
sales.
Cisco NetPro Forums
https://supportforums.cisco.com/community/netpro
Cisco NetPro forums are part of the online Cisco Support Community. NetPro forums are
designed to share configurations, issues, and solutions among a community of experts. The
forums are conveniently arranged into distinct subject matter expertise categories to make
finding or supplying solutions a simple process.
Cisco Learning Network
https://learningnetwork.cisco.com
The Cisco Learning Network is a one-stop repository where certification seekers can find the
latest information on certification requirements and study resources, and discuss certification
with others. Whether you are working toward certification at the Associate, Professional or
Expert level, the Cisco Learning Network is always available to assist with reaching your
certification goals.
Introductions
This topic presents the general administration of the course, and an opportunity for student
introductions.
Your instructor will cover class-related items, such as the sign-in sheet and participant materials, and facilities-related items, such as restrooms and attire.
The instructor will tell you about specific site requirements and the location of restrooms, break
rooms, and emergency procedures.
When introducing yourself, be ready to share your name, your company, your prerequisite skills, a brief history, and your objective for taking this course.
Be prepared to introduce yourself to the class and discuss your experience, environment, and
specific learning goals for the course.
Module 1
Module Objectives
Upon completing this module, you will be able to describe the features of the data center
switches for network and SAN connectivity and their relationship to the layered design model.
This ability includes being able to meet these objectives:
Explain the functions of topology layers in Cisco data center LAN and SAN networks
Describe the features of Cisco Nexus 7000 and 5000 Series Switches, and the Cisco Nexus
2000 Series Fabric Extenders and their relationship to the layered design model
Describe the features of the Cisco MDS Fibre Channel switches and their relationship to
the layered design model
Perform an initial configuration and validate common features of the Cisco Nexus 7000
and 5000 Series Switches
Describe vPCs and Cisco FabricPath and how to verify their operation on Cisco Nexus
7000 and 5000 Series Switches
Lesson 1
Objectives
Upon completing this lesson, you will be able to describe the functional layers of the Cisco data
center LAN and SAN networks. This ability includes being able to meet these objectives:
Describe the core, aggregation, and access layers in the data center and their functions
Describe the reasons for combining the core and aggregation layers in a LAN design
Describe the core and edge layers in a data center SAN design
Describe the reasons for combining the core and edge layers in a SAN design
The figure shows a traditional deployment in which the Ethernet LAN and two redundant Fibre Channel fabrics (SAN A and SAN B) are built and operated as separate networks.
Traditionally, LAN and SAN infrastructures have been separated for several reasons, including the following:
Security: Separation helps ensure that data that is stored on the SAN appliances cannot be
corrupted through normal TCP/IP hacking methodologies.
Bandwidth: Initially, higher bandwidth was more prevalent on SAN infrastructures than
on LAN infrastructures.
Flow Control: SAN infrastructures use buffer-to-buffer flow control, which ensures that data is delivered in order and without any loss in transit, unlike the TCP/IP flow control methodology.
The figure illustrates Ethernet flow control: the transmitting (Tx) port sends data until the receiving (Rx) port signals a pause, so packets that arrive after the Rx buffers have overflowed are lost.
Flow control is a mechanism that is used to ensure that frames are sent only when there is
somewhere for them to go. Just as traffic lights are used to control the flow of traffic in cities,
flow control manages the data flow in a LAN or SAN environment.
Some data networks, such as Ethernet, use a flow control strategy that can result in degraded
performance:
When the Rx port buffers are completely filled and cannot accept any more packets, the Rx
port tells the Tx port to stop or slow the flow of data.
After the Rx port has processed some data and has some buffers available to accept more
packets, it tells the Tx port to resume sending data.
This strategy results in lost packets when the Rx port is overloaded, because the Rx port tells
the Tx port to stop sending data after it has already overflowed. All lost packets must be
retransmitted. The retransmissions degrade performance. Performance degradation can become
severe under heavy traffic loads.
The figure illustrates Fibre Channel credit-based flow control: the Rx port signals Ready whenever it has a free buffer. This approach prevents the loss of frames that is caused by buffer overruns and maximizes performance under high loads.
To improve performance under high traffic loads, Fibre Channel uses a credit-based flow
control strategy. This means that the receiver must issue a credit for each frame that is sent by
the transmitter before that frame can be sent.
A credit-based strategy ensures that the Rx port is always in control. The Rx port must issue a
credit for each frame that is sent by the transmitter. This strategy prevents frames from being
lost when the Rx port runs out of free buffers. Preventing lost frames maximizes performance
under high-traffic load conditions because the Tx port does not have to resend frames.
The figure shows a credit-based flow control process:
Before the Tx port can send a frame, the Rx port must notify the Tx port that the Rx port
has a free buffer and is ready to accept a frame. When the Tx port receives the notification,
it increments its count of the number of free buffers at the Rx port.
The Tx port sends frames only when it knows that the Rx port can accept them.
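To show how this credit mechanism surfaces in switch configuration, the following is a minimal, hedged sketch of adjusting the receive buffer-to-buffer credits on a Cisco MDS Fibre Channel interface; the interface name and credit value are examples only, and the supported range depends on the platform and module.

! Minimal sketch (Cisco MDS NX-OS); interface and credit value are examples only
switch# configure terminal
switch(config)# interface fc1/1
! Advertise 64 receive buffers (BB_credits) to the attached device
switch(config-if)# switchport fcrxbbcredit 64
switch(config-if)# end
! The interface output includes the transmit and receive B2B credit values
switch# show interface fc1/1

Because the transmitter may send only while it holds credits, frames are not dropped because of receive buffer overruns on that link.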
The figure depicts the campus core, the data center core, the data center aggregation layer with service modules, and the Layer 3 access layer connecting servers that use Layer 2 clustering and NIC teaming, blade chassis with pass-through or integrated switches, and a mainframe with an Open Systems Adapter (OSA).
The figure shows a typical large enterprise data center design. The design follows the Cisco
multilayer infrastructure architecture, including core, aggregation (or distribution), and access
layers.
The data center infrastructure must provide the following important features and services:
Port density
Security services that are provided by access control lists (ACLs), firewalls, and intrusion
prevention systems (IPSs) at the data center aggregation layer
Server farm services such as content switching, caching, and Secure Sockets Layer (SSL)
offloading
Multitier server farms, mainframes, and mainframe services (such as Telnet 3270
[TN3270], load balancing, and SSL offloading)
Network devices are often deployed in redundant pairs to avoid a single point of failure.
A single-tier design is manageable (a small number of switches) and simple (a single hop and low latency), but it offers limited scalability.
A single-tier design has one layer of switches connecting servers to storage. It is relatively inexpensive and easy to manage because there are usually only a few switches.
Multitier Design
A multitier design is scalable (more edge switches can be added when they are required) and resilient (multiple paths exist through the fabric).
In the multitier design that is shown in the figure, the following connections occur:
Storage targets connect to one set of switches and are often locally attached to the core
switches.
Hosts, often remote, connect into the fabric through a separate layer of switches called edge
switches.
The multitier design supports multiple layers of switches and thus can scale significantly,
although many ports are used for interswitch links (ISLs). The major advantage of this design is
its simplicity, which makes it relatively easy to visualize and troubleshoot. The multitier design
is good where storage is centralized, except where edge switches are located at satellite
locations. Like core-edge fabrics, this design has a high inherent oversubscription because of
the hierarchical design. This oversubscription must be offset by adding ISLs, which decreases
the effective port count.
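To make the trade-off concrete with purely illustrative numbers: on a 48-port edge switch, reserving 8 ports for ISLs leaves 40 host-facing ports and a host-to-ISL oversubscription of 40:8, or 5:1. Reserving 12 ports for ISLs improves the ratio to 36:12, or 3:1, but reduces the usable host port count from 40 to 36.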
The figure charts IT relevance and control across three eras: the mainframe era, the client-server and distributed computing era (decentralized), and the service-oriented and Web 2.0-based era, in which data centers consolidate, virtualize, and automate.
This figure shows how data centers have changed in the last two decades. At first, data centers
were monolithic and centralized, employing mainframes and terminals, which the users used to
perform their work on the mainframe. The mainframes are still used in the finance sector
because they are an advantageous solution in terms of availability, resilience, and service level
agreements (SLAs).
The second era emphasized client-server and distributed computing, with applications being
designed in such a way that the user used client software to access an application. Also,
services were distributed because of poor computing ability and high link cost. Mainframes
were too expensive to serve as an alternate solution.
Currently, with communication infrastructure being relatively cheaper, and with an increase in
computing capacities, data centers are being consolidated. Consolidation is occurring because
the distributed approach is expensive in the long term. The new solution uses equipment
virtualization, resulting in a much higher utilization of servers than in the distributed approach.
The new solution also brings a significantly higher return on investment (ROI) and lower total
cost of ownership (TCO).
The figure highlights three elements of the Cisco data center architecture: virtualization, unified fabric, and unified computing.
Virtualization
Cisco Virtual Network Link (Cisco VN-Link) technologies, including the Cisco
Nexus 1000V Distributed Virtual Switch for VMware ESX or ESXi, deliver
consistent per-virtual-machine visibility and policy control for SAN, LAN, and
unified fabric.
Virtual SAN, virtual device contexts (VDCs), and unified fabric help multiple
virtual networks converge to simplify and reduce data center infrastructure and
TCO.
Flexible networking options support all server form factors and vendors, including
options for integrated Ethernet and Fibre Channel switches for Dell, IBM, and HP
blade servers. These options provide a consistent set of services across the data
center to reduce operational complexity.
The Cisco Unified Computing System (Cisco UCS) solution unifies network,
compute, and virtualization resources into a single system that delivers end-to-end
optimization for virtualized environments, while retaining the ability to support
traditional operating system and application stacks in physical environments.
Unified Fabric
There are two primary approaches to deploying a unified data center fabric: Fibre
Channel over Ethernet (FCoE) and Internet Small Computer Systems Interface
(iSCSI). Both are supported on the unified fabric, which provides a reliable 10
Gigabit Ethernet foundation.
Unified fabric lossless operation also improves the performance of iSCSI that is
supported by both Cisco Catalyst and Cisco Nexus switches. In addition, the Cisco
MDS series of storage switches has hardware and software features that are
specifically designed to support iSCSI.
The Cisco Nexus Series of switches was designed to support unified fabric. The
Cisco Nexus 7000 and Cisco Nexus 5000 Series Switches support Data Center
Bridging (DCB) and FCoE, with support for FCoE on the Cisco MDS Series of
switches as well.
Special host adapters, called converged network adapters (CNAs), are required to
support FCoE. Hardware adapters are available from vendors such as Emulex and
QLogic, while a software stack is available for certain Intel 10 Gigabit Ethernet
network interfaces.
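As a rough illustration of how a unified fabric access switch carries FCoE, the following hedged sketch shows one possible minimal configuration on a Cisco Nexus 5000 Series Switch for a single CNA-attached server. The VLAN, VSAN, and interface numbers are examples only, and a production deployment would also address QoS, zoning, and redundancy.

! Minimal FCoE sketch (Cisco Nexus 5000 NX-OS); all numbering is illustrative only
switch(config)# feature fcoe
switch(config)# vsan database
switch(config-vsan-db)# vsan 100
switch(config-vsan-db)# exit
! Map the dedicated FCoE VLAN to the VSAN
switch(config)# vlan 100
switch(config-vlan)# fcoe vsan 100
switch(config-vlan)# exit
! Trunk the FCoE VLAN on the server-facing 10 Gigabit Ethernet port
switch(config)# interface ethernet 1/1
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 1,100
! Bind a virtual Fibre Channel interface to that port and place it in the VSAN
switch(config-if)# interface vfc 1
switch(config-if)# bind interface ethernet 1/1
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# vsan database
switch(config-vsan-db)# vsan 100 interface vfc 1

A command such as show interface vfc 1 can then confirm that the virtual Fibre Channel interface is up and has logged in to the fabric.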
Unified computing: The Cisco UCS platform is a next-generation data center platform
that does the following:
Increases IT staff productivity and business agility through rapid provisioning and
mobility support
The figure shows consolidated connectivity with FCoE: a single FCoE connection carries both LAN (VLAN) and SAN (VSAN) traffic, which can then be broken out to native Ethernet and Fibre Channel (FC).
The figure shows the three layers of the hierarchical model: the access layer, the aggregation layer, which provides policy-based connectivity, and the core layer, which provides high-speed switching.
The hierarchical network model provides a modular framework that allows flexibility in
network design and facilitates ease of implementation and troubleshooting. The hierarchical
model divides networks or their modular blocks into three layers: the access, aggregation (or
distribution), and core layers. These layers consist of the following features:
Access layer: A layer that is used to grant user access to network devices. In a network
campus, the access layer generally incorporates switched LAN devices with ports that
provide connectivity to workstations, IP phones, and servers. In the WAN environment, the
access layer for teleworkers or remote sites may provide access to the corporate network
across WAN technology. In the data center, the access layer provides connectivity for
servers.
Aggregation (or distribution) layer: A layer that aggregates the wiring closets, using
switches to segment workgroups and isolate network problems in a data center
environment. Similarly, in the campus environment, the aggregation layer aggregates WAN
connections at the edge of the campus and provides policy-based connectivity.
Core layer (also referred to as the backbone): A high-speed backbone that is designed to
switch packets as fast as possible. Because the core layer is critical for connectivity, it must
provide a high level of availability and must adapt to changes very quickly. The core layer
also provides scalability and fast convergence.
The figure highlights the data center core layer, which sits between the campus core and the data center aggregation and access layers that connect the servers, blade chassis, and mainframe with OSA.
Implementing a data center core is a best practice for large data centers. When the core is
implemented in an initial data center design, it helps ease network expansion and avoid
disruption to the data center environment. The following drivers are used to determine if a core
solution is appropriate:
40 and 100 Gigabit Ethernet density: Are there requirements for higher bandwidth
connectivity such as 40 or 100 Gigabit Ethernet? With the introduction of the Cisco Nexus
7000 M-2 Series of modules, the Cisco Nexus 7000 Series Switches can now support much
higher bandwidth densities.
10 Gigabit Ethernet density: Without a data center core, will there be enough 10 Gigabit
Ethernet ports on the campus core switch pair to support both the campus distribution and
the data center aggregation modules?
Administrative domains and policies: Separate cores help to isolate campus distribution
layers from data center aggregation layers for troubleshooting, administration, and
implementation of policies (such as quality of service [QoS], ACLs, troubleshooting, and
maintenance).
Anticipation of future development: The impact that may result from implementing a
separate data center core layer at a later date may make it worthwhile to install the core
layer at the beginning.
The data center typically connects to the campus core using Layer 3 links. The data center
network is summarized, and the core injects a default route into the data center network.
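The following hedged sketch shows one way that this relationship might be expressed with OSPF on Cisco NX-OS; the process tag, area, and prefix are examples only, and whether the default route is advertised conditionally or always depends on the design.

! Illustrative sketch (Cisco NX-OS OSPF) on a data center core switch
core(config)# feature ospf
core(config)# router ospf 1
! Advertise a default route into the data center network
core(config-router)# default-information originate
! Summarize the data center address block at the area border
core(config-router)# area 0.0.0.10 range 10.10.0.0/16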
Key core characteristics include the following:
Low-latency switching
The figure highlights the data center aggregation layer, where the service modules are attached, between the data center core and the access layer.
The aggregation (or distribution) layer aggregates the uplinks from the access layer to the data
center core. This layer is the critical point for control and application services. Security and
application service devices (such as load-balancing devices, SSL offloading devices, firewalls,
and IPS devices) are often deployed as a module in the aggregation layer. This design lowers
TCO and reduces complexity by reducing the number of components that you need to
configure and manage.
Note
Service devices that are deployed at the aggregation layer are shared among all the
servers. Service devices that are deployed at the access layer provide benefit only to the
servers that are directly attached to the specific access switch.
The aggregation layer typically provides Layer 3 connectivity from the data center to the core.
Depending on the requirements, the boundary between Layer 2 and Layer 3 at the aggregation
layer can be in the multilayer switches, the firewalls, or the content switching devices.
Depending on the data center applications, the aggregation layer may also need to support a
large Spanning Tree Protocol (STP) processing load.
Note
OSA is an Open Systems Adapter. The OSA is a network controller that can be installed in a
Mainframe I/O cage. It integrates several hardware features and supports many networking
transport protocols. The OSA card is the communications device for the mainframe
architecture.
The figure highlights the data center access layer, which provides Layer 2 and Layer 3 connectivity for servers, blade chassis with pass-through or integrated switches, and the mainframe with OSA.
The data center access layer provides Layer 2, Layer 3, and mainframe connectivity. The
design of the access layer varies, depending on whether you use Layer 2 or Layer 3 access. The
access layer in the data center is typically built at Layer 2, which allows better sharing of
service devices across multiple servers. This design also allows the use of Layer 2 clustering,
which requires the servers to be Layer 2-adjacent. With Layer 2 access, the default gateway for
the servers can be configured at the access layer or the aggregation layer.
With a dual-homing network interface card (NIC), you need a VLAN or trunk between the two
access switches. The VLAN or trunk supports the single IP address on the two server links to
two separate switches. The default gateway would be implemented at the access layer as well.
Although Layer 2 at the aggregation layer is tolerated for traditional designs, new designs try to
confine Layer 2 to the access layer. With Layer 2 at the aggregation layer, there are physical
loops in the topology that must be managed by the STP. Rapid Per VLAN Spanning Tree Plus
(Rapid PVST+) is a recommended best practice to ensure a logically loop-free topology over
the physical topology.
A mix of both Layer 2 and Layer 3 access models permits a flexible solution and allows
application environments to be optimally positioned.
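As a minimal, hedged illustration of these practices, the following sketch enables Rapid PVST+ on an access switch and defines the server default gateway as an HSRP virtual address on an aggregation-layer switched virtual interface (SVI); the VLAN, addresses, and priority are examples only, and the second aggregation switch would carry a matching configuration.

! Access switch: Rapid PVST+ for a logically loop-free Layer 2 topology
access(config)# spanning-tree mode rapid-pvst

! Aggregation switch: SVI with an HSRP virtual IP address as the server gateway
agg-1(config)# feature interface-vlan
agg-1(config)# feature hsrp
agg-1(config)# interface vlan 10
agg-1(config-if)# ip address 10.1.10.2/24
agg-1(config-if)# hsrp 10
! Servers in VLAN 10 use 10.1.10.1 as their default gateway
agg-1(config-if-hsrp)# ip 10.1.10.1
agg-1(config-if-hsrp)# priority 110
agg-1(config-if-hsrp)# preempt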
The figure shows an example of a collapsed core in a traditional network, in which the Layer 2 access switches connect through a combined core and distribution layer.
In the final step of the traditional example, the packet is Layer 2-switched across the access LAN to the destination host.
The figure shows an example of a collapsed core in a routed network, in which the combined core and distribution switches perform Layer 3 switching between the Layer 2 access LANs.
The three-tier architecture that was previously defined is traditionally used for a larger
enterprise campus. For the campus of a small- or medium-sized business, the three-tier
architecture may be excessive. An alternative would be to use a two-tier architecture, in which
the distribution and the core layer are combined. With this option, the small- or medium-sized
business can reduce costs, because both distribution and core functions are performed by the
same switch. In the collapsed core the switches would provide direct connections for the
access-layer switches, server farm, and edge modules.
However, one disadvantage with two-tier architecture is scalability. As the small campus
begins to scale, a three-tier architecture should be considered.
Step 2: The aggregation (or distribution) or core switch (or both) performs Layer 3 switching.
Step 3: The receiving aggregation (or distribution) or core switch (or both) performs Layer 3 switching toward an access LAN.
Step 4: The packet is Layer 3-switched across the access LAN to the destination host.
The figure shows the core-edge Fibre Channel fabric topology, which is scalable, resilient, and well structured.
The core-edge fabric topology has all the necessary features of a SAN in terms of resiliency
and performance. It provides resiliency and predictable recovery, scalable performance, and
scalable port density in a well-structured design. The architecture of the core-edge model has
two or more core switches in the center of the fabric. These core switches interconnect two or
more edge switches in the periphery of the fabric. The core itself can be a complex core
consisting of a ring or mesh of directors or a simple pair of separate core switches.
There are many choices in terms of which switch models can be used for the two different
layers. For example:
Core switches: Director-class switches, like the Cisco MDS 9500 Series switches, are
recommended at the core of the topology because of the excessive high availability
requirement.
Edge switches: You can use either use smaller fabric switches or more director-class
switches depending on the overall required fabric port density.
You have to make another choice about where to place the storage devices. In smaller core-edge port density solutions, the storage devices are commonly connected to the core. This
connection eliminates one hop in terms of the path from hosts to storage devices and also
eliminates one point of potential ISL congestion to access storage resources. However, with
larger port density fabrics that have more switches at the edge layer, storage devices themselves
might need to be deployed in the edge layer. It is important that if storage devices are deployed
across a common set of edge switches, these edge switches must be connected with a higher
ISL aggregate bandwidth. The reason the switches must be connected this way is that this
The figure shows a redundant core-edge design that consists of two isolated fabrics, SAN A and SAN B.
The core-edge fabric topology has all the necessary features of a SAN in terms of resiliency
and performance, but it is still a single fabric. That is, all switches are connected together
through ISLs. A redundant core-edge design must employ two completely isolated fabrics,
which are shown here with red and blue links. Each server now has two separate Fibre Channel
ports, one connecting to each fabric, and the storage arrays are connected to both fabrics.
Most Fibre Channel switches can only support a single fabric, so redundant SAN designs
require separate physical switches. Cisco MDS 9000 Series Multilayer Switches support VSANs, or virtual fabrics, which allow you to configure separate logical fabrics on a smaller number of physical switches.
The redundant SAN design uses multipath software on every server that identifies two separate
paths to the primary and secondary storage. The primary path might be active while the
secondary path is passive. If a failure occurs in the primary path, the multipath software
switches over to the secondary path. Most multipath software can concurrently use both the
primary path and secondary path and perform load balancing across both active paths.
This topology provides resiliency and predictable recovery, scalable performance, and scalable
port density in a well-structured SAN.
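The following hedged sketch shows how VSANs might be defined on a Cisco MDS switch to carve out two logically separate fabrics; the VSAN IDs, names, and interface assignments are examples only, and a fully redundant design would still typically place SAN A and SAN B on physically separate switches.

! Illustrative VSAN sketch (Cisco MDS NX-OS)
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name FABRIC-A
switch(config-vsan-db)# vsan 20 name FABRIC-B
! Assign host-facing Fibre Channel ports to each logical fabric
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# vsan 20 interface fc1/2
switch(config-vsan-db)# end
! Verify the port-to-VSAN assignments
switch# show vsan membership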
Advantages of the core-edge design:
Highest scalability
Scalable performance (core switches)
Nondisruptive scaling (edge switches)
Deterministic latency
Easy to analyze and tune performance
Cost-effective for large SANs
Scalable port counts: If more ports are needed, you can add more edge switches without
disrupting ongoing SAN traffic.
Scalable performance: If more performance is needed, you can add more core switches or
core ISLs.
FSPF: When there are at least two core switches, with each edge switch connected to at
least two core switches, there are two equal-cost paths from any edge switch to any other
edge switch. Therefore, Fabric Shortest Path First (FSPF) can use both data paths,
effectively doubling the ISL bandwidth.
Deterministic latency: With a single redundant pair of core switches, the fabric maintains
a two-hop count from any device to any other device even if one ISL goes down.
Easy to analyze and tune performance: The simple, symmetric design simplifies
throughput calculations.
Relatively cost-effective: Core-edge fabrics use fewer ports for ISLs than other designs
and provide similar levels of availability and performance. Smaller switches (or even hubs)
can be used at the edge, and larger switches or high-performance director-class switches
can be used at the core.
The core-edge design has only one notable consideration for a large SAN; it involves many
switches and interconnections. While the symmetrical nature of the core-edge design simplifies
performance analysis and tuning, there are still many switches to manage.
The collapsed-core topology is a topology with the features of the core-edge topology but
delivers required port densities in a more efficient manner. The collapsed-core topology is
enabled by high port density that is offered by the Cisco MDS 9500 Series of director switches.
The salient features of the collapsed-core topology are as follows:
The resiliency of this topology is sufficiently high because of the redundant structure;
however, it does not have the excessive ISLs that a mesh topology may have.
The port density of this topology can scale quite high, but not as high as the core-edge
topology.
The main advantage of this topology is the degree of scalability that is offered at a very
efficient effective port usage. The collapsed-core design aims to offer very high port density
while eliminating a separate physical layer of switches and their associated ISLs.
The only disadvantage of the collapsed-core topology is its scalability limit relative to the core-edge topology. While the collapsed-core topology can scale quite large, the core-edge topology
should be used for the largest of fabrics. However, to continually scale the collapsed-core
design, one could convert the core to a core-edge design and add another layer of switches.
The collapsed-core design offers the following advantages:
This architecture makes the most efficient use of ports because no ports are consumed for ISLs.
Ports can be scaled easily by adding hot-swappable blades, without disrupting traffic.
A single management interface for all ports simplifies performance analysis and tuning.
There are fewer switches to manage.
Considerations for the collapsed-core design include the following:
A collapsed-core fabric runs into scalability limitations at very large port counts because of the increasing number of ISLs that are required. However, with large director switches like the Cisco MDS 9513 Multilayer Director Switch, a collapsed-core fabric can easily scale to meet the demands of all but the largest SAN fabrics.
Customers might not want to locate the entire fabric core in a single location. In a core-edge design, the core switches can be located on separate floors or even at separate facilities for increased disaster tolerance.
Summary
This topic summarizes the key points that were discussed in this lesson.
Traditionally, LAN and SAN traffic has been segregated, with each type of traffic making use of its own infrastructure. This separation has been primarily for security and the protection of data.
In any design, the network is broken down into functional layers, making
it easier to design, build, manage, and troubleshoot the infrastructure.
Although there are three layers within the design, sometimes the core
and aggregation layers are collapsed into a single physical layer, but still
retain the two functional layers.
The SAN design is also broken down into layers, primarily a core and
edge layer.
For efficiency of port utilization, the core and edge layer can be
collapsed into fewer physical switches, while still retaining the logical
separation of the layers themselves.
Lesson 2
Objectives
Upon completing this lesson, you will be able to describe the features of the Cisco Nexus 7000 Series and 5000 Series Switches and the Cisco Nexus 2000 Series Fabric Extenders, and their relationship to the layered design model. You will be able to meet these objectives:
Describe data center transformation with Cisco Nexus products
Describe the capabilities of the Cisco Nexus 7000 Series Supervisor Module
Describe the licensing options for the Cisco Nexus 7000 Series Switches
Describe the capacities of the Cisco Nexus 7000 Series Fabric Modules
Describe the capabilities of the Cisco Nexus 7000 Series I/O modules
Describe the capabilities of the Cisco Nexus 7000 Series power supply modules
Describe the expansion modules that are available for Cisco Nexus 5010 and 5020
Switches
Describe the Cisco Nexus 5548P, 5548UP, and 5596UP Switches
Describe the expansion modules that are available for Cisco Nexus 5548P, 5548UP, and
5596UP Switches
Describe the software licensing for the Cisco Nexus 5000 Series Switches
Describe how the Cisco Nexus 2000 Series Fabric Extenders act as remote line modules
Describe the features of the Cisco Nexus 2000 Series Fabric Extenders
A primary part of the unified fabric solution of the Cisco data center
architectural framework
End-to-end solution for data center core, aggregation, and high-density
end-of-row and top-of-rack server connectivity
High-density 10 Gigabit Ethernet
100 Gigabit Ethernet and 40 Gigabit Ethernet modules
The following needs are influencing data center design:
The need for a higher level of reliability, with minimized downtime for updates and configuration changes. Once a consolidated architecture is built, it is critical to keep it operating with minimal disruption.
The need to optimize the use of the data center network infrastructure by moving toward a
topology where no link is kept idle. Traditional topologies that are based on Spanning Tree
Protocol (STP) are known to be inefficient because of STP blocking links or because of
active and standby network interface card (NIC) teaming. This need is addressed by Layer
2 multipathing (L2M) technologies such as virtual port channels (vPCs); a brief vPC configuration sketch follows this list.
The need to optimize computing resources by reducing the rate of growth of physical
computing nodes. This need is addressed by server virtualization.
The need to reduce the time that it takes to provision new servers. This need is addressed
by the ability to configure server profiles, which can be easily applied to hardware.
The need to reduce overall power consumption in the data center. This need can be
addressed with various technologies. These technologies include unified fabric (which
reduces the number of adapters on a given server), server virtualization, and more power-efficient hardware.
The need to increase computing power at a lower cost. More and higher-performance
computing clouds are being built to provide a competitive edge to various enterprises.
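The following is a brief, hedged sketch of the main vPC configuration elements on one of a pair of Cisco Nexus switches; the domain ID, keepalive addresses, and port-channel numbers are examples only, and the peer switch requires a mirrored configuration.

! Illustrative vPC sketch (Cisco NX-OS), shown for one peer only
n5k-1(config)# feature lacp
n5k-1(config)# feature vpc
n5k-1(config)# vpc domain 10
n5k-1(config-vpc-domain)# peer-keepalive destination 192.168.0.2 source 192.168.0.1
n5k-1(config-vpc-domain)# exit
! Inter-switch port channel that carries the vPC peer link
n5k-1(config)# interface port-channel 1
n5k-1(config-if)# switchport mode trunk
n5k-1(config-if)# vpc peer-link
! Downstream port channel toward the access device, presented as a single vPC
n5k-1(config-if)# interface port-channel 20
n5k-1(config-if)# switchport mode trunk
n5k-1(config-if)# vpc 20
n5k-1(config-if)# end
! Verify peer status, consistency checks, and vPC state
n5k-1# show vpc brief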
The needs that are influencing data center design call for capabilities like the following:
Architectures capable of supporting a SAN and a LAN on the same network (for power use
reduction and server consolidation).
Architectures that provide an intrinsic lower latency than traditional LAN networks. This
architecture means that a computing cloud can be built on the same LAN infrastructure as
regular transactional applications.
Simplified cabling, which provides more efficient airflow, lower power consumption, and
lower cost of deployment of high-bandwidth networks.
Reduction of management points, which limits the impact of the sprawl of switching points
(software switches in the servers, multiple blade switches, and so on).
The figure positions the Cisco Nexus product family, all running Cisco NX-OS Software, by system bandwidth: the Cisco Nexus 1000V and Cisco Nexus 1010 virtual switching products, the Cisco Nexus 2000 Series Fabric Extenders, the Cisco Nexus 3000 and 4000 Series Switches, the Cisco Nexus 5010, 5020, 5548UP, and 5596UP Switches (ranging from 520 Gb/s to 1.92 Tb/s), and the Cisco Nexus 7009, 7010, and 7018 Switches (8.8 Tb/s, 9.9 Tb/s, and 18.7 Tb/s, respectively).
Cisco Nexus 1000V Series Switches: A virtual machine (VM) access switch that is an
intelligent software switch implementation for VMware vSphere environments running the
Cisco Nexus Operating System (Cisco NX-OS) Software. The Cisco Nexus 1000V Series
Switches operate inside the VMware ESX hypervisor, and support the Cisco Virtual
Network Link (Cisco VN-Link) server virtualization technology to provide the following:
Policy-based VM connectivity
Cisco Nexus 1010 Virtual Services Appliance: This appliance is a member of the Cisco
Nexus 1000V Series Switches and hosts the Cisco Nexus 1000V Virtual Supervisor
Module (VSM). It also supports the Cisco Nexus 1000V Network Analysis Module (NAM)
Virtual Service Blade and provides a comprehensive solution for virtual access switching.
The Cisco Nexus 1010 provides dedicated hardware for the VSM, making access switch
deployment much easier for the network administrator.
Cisco Nexus 2000 Series Fabric Extenders: A category of data center products that are
designed to simplify data center access architecture and operations. The Cisco Nexus 2000
Series Fabric Extenders use the Cisco Fabric Extender Link (Cisco FEX-link) architecture
to provide a highly scalable unified server-access platform across a range of 100-Mb/s
Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, unified fabric, copper and fiber
connectivity, and rack and blade server environments. The Cisco Nexus 2000 Series Fabric
Extenders act as remote line cards for the Cisco Nexus 5000 and 7000 Series Switches.
Some of the models included in the Cisco Nexus 2000 Series Fabric Extenders include the
Cisco Nexus 2148T, 2224TP GE, 2248TP GE, and 2232PP 10GE Fabric Extenders that are
noted in the figure.
Cisco Nexus 3000 Series Switches: The Cisco Nexus 3000 Series Switches extend the
comprehensive, proven innovations of the Cisco Data Center Business Advantage
architecture into the High Frequency Trading (HFT) market. The products in this range are
the Cisco Nexus 3064, 3048 and 3016 Switches. The Cisco Nexus 3064 Switch supports 48
fixed 1 and 10 Gb/s enhanced small form factor pluggable (SFP+) ports and four fixed
quad SFP+ (QSFP+) ports, which allow a smooth transition from 10 Gigabit Ethernet to 40
Gigabit Ethernet. The Cisco Nexus 3000 switches are well suited for financial colocation
deployments, delivering features such as latency of less than a microsecond, line-rate
Layers 2 and 3 unicast and multicast switching, and the support for 40 Gigabit Ethernet
standards technologies.
Cisco Nexus 4000 Series Blade Switches: The Cisco Nexus 4000 switch module is a blade
switch solution for the IBM BladeCenter H and HT chassis. It provides the server I/O that
is required for high-performance, scale-out, virtualized and nonvirtualized x86 computing
architectures.
Cisco Nexus 5000 Series Switches including the Cisco Nexus 5000 Platform and 5500
Platform switches: A family of line-rate, low-latency, lossless 10 Gigabit Ethernet, and
Fibre Channel over Ethernet (FCoE) switches for data center applications. The Cisco
Nexus 5000 Series Switches are designed for data centers that are transitioning to 10
Gigabit Ethernet as well as data centers that are ready to deploy a unified fabric that can
manage LAN, SAN, and server clusters. This capability provides networking over a single
link, with dual links used for redundancy. Some of the switches included in this series are
the Cisco Nexus 5010, the Cisco Nexus 5020, the Cisco Nexus 5548UP and 5548P, and the
Cisco Nexus 5596UP and 5596T.
Cisco Nexus 7000 Series Switches: A modular data center-class switch that is designed for
highly scalable 10 Gigabit Ethernet networks with a fabric architecture that scales beyond
15 terabits per second (Tb/s). The switch is designed to deliver continuous system
operation and virtualized services. The Cisco Nexus 7000 Series Switches incorporate
significant enhancements in design, power, airflow, cooling, and cabling. The 10-slot
chassis has front-to-back airflow, making it a good solution for hot aisle and cold aisle
deployments. The 9- and 18-slot chassis use side-to-side airflow to deliver high density in a
compact form factor. The chassis in this series include Cisco Nexus 7000 9-Slot, 10-Slot,
and 18-Slot Switch chassis, sometimes referred to as Cisco Nexus 7009, 7010, and 7018
chassis as seen in the figure.
The table compares the three Cisco Nexus 7000 Series chassis:

                        Nexus 7009       Nexus 7010       Nexus 7018
Status                  Shipping         Shipping         Shipping
Slots                   7 I/O + 2 sup    8 I/O + 2 sup    16 I/O + 2 sup
Height                  14 RU            21 RU            25 RU
BW per slot, Fabric-1   N/A              230 Gb/s         230 Gb/s
BW per slot, Fabric-2   550 Gb/s         550 Gb/s         550 Gb/s
The Cisco Nexus 7000 Series Switches offer a modular data center-class product that is
designed for highly scalable 10 Gigabit Ethernet networks with a fabric architecture that scales
beyond 15 Tb/s. The Cisco Nexus 7000 Series provides integrated resilience that is combined
with features that are optimized specifically for the data center for availability, reliability,
scalability, and ease of management.
The Cisco Nexus 7000 Series Switches run the Cisco NX-OS Software to deliver a rich set of
features with nonstop operation.
This series features front-to-back airflow with 10 front-accessed vertical module slots and an integrated cable management system that facilitates installation, operation, and cooling in both new and existing facilities.
Designed for reliability and maximum availability, all interface and supervisor modules are
accessible from the front. Redundant power supplies, fan trays, and fabric modules are
accessible completely from the rear to ensure that cabling is not disrupted during
maintenance.
The system uses dual dedicated supervisor modules and fully distributed fabric
architecture. There are five rear-mounted fabric modules in the 10- and 18-slot models and
five front-mounted fabric modules in the 9-slot model. Combined with the chassis
midplane they can deliver up to 230 Gb/s per slot for 4.1 Tb/s of forwarding capacity in the
10-slot form factor, and 7.8 Tb/s in the 18-slot form factor using the Cisco Nexus 7000
Series Fabric-1 Modules. The 9-slot form factor requires Cisco Nexus 7000 Series Fabric-2
Modules. Migrating to the Fabric-2 Module increases the bandwidth per slot to 550 Gb/s.
This migration to the Fabric-2 Module increases the forwarding capacity on the 10-slot
form factor to 9.9 Tb/s. On the 18-slot form factor, forwarding capacity is increased to 18.7 Tb/s, and to 8.8 Tb/s for the 9-slot form factor.
The midplane design supports flexible technology upgrades as your needs change and
provides ongoing investment protection.
The figure shows the front and rear of the Cisco Nexus 7009 chassis (N7K-C7009): supervisor slots 1 and 2, I/O module slots 3 through 9, crossbar fabric modules, the fan tray, power supplies, integrated cable management, summary LEDs, locking ejector levers, optional front doors, and side-to-side airflow.
The Cisco Nexus 7000 Series 9-Slot Switch chassis, with up to seven I/O module slots,
supports up to 336 1 and 10 Gigabit Ethernet ports.
Airflow is side-to-side.
The integrated cable management system is designed to support the cabling requirements of
a fully configured system to either or both sides of the switch, allowing maximum
flexibility. All system components can easily be removed with the cabling in place,
providing ease of use for maintenance tasks with minimal disruption.
A series of LEDs at the top of the chassis provides a clear summary of the status of the
major system components. The LEDs alert operators to the need to conduct further
investigation. These LEDs report the power supply, fan, fabric, supervisor, and I/O module
status.
The purpose-built optional front module door provides protection from accidental
interference with both the cabling and modules that are installed in the system. The
transparent front door allows easy observation of cabling and module indicators and status
lights without any need to open the doors. The door supports a dual-opening capability for
flexible operation and cable installation while fitted. The door can be completely removed
for both initial cabling and day-to-day management of the system.
Independent variable-speed system and fabric fans provide efficient cooling capacity to the
entire system. Fan tray redundancy features help ensure reliability of the system and
support for hot-swapping of fan trays.
The crossbar fabric modules are located in the front of the chassis, with support for two
supervisors.
The figure shows the front and rear of the 21-RU Cisco Nexus 7010 chassis (N7K-C7010), two of which fit in a 7-foot rack: supervisor slots 5 and 6, I/O module slots 1 through 4 and 7 through 10, crossbar fabric modules, power supplies, system status LEDs, ID LEDs on all FRUs, integrated cable management with a cover, front-to-back airflow with a rear air exhaust, locking ejector levers, optional locking front doors, and common equipment that is removed from the rear.
The Cisco Nexus 7000 Series 10-Slot Switch chassis with up to eight I/O module slots
supports up to 384 1 and 10 Gigabit Ethernet ports, meeting the demands of large
deployments.
Front-to-back airflow helps ensure that use of the Cisco Nexus 7000 Series 10-Slot Switch
chassis addresses the requirement for hot-aisle and cold-aisle deployments without
additional complexity.
The system uses dual system and fabric fan trays for cooling. Each fan tray is redundant
and composed of independent variable-speed fans that automatically adjust to the ambient
temperature. This adjustment helps reduce power consumption in well-managed facilities
while providing optimum operation of the switch. The system design increases cooling
efficiency and provides redundancy capabilities, allowing hot-swapping without affecting
the system. If either a single fan or a complete fan tray fails, the system continues to
operate without a significant degradation in cooling capacity.
The integrated cable management system is designed for fully configured systems. The
system allows cabling either to a single side or to both sides for maximum flexibility
without obstructing any important components. This flexibility eases maintenance even
when the system is fully cabled.
The system supports an optional air filter to help ensure clean airflow through the system.
The addition of the air filter satisfies Network Equipment Building Standards (NEBS)
requirements.
A series of LEDs at the top of the chassis provides a clear summary of the status of the
major system components. The LEDs alert operators to the need to conduct further
investigation. These LEDs report the power supply, fan, fabric, supervisor, and I/O module
status.
The cable management cover and optional front module doors provide protection from
accidental interference with both the cabling and modules that are installed in the system.
The transparent front door allows observation of cabling and module indicator and status
lights.
The figure shows the front and rear of the 25-RU Cisco Nexus 7018 chassis (N7K-C7018): supervisor slots 9 and 10, crossbar fabric modules, system fan trays, system status LEDs, integrated cable management, an optional front door, side-to-side airflow, power supplies with a dedicated air intake, and common equipment that is removed from the rear.
The Cisco Nexus 7000 Series 18-Slot Switch chassis with up to 16 I/O module slots
supports up to 768 1 and 10 Gigabit Ethernet ports, meeting the demands of the largest
deployments.
Side-to-side airflow increases the system density within a 25 rack unit (25-RU) footprint,
optimizing the use of rack space. The optimized density provides more than 16 RU of free
space in a standard 42-RU rack for cable management and patching systems.
The integrated cable management system is designed to support the cabling requirements of
a fully configured system to either or both sides of the switch, allowing maximum
flexibility. All system components can easily be removed with the cabling in place,
providing ease of maintenance tasks with minimal disruption.
A series of LEDs at the top of the chassis provides a clear summary of the status of the
major system components. The LEDs alert operators to the need to conduct further
investigation. These LEDs report the power supply, fan, fabric, supervisor, and I/O module
status.
The purpose-built optional front module door provides protection from accidental
interference with both the cabling and modules that are installed in the system. The
transparent front door allows easy observation of cabling and module indicators and status
lights without any need to open the doors. The door supports a dual-opening capability for
flexible operation and cable installation while fitted. The door can be completely removed
for both initial cabling and day-to-day management of the system.
Independent variable-speed system and fabric fans provide efficient cooling capacity to the
entire system. Fan tray redundancy features help ensure reliability of the system and
support for hot-swapping of fan trays.
Cisco NX-OS Software high-availability and manageability features include ISSU, VDCs, process modularity, process survivability, control plane and data plane separation, online diagnostics, DoS resilience, role-based access control (RBAC), and Call Home.
The Cisco Nexus 7000 Series Switches are modular in design with emphasis on redundant
critical components throughout the subsystems. This modular approach has been applied across
the physical, environmental, power, and system software aspects of the chassis architecture.
Supervisor module redundancy: Active and standby operation with state and
configuration synchronization between the two supervisors. Provides seamless and Stateful
Switchover (SSO) in the event of a supervisor module failure.
Switch fabric redundancy: A single chassis can be configured with one or more fabric
modules up to a maximum of five modules, providing capacity as well as redundancy.
Failure of a switch fabric module triggers an automatic reallocation and balancing of traffic
across the remaining active switch fabric modules.
Cooling subsystem: Two redundant system fan trays for I/O module cooling and two
additional redundant fan trays for switch fabric module cooling. All fan trays are hot-swappable.
Power subsystem availability features: Three internally redundant power supplies. Each
is composed of two internalized, isolated power units providing two power paths per
modular power supply, and six paths total per chassis when fully populated.
Cisco NX-OS Software redundancy options and features: The Cisco NX-OS Software
compartmentalizes components for redundancy, fault isolation, and resource efficiency.
Functional feature components operate as independent processes referred to as services,
with availability features implemented into each service. Most system services are capable
of performing stateful restarts that are transparent to other services within the platform and
neighboring devices within the network.
Service modularity: Services within the Cisco NX-OS Software are designed as nonkernel
space processes. They perform a function or set of functions for a subsystem or feature set,
with each service instance running as a separate independent protected process. This
modular architecture permits the system to provide a high level of protection and fault
tolerance for all services running on the platform.
Modular software upgrades: These upgrades address specific issues while minimizing the
impact to other critical services and the system overall.
Rapid restart: The services in the Cisco NX-OS Software can be restarted automatically
by the system manager in the event of critical fault detection. Notification of restart and
restart both happen in milliseconds.
Cisco NX-OS In-Service Software Upgrade (ISSU): The modular software architecture
of the Cisco NX-OS Software supports plug-in-based services and features. This
architecture makes it possible to perform complete image upgrades nondisruptively without
impacting the data-forwarding plane.
Virtual device contexts (VDCs): The Cisco NX-OS Software implements a logical
virtualization at the device level. Logical virtualization allows multiple instances of the
device to operate on the same physical switch simultaneously. These logical operating
environments are known as VDCs.
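As a minimal sketch (the VDC name and interface range are hypothetical examples), a VDC is created from the default VDC and physical interfaces are allocated to it:
switch(config)# vdc Production
switch(config-vdc)# allocate interface ethernet 2/1-8
switch(config-vdc)# exit
! Verify the VDCs and open a session to the new context
switch# show vdc
switch# switchto vdc Production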
Note
Currently, the clock modules are not used, and there is no systemwide clock.
Interfaces
- Supervisor management port: 10/100/1000-Mb/s Ethernet port, support for inline
encryption through MAC security (IEEE 802.1AE)
- CMP management port: 10/100/1000-Mb/s Ethernet port
- Console serial port: RJ-45 connector
- Auxiliary serial port: RJ-45 connector
- Three USB ports: two host and one device port for peripheral devices
Memory
- DRAM: 8 GB
- Flash memory: 2 GB
- NVRAM: 2-MB battery backup
The figure shows the N7K-SUP1 supervisor module front panel, with callouts for the AUX port, beacon LED, console port, management Ethernet port, CompactFlash slots, USB ports, and CMP Ethernet port.
The Cisco Nexus 7000 Series Supervisor Module is designed to deliver scalable control plane
and management functions for the Cisco Nexus 7000 Series chassis. It is based on a dual-core
processor that scales the control plane by harnessing the flexibility and power of the dual cores.
The supervisors control the Layer 2 and Layer 3 services, redundancy capabilities,
configuration management, status monitoring, power and environmental management, and
more. Supervisors provide centralized arbitration to the system fabric for all line cards. The
fully distributed forwarding architecture allows the supervisor to support transparent upgrades
to higher forwarding capacity-capable I/O and fabric modules. The supervisor incorporates an
innovative dedicated connectivity management processor (CMP) to support remote
management and troubleshooting of the complete system. Two supervisors are required for a
fully redundant system. One supervisor module runs as the active device while the other is in
hot-standby mode. This redundancy provides exceptional high-availability features in data
center-class products.
To deliver a comprehensive set of features, the Cisco Nexus 7000 Series Supervisor Module
offers the following:
Integrated diagnostics and protocol decoding with an embedded control plane packet
analyzer
Upgradable architecture
Fully decoupled control plane and data plane with no hardware forwarding on the
module
Supervisor CMP
The CMP provides a complete OOB management and monitoring capability independent from
the primary operating system. The CMP enables lights out remote monitoring and
management of the supervisor module, all modules, and the Cisco Nexus 7000 Series system
without the need for separate terminal servers with the associated additional complexity and
cost. The CMP delivers the remote control through its own dedicated processor, memory, and
bootflash memory and a separate Ethernet management port. The CMP can reset all system
components, including power supplies. It can also reset the host supervisor module to which it
is attached, allowing a complete system restart.
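As a sketch, assuming the Supervisor 1 module and its attach cmp command, the CMP can be reached from the NX-OS CLI of the active supervisor; the CMP then presents its own prompt and command set for out-of-band monitoring and reset operations:
! Connect from the supervisor CLI to the local CMP
switch# attach cmp
! Work at the CMP prompt, then return to the NX-OS CLI
switch-cmp# exit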
The figure summarizes the Cisco NX-OS license packages for the Cisco Nexus 7000 Series:
- Base: vPC, port profiles, WCCP, port security, GOLD, EEM, TACACS, LACP, ACLs, QoS, STP and STP guards, UDLD, Cisco Discovery Protocol, CoPP, uRPF, IP Source Guard, DHCP snooping, CMP, ISSU, SSO, Dynamic ARP Inspection, Smart Call Home, SNMP, 802.1X, and SPAN
- Enterprise LAN: OSPFv2 and OSPFv3, IS-IS (IPv4), BGP (IPv4 and IPv6), EIGRP (IPv4 and IPv6), IP multicast with PIM, PIM-SM, BIDIR, ASM, and SSM modes (IPv4 and IPv6), MSDP (IPv4), PBR (IPv4 and IPv6), and GRE tunnels
- Advanced Enterprise: VDCs and Cisco TrustSec
- Enhanced Layer 2: Cisco FabricPath and PONG (network testing utility)
- Scalable Services: XL capabilities on all XL-capable line modules
- FCoE: FCoE license on a per-module basis, FCoE functions on the F1-Series I/O modules, and use of a storage VDC without the requirement for the Advanced Enterprise License
- Storage Enterprise: Inter-VSAN Routing (IVR) and advanced security features such as VSAN-based access controls and fabric binding
MPLS and Transport Services licenses are also listed in the figure.
The Cisco NX-OS Software for the Cisco Nexus 7000 Series Switches is available in
incremental license levels.
Base: A rich feature set is provided with the base software, which is bundled with the
hardware at no extra cost.
Enterprise LAN: The Enterprise LAN License enables incremental functions that are applicable to many enterprise deployments, such as dynamic routing protocols (Open Shortest Path First [OSPF], Enhanced Interior Gateway Routing Protocol [EIGRP], Intermediate System-to-Intermediate System [IS-IS], and Border Gateway Protocol [BGP]); IP multicast with Protocol Independent Multicast (PIM) in sparse mode (PIM-SM), Bidirectional PIM (BIDIR-PIM), Any Source Multicast (ASM), and Source Specific Multicast (SSM) modes; Multicast Source Discovery Protocol (MSDP); policy-based routing (PBR); and Generic Routing Encapsulation (GRE).
Advanced LAN Enterprise: The Advanced LAN Enterprise License enables next-generation functions such as VDCs and the Cisco TrustSec solution.
Enhanced Layer 2: The Enhanced Layer 2 License enables Cisco FabricPath, the latest
Cisco technology to massively scale Layer 2 data centers.
Scalable Services: The Scalable Services License is applied on a per-chassis basis and enables XL capabilities on the line cards. A single license per system enables all XL-capable I/O modules to operate in XL mode. After the single system license is added to a system, all modules that are XL-capable are enabled with no additional licensing.
SAN Enterprise: This license enables Inter-VSAN Routing (IVR) and advanced security features such as VSAN-based access controls and fabric binding for open systems.
FCoE License for the 32-port 10 Gigabit Ethernet module: This license enables a director-class multihop FCoE implementation in a highly available modular switching platform for the access and core of a converged network fabric. FCoE is supported on the Cisco Nexus 7000 F1-Series I/O modules. This license also enables the use of a storage VDC for the FCoE traffic within the Cisco Nexus 7000 Series, without requiring the enablement of the Advanced LAN Enterprise package.
The figure illustrates the license installation workflow: the Product Activation Key (PAK) and the chassis serial number are registered at www.cisco.com to obtain the license file, which is then installed on the switch. The sample output in the figure shows the LAN_ADVANCED_SERVICES_PKG license installed and in use, and the LAN_ENTERPRISE_SERVICES_PKG license installed but unused, both with no expiry date.
Cisco NX-OS Software uses feature-based licenses that are enforced on the switch. They are
tied to the chassis serial number that is stored in dual redundant NVRAM modules on the
backplane.
Licenses are issued in the form of a digitally signed text file that is copied to bootflash and installed using the install license command.
Associated with a license is a grace period, during which a feature can be run without having a
license installed. The grace period permits the feature to be trial-tested before committing to
using that feature and purchasing the relevant license. Periodic syslog, Call Home, and Simple
Network Management Protocol (SNMP) traps issue warnings when the grace period is nearing
expiration. When the grace period ends, the associated features become unavailable for use
until a license is installed to activate them. The grace period is 120 days.
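The license and grace-period state can be checked with Cisco NX-OS show commands, as in this minimal sketch (output formats vary by release):
! List the installed license packages and whether they are in use
switch# show license usage
! Show the detail for one package, including the features that use it
switch# show license usage LAN_ENTERPRISE_SERVICES_PKG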
In addition, there are time-bound licenses that are currently used in an emergency. An example
would be when a grace period has expired and you need additional time to purchase the license
but do not want to lose the use of a feature. The expiration date on a time-bound license is
absolute and expires at midnight Coordinated Universal Time (UTC) on the set date. Periodic
syslog, Call Home, and SNMP trap warnings are issued when time-bound licenses near their
expiration date. When the time-bound expiration date is reached, the relevant features are no
longer available unless a license is installed or there is additional time in the grace period.
The figure shows the available fabric modules: N7K-C7010-FAB-1, N7K-C7010-FAB-2, N7K-C7018-FAB-1, N7K-C7018-FAB-2, and N7K-C7009-FAB-2.
The Cisco Nexus 7000 Series Fabric Modules for the Cisco Nexus 7000 Series chassis are
separate fabric modules that provide parallel fabric channels to each I/O and supervisor module
slot. Up to five simultaneously active fabric modules work together delivering up to 230 Gb/s
per slot (Fabric-1 Modules) or up to 550 Gb/s per slot (Fabric-2 Modules). The fabric module
provides the central switching element for the fully distributed forwarding on the I/O modules.
Switch fabric scalability is made possible through the support of one to five concurrently active
fabric modules for increased performance as your needs grow. All fabric modules are
connected to all module slots. The addition of each fabric module increases the bandwidth to all
module slots up to the system limit of five modules. The architecture supports lossless fabric
failover, with the remaining fabric modules load balancing the bandwidth to all the I/O module
slots, helping ensure graceful degradation.
The combination of a Cisco Nexus 7000 Fabric Module and the supervisor and I/O modules
supports virtual output queuing (VOQ) and credit-based arbitration to the crossbar switch to
increase performance of the distributed forwarding system. VOQ and credit-based arbitration
facilitate fair sharing of resources when a speed mismatch or contention for an uplink interface
exists. The fabric architecture also enables support for lossless Ethernet and unified I/O
capabilities.
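The installed fabric modules and their status can be verified with the show module command; in this minimal sketch, the fabric modules appear in the Xbar section of the output:
! Display supervisor, I/O, and fabric (Xbar) modules and their status
switch# show module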
The figure lists considerations for using Cisco Nexus 7000 Fabric-2 Modules in the Cisco Nexus 7000 9-Slot Switch chassis.
This figure describes the connectivity between the Cisco Nexus 7000 Series I/O and Supervisor
Modules and the Fabric Modules. Each I/O module has two 55-Gb/s traces to each fabric
module. Therefore, a fully loaded Cisco Nexus 7000 Series chassis provides 550 Gb/s of
switching capacity per I/O slot.
In addition, each supervisor module has a single 55-Gb/s trace to each fabric module. This
means that a fully loaded Cisco Nexus 7000 10-Slot Switch chassis provides 275 Gb/s of
switching capacity to each supervisor slot.
Note
With Cisco Nexus 7000 Fabric-1 Modules, each I/O module has two 23-Gb/s traces to each fabric module, providing 230 Gb/s of switching capacity per I/O slot for a fully loaded chassis. Each supervisor module has a single 23-Gb/s trace to each fabric module.
The table in the figure compares the Cisco Nexus 7000 M1-Series I/O modules:
- N7K-M108X2-12L: 8 ports of 10 Gigabit Ethernet (using X2); queues per port Rx 8q2t, Tx 1p7q4t; 80-Gb/s switch fabric interface
- N7K-M132XP-12 and N7K-M132XP-12L: 32 ports of 10 Gigabit Ethernet (using SFP+); queues per port Rx 8q2t, Tx 1p7q4t; 80-Gb/s switch fabric interface
- N7K-M148GT-11 and N7K-M148GT-11L: 48 ports of 10/100/1000-Mb/s Ethernet (using RJ-45); queues per port Rx 2q4t, Tx 1p3q4t; 46-Gb/s switch fabric interface
- N7K-M148GS-11 and N7K-M148GS-11L: 48 ports of Gigabit Ethernet (using SFP optics); queues per port Rx 2q4t, Tx 1p3q4t; 46-Gb/s switch fabric interface
The performance figure listed is 60 mpps for Layer 2/Layer 3 IPv4 unicast and 30 mpps for IPv6 unicast.
The table in the figure lists the first two Cisco Nexus 7000 M2-Series I/O modules:
- N7K-M206FQ-23L: 6 ports of 40 Gigabit Ethernet (using QSFP+); queues per port Rx 8q2t, Tx 1p7q4t
- N7K-M202CF-22L: 2 ports of 100 Gigabit Ethernet; queues per port Rx 8q2t, Tx 1p7q4t
The Cisco Nexus 7000 M2-Series I/O modules are highly scalable, high-performance modules offering outstanding flexibility and full-featured, nonblocking performance on each port. The Cisco Nexus 7000 M2-Series modules facilitate the deployment of high-density, high-bandwidth, scalable network architectures, especially in large network cores and in service provider and Internet peering environments.
The first two Cisco Nexus 7000 M2-Series I/O modules are the Cisco Nexus 7000 M2-Series 6-Port 40 Gigabit Ethernet Module and the Cisco Nexus 7000 M2-Series 2-Port 100 Gigabit Ethernet Module. These two modules will be discussed in the following two figures.
Both modules can use either the Cisco Fabric-1 Module or Fabric-2 Module.
The table in the figure compares the Cisco Nexus 7000 F-Series I/O modules, the N7K-F132XP-15 and the N7K-F248XP-25, in terms of connectivity, performance, and switch fabric interface.
The Cisco Nexus 7000 F2-Series Module can also be used with the Cisco Nexus 2000 Series
Fabric Extenders.
The figure summarizes the flexible power options for the Cisco Nexus 7000 Series:
- N7K-AC-6.0KW (AC power): 110/220 V input; 2.4 kW and 6 kW output; 92 percent efficiency
- N7K-AC-7.5KW (AC power): 208-240 V input; 7.5 kW output; 92 percent efficiency
- N7K-DC-6.0KW (DC power): -48 V DC input; 6 kW output; 91 percent efficiency
Single input: with 220 volts (220 V), 3000 W output; with 110 V, 1200 W output
Temperature sensor and instrumentation that shut down the power supply if the temperature
exceeds the thresholds, preventing damage due to overheating
Internal fault monitoring so that if a short circuit or component failure is detected, the power supply unit can be shut down automatically
Intelligent remote management so that users can remotely power-cycle one or all power
supply modules using the supervisor command-line interface
Real-time power draw showing real-time actual power consumption (not available in the
initial software release)
Variable fan speed, allowing reduction in fan speed for lower power usage in well-controlled environments
The figure shows an example of combined mode, with power supplies connected to 220 V Grid 1 and 220 V Grid 2 and 18 kW of available power.
The power redundancy mode dictates how the system budgets its power. There are four user-configurable power-redundancy modes:
Combined
Combined mode has no redundancy, with the power that is available to the system being the
sum of the power outputs of all the power supply modules in the chassis.
N+1 Redundancy
The figure shows an example of N+1 redundancy, with power supplies connected to 220 V Grid 1 and 220 V Grid 2 and 12 kW of available power.
N+1 redundancy is a feature that guards against failure of one of the power supply modules.
The power that is available to the system is the sum of the two least-rated power supply
modules.
Grid redundancy guards against failure of one input circuit (grid). For grid redundancy, each
input on the power supply is connected to an independent AC feed, and the power that is
available to the system is the minimum power from either of the input sources (grids).
Full Redundancy
The figure shows an example of full redundancy, with power supplies connected to 220 V Grid 1 and 220 V Grid 2 and 9 kW of available power.
Full (complete) redundancy is the system default redundancy mode. This mode guards against failure of either one power supply or one AC grid, and the power that is available is the lower of the values calculated for input-source (grid) redundancy and power supply redundancy.
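A minimal configuration sketch follows; on the Cisco Nexus 7000 the mode keywords are combined, ps-redundant (N+1), insrc-redundant (grid), and redundant (full redundancy), and the budget can be verified with show environment power:
! Select the power budgeting mode (full redundancy is the default)
switch(config)# power redundancy-mode redundant
switch(config)# exit
! Verify total, available, and reserved power
switch# show environment power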
The Cisco Power Calculator enables you to calculate the power supply requirements for a
specific configuration. The results will show output current, output power, and system heat
dissipation. The Cisco Power Calculator supports the Cisco Nexus 7000 Series Switches, Cisco
Catalyst 6500 Series Switches, Cisco Catalyst 4500-E Series chassis and 4500 Series Switches,
Cisco Catalyst 3750-E and 3750 Series Switches, Cisco Catalyst 3560-E and 3560 Series
Switches, Cisco Catalyst 2960 Series Switches, Cisco Catalyst 2975 Series Switches, Cisco
Catalyst Express 500 Series Switches, and the Cisco 7600 Series Routers.
Note
The calculator is a starting point in planning your power requirements; it does not provide a
final power recommendation.
The Cisco Power Calculator will guide you through a series of selections for configurable
products. If you need to change a previous selection, there is a Back button.
To launch the Cisco Power Calculator, go to the following URL and click the Launch Cisco
Power Calculator link: http://tools.cisco.com/cpc/.
The table in the figure compares the Cisco Nexus 5010, 5020, 5548, and 5596 Switches:
- Throughput: 520 Gb/s, 1.04 Tb/s, 960 Gb/s, and 1.92 Tb/s, respectively
- Form factor: 1 RU, 2 RU, 1 RU, and 2 RU
- Maximum 10 Gigabit Ethernet ports: 26, 52, 48, and 96
- Port-to-port latency: 3.2 us, 3.2 us, 2.0 us, and 2.0 us
- Number of VLANs: 512, 512, 4096, and 4096
- Layer 3 capability: available on the Cisco Nexus 5500 Platform switches
- 1 Gigabit Ethernet port scalability: 576, 576, 1152, and 1152 ports
The table in the figure describes the differences between the Cisco Nexus 5000 and 5500
Platform switches. The port counts are based on 24 Cisco Nexus 2000 Series Fabric Extenders
per Cisco Nexus 5500 switch.
The Cisco Nexus 5000 Series Switches are a family of line-rate, low-latency, cost-effective 10
Gigabit Ethernet switches that are designed for access-layer applications.
The following are key features of the Cisco Nexus 5000 Series Switches:
One expansion slot supporting any of the Generic Expansion Modules (GEM) for the Cisco
Nexus 5000 Series Switches
Low-latency switch
The figure shows the Cisco Nexus 5020 Switch, which has two GEM slots.
The Cisco Nexus 5020 Switch is a 2-RU, 56-port, Layer 2 switch that provides an Ethernet-based unified fabric. It delivers 1.04 Tb/s of nonblocking switching capacity with 40 fixed wire-speed 10 Gigabit Ethernet ports that accept modules and cables meeting the SFP+ form factor. All of the 10 Gigabit Ethernet ports support DCB and FCoE.
The following are key features of the Cisco Nexus 5020 switch:
Two expansion slots supporting any of the Generic Expansion Modules (GEM) for the
Cisco Nexus 5000 switch
Low-latency switch
Front-to-back airflow
Power supplies and fans serviced from front
N+1 redundancy for all front panel components
For Cisco Nexus 5000 Platform Switches (5010 and 5020), cooling is front-to-back, supporting
hot- and cold-aisle configurations that help increase cooling efficiency. All serviceable
components are accessible from the front panel, allowing the switch to be serviced while in
operation and without disturbing the network cabling.
The following are key features of the front panel:
Interfaces are on the back, aligned in the rack with server ports
Hot-swappable expansion modules
10/100/1000-Mb/s OOB management Ethernet port
1/10-Gb/s ports
Expansion slots
The network and management interfaces are located on the rear panel of the Cisco Nexus 5000
Platform Switches (5010 and 5020), which are aligned in the rack with server ports. When the
unit is rack-mounted, the interface connections on the front panel align with the server
connections in the rack to allow easy cabling runs from server to switch.
All the Cisco Nexus 5000 Series Switches have a bank of four management ports. These ports
include two internal cross-connect ports that are currently unused, the 10-, 100-, 1000-Mb/s
OOB management Ethernet port and the console port.
The Cisco Nexus 5010 Switch has 20 fixed 10-Gigabit Ethernet ports for server or network
connectivity. The first bank of ports is 1-Gigabit Ethernet-capable, while the remaining
Ethernet ports are 10 Gigabit Ethernet. There is one slot for a hot-swappable, optional
expansion module (GEM).
The Cisco Nexus 5020 Switch has 40 fixed 10-Gigabit Ethernet ports for server or network
connectivity. The first bank of ports is 1-Gigabit Ethernet-capable, while the remaining
Ethernet ports are 10 Gigabit Ethernet. There are two slots for hot-swappable, optional
expansion modules (GEM).
The figure shows the expansion slots, management Ethernet port, and console port, along with two Fibre Channel (FC) expansion modules: the N5K-M1060 with 6 ports of 1/2/4/8-Gb/s Fibre Channel and the N5K-M1008 with 8 ports of 1/2/4-Gb/s Fibre Channel.
Expansion modules allow Cisco Nexus 5000 Series Switches to be configured as cost-effective
10 Gigabit Ethernet switches and as I/O consolidation platforms with native Fibre Channel
connectivity. There are currently four expansion module options that can be used to increase
the number of 10 Gigabit Ethernet, FCoE-capable ports, connect to Fibre Channel SANs, or do
both.
The Cisco Nexus 5010 Switch supports a single expansion module, while the Cisco Nexus 5020 Switch supports any combination of two of the following modules. The two expansion module slots can be configured to support up to 12 additional 10 Gigabit Ethernet, FCoE-capable ports, up to 16 Fibre Channel ports, or a combination of both in a nonblocking, nonoversubscribed fashion. All modules are hot-swappable.
Eight-port 1/2/4-Gigabit Fibre Channel: A Fibre Channel module that provides eight
ports of 1-, 2-, or 4-Gb/s Fibre Channel through small form-factor pluggable (SFP) ports
for transparent connectivity with existing Fibre Channel networks. This module is ideal in
environments where storage I/O consolidation is the main focus.
Six-port 1/2/4/8-Gigabit Fibre Channel: A Fibre Channel module that provides six ports
of 1-, 2-, 4-, or 8-Gb/s Fibre Channel through SFP ports for higher speed or longer
distances over Fibre Channel. Requires Cisco NX-OS Software Release 4.1(3)N2 or later
release.
Four-port 10 Gigabit Ethernet (DCB and FCoE) and 4-port 1/2/4-Gigabit Fibre Channel: A combination Fibre Channel and Ethernet module that provides four 10 Gigabit Ethernet, FCoE-capable ports through SFP+ ports and four ports of 1-, 2-, or 4-Gb/s native Fibre Channel connectivity through SFP ports.
Six-port 10 Gigabit Ethernet (DCB and FCoE): A 10 Gigabit Ethernet module provides
an additional six 10 Gigabit Ethernet, FCoE-capable SFP+ ports per module, helping the
switch support even denser server configurations.
Note
When you calculate port requirements for a design, note that the maximum number of 10 Gigabit Ethernet ports that are available on a Cisco Nexus 5010 Switch is 26, and 52 on a Cisco Nexus 5020 Switch.
The Cisco Nexus 5500 Platform switches are the second generation in the Cisco Nexus 5000
Series Switches, a series of line-rate, low-latency, cost-effective 10 Gigabit Ethernet switches.
The initial release of the Cisco Nexus 5500 Platform switches was the Cisco Nexus 5548P
Switch, which was followed by the release of the Cisco Nexus 5548UP Switch, supporting
unified ports.
The Cisco Nexus 5500 Platform switches are well suited for enterprise-class data center server
access-layer deployments and smaller-scale, midmarket data center aggregation deployments
across a diverse set of physical, virtual, storage access, and unified data center environments.
The Cisco Nexus 5500 Platform switches have the hardware capability to support Cisco
FabricPath and IETF Transparent Interconnection of Lots of Links (TRILL) to build scalable
and highly available Layer 2 networks.
The Cisco Nexus 5548 Switch is a 1-RU, 48-port Layer 3-capable switch that provides an
Ethernet-based unified fabric. It delivers 960 Gb/s of nonblocking switching capacity with 32
fixed wire-speed 1 and 10 Gigabit Ethernet ports. All of the 10 Gigabit Ethernet ports support
DCB and FCoE.
The switch has a single serial console port and a single OOB 10-, 100-, and 1000-Mb/s
Ethernet management port.
There is a single expansion slot that supports any Generic Expansion Module 2 (GEM2) series module.
The Cisco Nexus 5500 Platform switches can be used as a Layer 3 switch through the addition
of a Layer 3 daughter card routing module, enabling the deployment of Layer 3 services at the
access layer.
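After the routing module and the appropriate license are installed, Layer 3 services are enabled with standard Cisco NX-OS features; the following is a minimal sketch with hypothetical VLAN and addressing values:
! Enable switched virtual interfaces and create a routed gateway for VLAN 10
switch(config)# feature interface-vlan
switch(config)# interface vlan 10
switch(config-if)# ip address 10.1.10.1/24
switch(config-if)# no shutdown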
The Cisco Nexus 5500 Platform switches support Cisco FabricPath. Cisco FabricPath is a set of
multipath Ethernet technologies that combine the reliability and scalability benefits of Layer 3
routing with the flexibility of Layer 2 networks, enabling IT to build massively scalable data
centers. Cisco FabricPath offers a topology-based Layer 2 routing mechanism that provides an
Equal-Cost Multipath (ECMP) forwarding model. Cisco FabricPath implements an
enhancement that solves the MAC address table scalability problem that is characteristic of
switched Layer 2 networks. Furthermore, Cisco FabricPath supports enhanced virtual port
channel (vPC+), a technology that is similar to vPC that allows redundant interconnection of
the existing Ethernet infrastructure to Cisco FabricPath without using Spanning Tree Protocol
(STP).
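A minimal sketch of enabling Cisco FabricPath follows; the interface and VLAN ranges are hypothetical, and the feature requires the Enhanced Layer 2 license:
! Install and enable the FabricPath feature set
switch(config)# install feature-set fabricpath
switch(config)# feature-set fabricpath
! Place the core-facing links into FabricPath mode
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# switchport mode fabricpath
switch(config-if-range)# exit
! Extend the relevant VLANs across the FabricPath domain
switch(config)# vlan 10-20
switch(config-vlan)# mode fabricpath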
The figure shows a Cisco Nexus 5548P Switch and a Cisco Nexus 5548UP Switch, each with a
GEM2 expansion module installed.
The Cisco Nexus 5500 Platform switches start with the Cisco NX-OS Software Release
5.0(2)N1(1).
The figure shows the front panel, with callouts for the management Ethernet port, console port, USB port, redundant fan modules, and redundant power supplies.
The front panel of the Cisco Nexus 5548 switches contains the management interfaces. There is
a bank of four management ports. The ports include two internal cross-connect ports that are
currently unused, the 10-, 100-, and 1000-Mb/s OOB management Ethernet port, and the
console port. In addition, a fully functional USB port is present, useful for transferring files to
or from the device bootflash, for backup, or other purposes.
Similar to the Cisco Nexus 5000 Platform switches, cooling is front-to-back, supporting hot- and cold-aisle configurations that help increase cooling efficiency. All serviceable components
are accessible from the front panel, allowing the switch to be serviced while in operation and
without disturbing the network cabling.
The Cisco Nexus 5548 Switch (P and UP) front panel includes two N+1 redundant, hot-pluggable power supply modules and two N+1 redundant, hot-pluggable fan modules for highly reliable front-to-back cooling.
Two power supplies can be used for redundancy, but the switch is fully functional with one
power supply. The power supply has two LEDs, one for power status and one to indicate a
failure condition.
Note
It is not recommended that you leave a power supply slot empty. If you remove a power
supply, replace it with another one. If you do not have a replacement power supply, leave
the nonfunctioning one in place until you can replace it.
The Cisco Nexus 5548 Switch (P and UP) requires two fan modules. Each fan module has four
fans. If more than one fan fails in one of these modules, you must replace the module.
The fan module LED indicates the fan tray health. Green indicates normal operation, while
amber indicates a fan failure.
The Cisco Nexus 5596UP Switch is a 2-RU, 96-port Layer 3-capable switch that provides an
Ethernet-based unified fabric. It delivers 1.92 Tb/s of nonblocking switching capacity with 48
fixed wire-speed 1 and 10 Gigabit Ethernet ports. All of the 10 Gigabit Ethernet ports are
unified ports. In the future, 40 Gigabit Ethernet uplinks will be supported.
The switch has a single serial console port and a single OOB 10-, 100-, and 1000-Mb/s
Ethernet management port.
There are three expansion slots that support any GEM2.
The Cisco Nexus 5500 Platform switches can be used as a Layer 3 switch through the addition
of a Layer 3 hot-swappable GEM2 routing module. This addition enables the deployment of
Layer 3 services at the access layer.
The Cisco Nexus 5500 Platform switches start with the Cisco NX-OS Software Release
5.0(2)N1(1).
The Cisco Nexus 5500 Platform switches include the following features:
1-, 2-, 4-, and 8-Gigabit Fibre Channel switch, T11 FCoE
4096 VLANs (some are reserved by Cisco NX-OS in software) (507 on the Cisco Nexus
5010 or 5020)
Support for Layer 3 switching (with a future daughter card or GEM module)
Quality of service (QoS) and multicast enhancements (differentiated services code point
[DSCP] marking, more multicast queues, and so on)
Hardware support for IEEE 1588 (Precision Time Protocol [PTP], microsecond accuracy,
and time stamp)
Unified ports on all ports of the Cisco Nexus 5596UP and 5548UP Switches and via an
expansion module for the Cisco Nexus 5548P Switch
48 port channels on the Cisco Nexus 5000 Series Switches, plus 384 Cisco Nexus 2000 Series Fabric Extender port channels and 768 vPCs
The figure shows the N55-M8P8FP GEM2, which provides 8 SFP+ 1 and 10 Gigabit Ethernet ports and 8 SFP ports of 1/2/4/8-Gb/s Fibre Channel.
The N55-M8P8FP GEM provides eight 1 or 10 Gigabit Ethernet and FCoE ports using the
SFP+ interface. The module also provides eight ports of 8-, 4-, 2-, or 1-Gb/s native Fibre
Channel connectivity using the SFP interface.
The figure shows the N55-M16P GEM2, which provides 16 SFP+ Ethernet ports that are hardware capable of 1 and 10 Gigabit Ethernet.
The N55-M16P GEM provides 16 ports of 1 and 10 Gb/s Ethernet using SFP+ transceivers.
The N55-M16UP GEM provides 16 unified ports. A unified port is a single port that can be
configured as either an Ethernet or a native Fibre Channel port. In Ethernet operation, it
functions as a 1 or 10-Gb/s port, or in Fibre Channel operations as 8/4/2/1-Gb/s.
The N55-M16UP has the following characteristics:
16 unified ports
Uses existing Ethernet SFP+ and Cisco 8/4/2-Gb/s and 4/2/1Gb/s Fibre Channel optics
The unified port expansion module may be installed in any of the Cisco Nexus 5500 Platform
Switches chassis.
The figure shows the Layer 3 daughter card.
The N55-D160L3 is the Layer 3 daughter card for the Cisco Nexus 5548 Switch. Layer 3
support is enabled via a field-upgradable routing card that may be installed while the switch
remains mounted in the rack.
It is recommended to power down the switch before installing the Layer 3 daughter card.
The N55-D160L3 is not a GEM. It is a field-upgradable card that is a component of the management complex of the switch. The upgrade to Layer 3 services involves unscrewing and removing the existing I/O module (the management port complex) and then inserting the Layer 3 I/O daughter card and fastening its screws.
Note
If two fan modules are removed, a major alarm will be generated. The system starts a 120-second shutdown timer. If the modules are reinserted within 120 seconds, the major alarm will be cleared and the shutdown timer will be stopped.
It is recommended to power down the switch before installing the Layer 3 module. This
procedure affects service.
The Layer 3 GEM2 (N55-M160L3) provides 160 Gb/s of Layer 3 services to the Cisco Nexus
5596UP Switch. This expansion module is a field-replaceable unit (FRU).
Each Layer 3 expansion module provides 160 Gb/s of Layer 3 services to the chassis, 10 Gb/s across any 16 ports, as configured by the administrator. The Cisco Nexus 5596UP
chassis has three available expansion slots; however, only one Layer 3 module is supported at
the initial release.
Each Layer 3 module that is installed reduces the overall system scalability, primarily regarding
the number of FEXs that are supported per switch. Up to 24 FEXs are supported per Cisco
Nexus 5548P, 5548UP, 5596UP Switches, but this number is reduced to eight FEXs in Layer 3
configurations. This configuration correspondingly reduces the available server ports.
The table in the figure lists licenses for the Cisco Nexus 5000 Series Switches by product code and features:
- N5010-SSK9: FC/FCoE/FCoE NPV
- N5020-SSK9: FC/FCoE/FCoE NPV
- N5010-FNPV-SSK9: FCoE NPV
- N5020-FNPV-SSK9: FCoE NPV
- N5548-FNPV-SSK9: FCoE NPV
- N5596-FNPV-SSK9: FCoE NPV
- N55-8P-SSK9: FC/FCoE/FCoE NPV on any 8 ports on the Cisco Nexus 5548 or 5596 Switch
Licensing for the Cisco Nexus 5000 Platform switches is tied to the physical chassis serial
number. The Cisco Nexus 5500 Platform licensing is port-based. Both platforms allow a grace
period or trial licenses to be used, sometimes referred to as honor-based licensing.
The following terminology is often used when describing Cisco NX-OS Software licensing:
Licensed feature: Permission to use a particular feature through a license file, a hardware
object, or a legal contract. This permission is limited to the number of users, number of
instances, time span, and the implemented switch.
License enforcement: A mechanism that prevents a feature from being used without first
obtaining a license.
Node-locked license: A license that can only be used on a particular switch using the
unique host ID of the switch.
Host ID: A unique chassis serial number that is specific to each switch.
Proof of purchase: A document entitling its rightful owner to use licensed features on one
switch as described in that document. The proof of purchase document is also known as the
claim certificate.
Product Authorization Key (PAK): The PAK allows you to obtain a license key from one
of the sites that is listed in the proof of purchase document. After registering at the
specified website, you will receive your license key file and installation instructions
through email.
License key file: A switch-specific unique file that specifies the licensed features. Each file
contains digital signatures to prevent tampering and modification. License keys are
required to use a licensed feature. License keys are enforced within a specified time span.
Missing license: If the bootflash has been corrupted or a supervisor module has been
replaced after you have installed a license, that license shows as missing. The feature still
works, but the license count is inaccurate. You should reinstall the license as soon as
possible.
Incremental license: An additional licensed feature that was not in the initial license file.
License keys are incremental. If you purchase some features now and others later, the
license file and the software detect the sum of all features for the specified switch.
Evaluation license: A temporary license. Evaluation licenses are time-bound (valid for a
specified number of days) and are not tied to a host ID (switch serial number).
Grace period: The amount of time the features in a license package can continue
functioning without a license.
You can either obtain a factory-installed license (only applies to new switch orders) or perform
a manual license installation of the license (applies to existing switches in your network).
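For a manual installation, the workflow typically looks like this minimal sketch; the server address and license file name are hypothetical placeholders:
! Obtain the host ID (chassis serial number) used when registering the PAK
switch# show license host-id
! Copy the license file received by email to bootflash and install it
switch# copy scp://admin@192.0.2.10/n5k-storage.lic bootflash:
switch# install license bootflash:n5k-storage.lic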
The Base Services Package (N5000-AS) is included with the switch hardware at no additional
charge. It includes all available Ethernet and system features, except features that are explicitly
listed in the Basic Storage Services Package. The Basic Storage Services Package provides all
Fibre Channel functionality. That functionality includes FCoE, DCB, Fibre Channel SAN
services, Cisco N_Port Virtualizer (Cisco NPV), Fibre Channel Port Security, and Fabric
Binding.
Layer 3 support on the Cisco Nexus 5500 Platform switches requires additional field
upgradable hardware and a software license.
The Cisco Nexus 5000 Series Switches licensing is tied to the physical chassis serial number.
The Storage Protocol Services Package license is installed once per chassis on the Cisco Nexus
5000 Series Switches.
The licenses that are supported on the Cisco Nexus 5000 Series Switches are described in the
table in this figure and on the subsequent two figures.
The table in the figure continues the license list:
- N55-48P-SSK9: FC/FCoE/FCoE NPV on 48 ports on the Cisco Nexus 5500 Platform switches
- N55-BAS1K9: Layer 3 Base Services Package
- N55-LAN1K9: Layer 3 LAN Enterprise Services Package
This table is a continuation of the available licenses for the Cisco Nexus 5000 Series Switches.
The table in the figure lists additional licenses:
- N55-VMFEXK9: Cisco Virtual Machine Fabric Extender (VM-FEX)
- N5548-EL2-SSK9 and N5596-EL2-SSK9: Enhanced Layer 2 (Cisco FabricPath)
- DCNM SAN: DCNM-SAN-N5K-K9
- DCNM LAN: DCNM-L-NXACCK9
This table is a continuation of the available licenses for the Cisco Nexus 5000 Series Switches.
The Cisco Nexus 2000 Fabric Extenders serve as remote I/O modules
of a Cisco Nexus 5000 or 7000 Series switch:
- Managed and configured from the Cisco Nexus switch
Together, the Cisco Nexus switches and Cisco Nexus 2000 Fabric
Extenders combine benefits of ToR cabling with EoR management.
The figure shows fabric extenders deployed in Rack 1 through Rack N.
Cisco Nexus 2000 Series Fabric Extenders can be deployed together with Cisco Nexus 5000 or
Cisco Nexus 7000 Series Switches to create a data center network that combines the advantages
of a top-of-rack (ToR) design with the advantages of an end-of-row (EoR) design.
Dual redundant Cisco Nexus 2000 Series Fabric Extenders are placed at the top of each rack.
The uplink ports on the Cisco Nexus 2000 Series Fabric Extenders are connected to a Cisco
Nexus 5000 or Cisco Nexus 7000 Series switch that is installed in the EoR position. From a
cabling standpoint, this design is a ToR design. The cabling between the servers and the Cisco
Nexus 2000 Series Fabric Extenders is contained within the rack. Only a limited number of
cables need to be run between the racks to support the 10 Gigabit Ethernet connections between
the Cisco Nexus 2000 Series Fabric Extenders and the Cisco Nexus switches in the EoR
position.
From a network deployment standpoint, however, this design is an EoR design. The fabric
extenders (FEXs) act as remote I/O modules for the Cisco Nexus switches, which means that
the ports on the Cisco Nexus 2000 Series Fabric Extenders act as ports on the associated
switch. In the logical network topology, the FEXs disappear from the picture, and all servers
appear as directly connected to the Cisco Nexus switch. From a network operations perspective,
this design has the simplicity that is normally associated with EoR designs. All the
configuration tasks for this type of data center design are performed on the EoR switches.
There are no configuration or software maintenance tasks that are associated with the FEXs.
The figure shows the FEX deployment models: straight-through attachment to a Cisco Nexus 7000, 5000, or 5500 Series switch, and active-active FEX attachment using vPC to a pair of Cisco Nexus 5000 or 5500 Series switches.
There are three deployment models that are used to deploy Cisco Nexus 2000 Series Fabric Extenders together with the Cisco Nexus 5000 and Cisco Nexus 7000 Series Switches:
Straight-through, using static pinning: The FEX is connected to a single switch, and each downlink server port is statically pinned to one of the uplink (fabric) ports. If that uplink fails, the server ports that are pinned to it are brought down.
Straight-through, using dynamic pinning: This deployment model also uses the straight-through connection model between the FEXs and the switches. However, there is no static relation between the downlink server ports and the uplink ports. The ports between the FEX and the switch are bundled into a port channel, and traffic is distributed across the uplinks based on the port channel hashing mechanism.
Active-active FEX using vPC: In this deployment model, the FEX is dual-homed to two
Cisco Nexus switches. vPC is used on the link between the FEX and the pair of switches.
Traffic is forwarded between the FEX and the switches that are based on vPC forwarding
mechanisms.
Note
The Cisco Nexus 7000 Series Switches currently only support straight-through deployment
using dynamic pinning. Static pinning and active-active FEX are currently only supported on
the Cisco Nexus 5000 Series Switches.
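A minimal sketch of attaching a FEX in straight-through mode with dynamic pinning on a Cisco Nexus 5500 parent switch follows; the FEX number, interfaces, and port channel number are hypothetical:
switch(config)# feature fex
switch(config)# fex 100
switch(config-fex)# description Rack1-ToR
switch(config-fex)# exit
! Bundle the FEX uplinks into a port channel and mark it as a fabric link
switch(config)# interface ethernet 1/17-18
switch(config-if-range)# channel-group 100
switch(config-if-range)# exit
switch(config)# interface port-channel 100
switch(config-if)# switchport mode fex-fabric
switch(config-if)# fex associate 100
! The FEX host ports then appear as interface ethernet 100/1/x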
The table in the figure lists, for each parent switch, the FEX models that are supported (2224TP, 2248TP, and 2232PP), the number of FEXs that are supported (12, 24, or 32, depending on the parent platform and I/O module, including the N7K-M132XP-12, N7K-M132XP-12L, and N7K-F248XP-25), and the optics and transceivers that are supported for FEX connectivity:
- Passive CX-1 SFP+ (1/3/5 m); on the Cisco Nexus 7000, supported on the M132XP-12
- Active CX-1 SFP+ (7/10 m)
- SR SFP+ (MMF): OM3 300 m; OM1 26 m on the Cisco Nexus 7000
- LR SFP+ (SMF): 300 m (FCoE); up to 10 km on the Cisco Nexus 7000
- FET SFP+ (MMF): OM2 20 m and OM3 100 m; OM2 25 m on the Cisco Nexus 7000
- LRM SFP+: not supported
The table shows the model and number of the Cisco Nexus 2000 Series Fabric Extenders that
are supported by each parent switch model.
All Cisco products that support passive Twinax cables must support up to a 16 foot (5 m)
distance. Beyond that length, Cisco supports only active Twinax cables.
Software support for active 23- and 33-foot (7- and 10-m) Twinax cables came with Cisco NX-OS Software Release 4.2(1).
The parent switch and the connected FEXs form a virtual switch chassis.
Different models of FEXs can be connected to the same parent switch.
The figure shows a parent switch with FEX 1 through FEX 12 attached, which together are equivalent to a single modular chassis; up to 24 FEXs are supported.
The figure explains the virtualized switch chassis from both the physical and logical view.
The Cisco Nexus 7000 or 5000 Series Switches and the Cisco Nexus 2000 Series Fabric
Extenders that are connected to them combine to form a scalable virtual modular system, also
called a virtualized switch chassis.
Different models of Cisco Nexus 2000 Series Fabric Extenders can be connected to the same
parent switch. This connection type is similar to a physical modular chassis that may have
physical line cards of different types, which are located in different slots.
Physically, the parent switch is a separate device that is connected via uplink ports (fabric
extensions) down to each FEX. Logically, the FEX is connected into the parent switch as a
module. The fabric ports appear to the FEX the same way the universal feature card ASIC
would connect to a GEM via the unified port control.
The scalable virtual modular system, also called a virtualized switch chassis, can contain up to
32 virtual I/O modules depending on the Cisco Nexus chassis that is being used.
Typically the FEX would be at the top of the rack for easy access to server connections and
reduced cabling runs. The fabric ports would be connected back to the parent switch that is
located at the middle or end of the row.
High availability can be achieved by linking two parent switches. This configuration extends
the concept of the virtualized switch chassis by logically linking two parent switches and
becomes a dual supervisor configuration with redundancy across all elements.
When two parent switches are linked, they logically combine to form one virtual switch. The
number of FEXs that are supported by the single virtual switch is the same as if there was one
single physical switch present.
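A minimal sketch of the vPC configuration that links the two parent switches follows; the domain ID, keepalive addresses, and port channel numbers are hypothetical, and a mirrored configuration is applied on the peer switch:
switch(config)# feature vpc
switch(config)# vpc domain 10
switch(config-vpc-domain)# peer-keepalive destination 192.0.2.2 source 192.0.2.1
switch(config-vpc-domain)# exit
! Dedicated peer link between the two parent switches
switch(config)# interface port-channel 1
switch(config-if)# switchport mode trunk
switch(config-if)# vpc peer-link
switch(config-if)# exit
! A dual-homed (active-active) FEX uplink port channel adds a vPC number
! to the fex-fabric configuration shown earlier
switch(config)# interface port-channel 100
switch(config-if)# vpc 100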
High availability is provided at multiple levels within the virtualized switch chassis:
- Control plane: supervisor redundancy
- Data plane: forwarding ASIC redundancy and fabric ASIC redundancy
- Fabric: isolated and redundant paths
- Power supply and fan redundancy
The Cisco Nexus 22XX (2248TP GE, 2248TP-E, 2232PP 10GE, 2232TM 10GE, and 2224TP
GE) Fabric Extenders are stackable 1-RU chassis that are designed for rack mounting as a ToR
solution. The second generation of FEXs uses a new ASIC, with additional features. The
features include enhanced QoS with eight hardware queues, port channels, access control list
(ACL) classification, and SPAN sessions on host ports. In addition, host ports can be
configured as part of a vPC.
The fabric interfaces are fixed and reserved, marked in yellow on the chassis, as shown in the
figure.
The following features should be noted for the Cisco Nexus 2232 Fabric Extender:
The Cisco Nexus 2232PP 10GE Fabric Extender supports SFP and SFP+ connectivity on
all ports.
The Cisco Nexus 2232TM 10GE Fabric Extender supports RJ-45 connectivity with eight
SFP+ uplinks. (There is no FCoE support on this model.)
The Cisco Nexus 2232PP 10GE Fabric Extender can be connected to the Cisco Nexus 5000
or 7000 Series Switches.
The Cisco Nexus 2232TM 10GE Fabric Extender can be connected to the Cisco Nexus 5000 Series Switches.
The Cisco Nexus 2232TM 10GE Fabric Extender requires Cisco NX-OS Release 5.0(2)N2(1) for connectivity to the Cisco Nexus 5000 Series Switches.
Additional features of the Cisco Nexus 2248TP-E Fabric Extender are listed here:
Support for 48 100/1000BASE-T host-facing ports and four 10 Gigabit Ethernet fabric
interfaces
The Cisco Nexus B22 Series Blade Fabric Extenders are designed to simplify data center server
access architecture and operations in environments in which third-party blade servers are used.
The Cisco Nexus B22 Series Blade Fabric Extenders behave like remote line cards for a parent
Cisco Nexus switch, together forming a distributed modular system. This architecture
simplifies data center access operations and architecture. The architecture combines the
management simplicity of a single high-density access switch with the cabling simplicity of
integrated blade switches, and ToR access switches.
The Cisco Nexus B22 Series Blade Fabric Extenders provide the following benefits:
Highly scalable, consistent server access: Distributed modular system creates a scalable
server access environment with no reliance on STP, providing consistency between blade
and rack servers.
Simplified operations: One single point of management and policy enforcement using
upstream Cisco Nexus switches eases the commissioning and decommissioning of blades
through zero-touch installation and automatic configuration of FEXs.
Each member of the Cisco Nexus B22 Series Blade Fabric Extenders transparently integrates
into the I/O module slot of a third-party blade chassis, drawing both power and cooling from
the blade chassis itself.
The Cisco Nexus B22 Series Blade Fabric Extenders provide two types of ports: ports for blade
server attachment (host interfaces) and uplink ports (fabric interfaces). Fabric interfaces, which
are located on the front of the Cisco Nexus B22 Series Blade Fabric Extender module, are for
connectivity to the upstream parent Cisco Nexus switch. The figure is a picture of the Cisco
Nexus B22 Blade Fabric Extender for HP, showing its eight fabric interfaces.
The table in the figure compares the Cisco Nexus 2000 Series Fabric Extender models:
- 2148T: fabric ports 4 x 10G SFP+; fabric port channels 1 x 4 ports maximum, Layer 2/Layer 3 hash; host ports 48 x 1 Gb/s TP; host port channels not supported; no FCoE/DCB; does not support FET; 1 RU (1.72 x 17.3 x 20 in); 165 W
- 2224TP: fabric ports 2 x 10G SFP+; fabric port channels 1 x 2 ports maximum, Layer 2/3/4 hash; host ports 24 x 100 Mb/s or 1 Gb/s TP; host port channels 24 maximum (8); no FCoE/DCB; supports FET; 1 RU (1.72 x 17.3 x 17.7 in); 95 W
- 2248TP/TP-E: fabric ports 4 x 10G SFP+; fabric port channels 1 x 4 ports maximum, Layer 2/3/4 hash; host ports 48 x 100 Mb/s or 1 Gb/s TP; host port channels 24 maximum (8); no FCoE/DCB; supports FET; 1 RU (1.72 x 17.3 x 17.7 in); 110 W
- 2232PP/TM-10GE: fabric ports 8 x 10G SFP+; fabric port channels 1 x 8 ports maximum, Layer 2/3/4 hash; host ports 32 x SFP/SFP+; host port channels 16 maximum (8); FCoE/DCB supported; supports FET; 1 RU (1.72 x 17.3 x 17.7 in); 270 W
The table also lists the resulting host port scalability per parent switch: with 12 FEXs, 576, 288, 576, and 384 ports, respectively; with 24 FEXs, 1152, 576, 1152, and 768 ports; and with 8 FEXs, 384, 192, 384, and 256 ports.
The table compares the features of Cisco Nexus 2000 Series Fabric Extenders models.
The primary differences in these features lie in these three factors:
The ability to form host port channels and support the DCB protocol
Summary
This topic summarizes the key points that were discussed in this lesson.
The Cisco Nexus Family of products has been designed specifically for
the data center and has innovative features to support the data center
requirements.
There are three chassis models in the Cisco Nexus 7000 Series that are
designed to scale from 8.8- to 18.7-Tb/s throughput, providing the ability
to support large-scale deployments.
The Cisco Nexus 7000 Series Supervisor Module supports all three
models and has a main controller board and CMP to help ensure
continual management connectivity even during maintenance tasks.
To provide flexibility, the Cisco Nexus 7000 Series uses a licensing
methodology for features. This model enables customers to upgrade to
additional features without having to install additional software.
Each of the Cisco Nexus 7000 Series models supports up to five fabric
modules. The Fabric 1 module provides up to 230-Gb/s per slot while
the Fabric 2 module provides up to 550-Gb/s per slot if all five fabric
modules are installed.
There are several expansion modules available for the Cisco Nexus
5500 Platform switches, with the Cisco Nexus 5548 switch supporting
one expansion module and the Cisco Nexus 5596 switch supporting up
to three.
Similar to the Cisco Nexus 7000 Series, the Cisco Nexus 5000 Series
Switches support a licensing model for additional features.
The Cisco Nexus 2000 Series Fabric Extenders provide additional 1 or
10 Gb/s ports in the form of an external I/O module for either the Cisco
Nexus 5000 or 7000 Series Switches. All management and switching is
provided by the parent switch, allowing customers to increase port
counts without the increased management overheads.
The Cisco Nexus 2000 Fabric Extenders come in the form of 1 Gb/s
modules or 10 Gb/s modules.
Lesson 3
Objectives
Upon completing this lesson, you will be able to describe the features of Cisco MDS Fibre
Channel switches and their relationship to the layered design model. You will be able to meet
these objectives:
Describe the benefits to the data center of deploying the Cisco MDS 9000 Series Multilayer
Switches
Describe the capacities of the Cisco MDS 9500 Series Multilayer Directors chassis
Describe the capabilities of the Cisco MDS 9500 Series supervisor modules
Describe licensing options for the Cisco MDS 9000 Series Multilayer Switches
Describe the capabilities of the Cisco MDS 9000 Series switching modules
Describe the capabilities of the Cisco MDS 9500 Series power supply options
Describe the capabilities of the Cisco MDS 9100 Series Multilayer Fabric Switches
Describe the capabilities of the Cisco MDS 9222i Multiservice Modular Switch
The figure shows the Cisco MDS 9000 product family: the MDS 9124, MDS 9148, and MDS 9222i fabric and multiservice switches; the MDS 9506, MDS 9509, and MDS 9513 Multilayer Directors; the Supervisor-2A module; and modules such as the SSN-16 (with 4 IOA engines and 16 Gigabit Ethernet ports), the 4-port 10-Gb/s FC module, the 4/44-port 8-Gb/s FC I/O module, the 8-port 10-Gb/s FCoE module, the 24/48-port 8-Gb/s FC I/O modules, and the 32- and 48-port 8-Gb/s advanced FC I/O modules.
Multilayer switches are switching platforms with multiple layers of intelligent features, such as
the following:
Ultrahigh availability
Scalable architecture
Ease of management
Multiprotocol support
The Cisco MDS 9000 Family offers an industry-leading investment protection across a
comprehensive product line, offering a scalable architecture with highly available hardware and
software. The product line is based on the Cisco MDS 9000 Family operating system, and has a
comprehensive management platform in Cisco Fabric Manager. The Cisco MDS 9000 Family
offers various application I/O modules and a scalable architecture from an entry-level fabric
switch to a director-class system.
The product architecture is forward- and backward-compatible with I/O modules, offering 1-,
2-, 4-, 8-, and 10-Gb/s Fibre Channel connectivity.
The Cisco MDS 9000 10-Gbps 8-Port FCoE Module provides up to 88 ports of line rate
performance per chassis for converged network environments. The Fibre Channel over Ethernet
(FCoE) module provides features that bridge the gap between the traditional Fibre Channel
SAN and the evolution to an FCoE network implementation.
The Cisco MDS 9222i Multiservice Modular Switch uses the 18/4 architecture of the Cisco
MDS 9000 Family 18/4-Port Multiservice Module (DS-9304-18K9) line card. In addition, it
includes native support for Cisco MDS Storage Media Encryption (Cisco MDS SME).
The Cisco MDS 9148 Multilayer Fabric Switch is a new 8-Gb/s Fibre Channel switch providing 48 ports of 2-, 4-, and 8-Gb/s Fibre Channel. The base license supports 16-, 32-, and 48-port models, which can be expanded using the 8-port license.
The Cisco MDS 9513 Multilayer Director chassis only uses the Cisco MDS 9500 Series
Supervisor-2 Module or later. However, the initial Cisco MDS 9500 Series Supervisor-1
Module can be used in the Cisco MDS 9506 Multilayer Director and Cisco
MDS 9509 Multilayer Director. All Cisco MDS 9500 Series chassis support the Supervisor-2 and Supervisor-2A Modules.
Nonblocking architecture:
- Dual-redundant crossbar fabric
- VOQ to resolve head-of-line (HOL) blocking
High bandwidth:
- Up to 2.2-Tb/s aggregate internal bandwidth
- Up to 160 Gb/s switch-to-switch (16 ISLs per port channel)
Low latency:
- Less than 20 microseconds per hop (link-rate-dependent)
Multiprotocol:
- Fibre Channel, FCoE, FICON, FCIP, iSCSI
Scalable:
- Store-and-forward architecture
- Multiple VSAN support
Secure:
- Secure management access (SNMPv3 and RBAC)
- Port security (port and fabric binding)
- Data security (in transit and at rest)
Highly available:
- Redundant fabric modules, supervisors, clocks, fans, power supplies, and internal paths
These features include the following:
- Nonblocking architecture
- High bandwidth: up to 2.2-Tb/s aggregate internal bandwidth on the Cisco MDS 9513 Multilayer Director switch
- Low latency
- Multiprotocol: Fibre Channel, FCoE, fiber connectivity (FICON), Fibre Channel over IP (FCIP), and Internet Small Computer Systems Interface (iSCSI)
- Scalable: store-and-forward architecture; up to 239 switches per fabric and up to 1000 Virtual Storage Area Networks (VSANs) per switch
- Secure: data security in transit with Fibre Channel link encryption and IPsec
- Highly available: redundant fabric modules, supervisors, clocks, fans, power supply modules, and internal paths
- Fully redundant: no single point of failure
The figure compares the Cisco MDS 9500 Series chassis:
- MDS 9506: 7 RU; 6 chassis slots; 2 supervisors; 4 line cards; 192 Fibre Channel ports maximum
- MDS 9509: 14 RU; 9 chassis slots; 2 supervisors; 7 line cards; 336 Fibre Channel ports maximum
- MDS 9513: 14 RU; 13 chassis slots; 2 supervisors; 11 line cards
The Cisco MDS 9500 Series Multilayer Directors are enterprise-class multilayer director
switches. They provide high availability, multiprotocol support, advanced scalability, security,
nonblocking fabrics that are 10-Gb/s-ready, and a platform for storage management. The Cisco
MDS 9500 Series allows you to deploy high-performance SANs with a lower total cost of
ownership.
The Cisco MDS 9500 Series Multilayer Director Switches have a rich set of intelligent features
and hardware-based services.
The chassis for the Cisco MDS 9500 Series switches are available in three sizes: the Cisco MDS 9513 at 14 rack units (14 RU), the Cisco MDS 9509 at 14 RU, and the Cisco MDS 9506 at 7 RU.
The figure highlights the availability features of the chassis: dual supervisors with OOB management, dual system clocks, multiple fans, hot-swappable modules, environmental monitoring, line-card temperature sensors, dual crossbars, modular line cards, and dual power supplies.
The Cisco MDS 9000 switch high availability goal is to exceed five 9s, or 99.999 percent, of
uptime per year. This level of availability is equal to only 5 minutes of downtime per year and
requires physical hardware redundancy, software availability, and logical network availability.
Hardware availability, through redundancies and monitoring functions that are built into a Cisco MDS 9000 Series switch system, includes dual power supply modules, dual supervisors with out-of-band (OOB) management channels, dual fabric crossbars, dual system clocks, hot-swappable modules with power and cooling management, and environmental monitoring.
These necessary features provide a solid, reliable, director-class platform that is designed for
mission-critical applications.
Key characteristics of the Supervisor-2 and Supervisor-2A Modules include the following:
- High-performance, integrated crossbar
- Enhanced crossbar arbiter
- PowerPC management processor
- The Cisco MDS 9513 Multilayer Director requires the Supervisor-2 or later
- The Supervisor-2A introduces FCoE support
The Cisco MDS 9500 Series Supervisor-2 Module is an upgraded version of the Cisco MDS
9500 Series Supervisor-1 Module with additional flash memory, RAM and NVRAM memory,
and redundant BIOS. It can be used in any Cisco MDS 9500 Series Multilayer Directors switch.
When it is used in a Cisco MDS 9506 or 9509 Multilayer Director switch, the integral crossbar
is used. When it is used in the Cisco MDS 9513 Multilayer Director Switch, the integral
crossbar is bypassed, and the Cisco MDS 9513 Crossbar Switching Fabric Modules are used
instead.
The Supervisor-2 Module supports 1024 destination indexes providing up to 528 ports in a
Cisco MDS 9513 Multilayer Director switch when only generation-2 or higher modules are
used. If any generation-1 module is installed in the Cisco MDS 9513 Multilayer Director
switch, then only 252 ports can be used.
The Cisco MDS 9500 Series Supervisor-2A Module is designed to integrate multiprotocol
switching and routing, intelligent SAN services, and storage applications onto highly scalable
SAN switching platforms. The Supervisor-2A Module enables intelligent, resilient, scalable,
and secure high-performance multilayer SAN switching solutions. In addition to providing the
same capabilities as the Supervisor-2 Module, the Supervisor-2A Module supports deployment
of FCoE in the chassis of the Cisco MDS 9500 Series Multilayer Directors. The Cisco MDS
9000 Family lowers the total cost of ownership (TCO) for storage networking by combining
robust and flexible hardware architecture, multiple layers of network and storage intelligence,
and compatibility with all Cisco MDS 9000 Family switching modules. This powerful
combination helps organizations build highly available, scalable storage networks with
comprehensive security and unified management.
1-94
(Figure: in the Cisco MDS 9506 or MDS 9509, the internal crossbar of the supervisor is used; in the Cisco MDS 9513, the internal crossbar is disabled and the MDS 9513 crossbar switching fabric modules are used instead.)
The internal crossbar of the Supervisor-2 and -2A Modules provides backward-compatibility
and is used for switching in first-generation chassis, such as the Cisco MDS 9506 or 9509
Multilayer Director switch.
In the Cisco MDS 9513 Multilayer Director chassis, the Supervisor-2 internal crossbar is
disabled and the higher bandwidth external Cisco MDS 9513 Crossbar Switching Fabric
Modules are used instead.
1-95
Both Cisco MDS 9513 Crossbar Switching Fabric Modules are located at the rear of the chassis
and provide a total aggregate bandwidth of 2.2 Tb/s.
Each fabric module is connected to each of the line cards via dual redundant 24-Gb/s channels
making a total of 96 Gb/s per slot.
A single Cisco MDS 9513 Crossbar Switching Fabric Module can support full bandwidth on
all connected ports in a fully loaded Cisco MDS 9513 Multilayer Director switch without
blocking.
The arbiter schedules frames at over 1 billion frames per second (f/s), ensuring that blocking
will not occur even when the ports are fully utilized.
1-96
Required for Cisco MDS 9000 24- and 48-port 8-Gbps Fibre Channel Switching Modules
Note
The Fabric 2 module is not required for Cisco MDS 9000 4/44-Port 8-Gbps Host-Optimized
Fibre Channel Switching Module.
Requires switch reload during upgrade from Cisco MDS SAN-OS Software Release 3.x to
Cisco Nexus Operating System (Cisco NX-OS) Software Release 4.1 to support 24- and
48-port 8-Gbps Fibre Channel Switching Modules
1-97
Module-Based Licensing
Licensing allows access to specified premium features on the switch after you install the
appropriate licenses. Licenses are sold, supported, and enforced for all releases of the Cisco
NX-OS Software.
The licensing model that is defined for the Cisco MDS 9000 Series product line has two
options:
Feature-based licensing covers features that are applicable to the entire switch. The cost
varies based on per-switch usage.
Module-based licensing covers features that require additional hardware modules. The cost
varies based on per-module usage.
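As a brief illustration of how a license is normally applied (the server address and license filename below are placeholders, not values from this course), a license file is copied to bootflash, installed, and then verified:
! Copy the license file to the switch and install it
switch# copy scp://admin@10.1.1.10/MDS-license.lic bootflash:MDS-license.lic
switch# install license bootflash:MDS-license.lic
! Verify which licensed features are installed and which are in use or in a grace period
switch# show license usage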
1-98
Note
The FCIP license that is bundled with Cisco MDS 9222i switches enables FCIP on the two
fixed IP Services ports only. The features that are enabled on these ports by the bundled
license are identical to the features that are enabled by the FCIP license on the Cisco MDS
9000 14/2-port Multiprotocol Services Module. If you install a module with IP ports in the
empty slot on the Cisco MDS 9222i switch, you will need a separate FCIP license to enable
FCIP on the IP ports of the additional I/O module.
Note
Licensing on the Cisco MDS 9000 IP Storage Services Module (Cisco MDS 9000 16-Port
Storage Services Node [SSN-16]) has the following limitations:
Only one licensed feature can run on an SSN-16 engine at any time.
On a given SSN-16 module, you can mix and match the Cisco MDS Input/Output
Accelerator (MDS 9000 IOA) license and SAN extension over IP license on the four service
engines in any combination, for example, 4+0, 1+3, 2+2, 3+1, or 0+4.
The SSN-16 module does not support mix and match for the Cisco MDS SME license.
The Cisco NX-OS Software is the underlying system software that powers the award-winning
Cisco MDS 9000 Series Multilayer Switches. Cisco NX-OS is designed for SANs in the best
traditions of Cisco IOS Software to create a strategic SAN platform of superior reliability,
performance, scalability, and features.
In addition to providing all the features that the market expects of a storage network switch,
Cisco NX-OS provides many unique features that help the Cisco MDS 9000 Series deliver low
total cost of ownership (TCO) and a quick return on investment (ROI).
Enterprise package: Adds a set of advanced features that are recommended for all
enterprise SANs.
SAN Extension over IP package: Enables FCIP for IP storage services and allows the
customer to use the IP storage services to extend SANs over IP networks.
Mainframe package: Adds support for the fiber connectivity (FICON) protocol. FICON
VSAN support is provided to help ensure that there is true hardware-based separation of
FICON and open systems. Switch cascading, fabric binding, and intermixing are also
included in this package.
1-99
Data Center Network Manager for SAN package: Extends Cisco Fabric Manager by
providing historical performance monitoring for network traffic hotspot analysis,
centralized management services, and advanced application integration for greater
management efficiency.
Storage Services Enabler package: The Cisco MDS 9000 Storage Services Enabler (SSE)
Package enables network-hosted storage applications to run on the Cisco MDS 9000 SSN-16.
On-Demand Port Activation: Enables ports in bundles of eight as port requirements
expand.
Storage Media Encryption: Adds support for the Cisco MDS SME for a Cisco MDS 9000
SSN-16 engine or a Cisco MDS 9222i switch.
Data Mobility Manager: Adds support for the Cisco Data Mobility Manager (Cisco
DMM) feature on the Cisco MDS 9000 IP Storage Services Module (Cisco 9000 IP
Module) in a Cisco MDS 9000 Series switch.
Cisco IOA: Activates Cisco MDS 9000 IOA for the Cisco MDS 9000 SSN-16 module.
Extended Remote Copy (XRC): Enables support for FICON XRC acceleration on the
Cisco MDS 9222i switch.
Simple packaging:
Simple bundles for advanced features that provide significant value
All upgrades included in support pricing
High availability:
Nondisruptive installation
120-day grace period for enforcement*
Ease of use:
Electronic licenses:
- No separate software images for licensed features
- Licenses installed on switch at factory
- Automated license key installation
* Cisco TrustSec and the Port Activation Licenses do not have a grace period
The Cisco MDS 9000 24-Port 1/2/4/8-Gbps Fibre Channel Switching Module provides 8
Gb/s at 2:1 oversubscription and Full Rate (FR) bandwidth on each port at 1, 2, and 4 Gb/s.
The Cisco MDS 9000 48-Port 1/2/4/8-Gbps Fibre Channel Switching Module provides 8
Gb/s at 4:1 oversubscription, 4 Gb/s at 2:1 oversubscription and FR bandwidth at 1 and 2
Gb/s on each port.
The Cisco MDS 9000 4/44-Port 8-Gbps Host-Optimized Fibre Channel Switching Module
provides four ports at 8 Gb/s with the remaining 44 ports at 4 Gb/s.
1-101
The Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module delivers
line-rate performance across all ports and is ideal for high-end storage subsystems and for
ISL connectivity.
The Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
provides higher port density and is ideal for connection of high-performance virtualized
servers. With Arbitrated Local Switching enabled, this module supports 48 ports of line
rate 8 Gb/s and is perfect for deploying dense virtual machine (VM) clusters with locally
mapped storage. For traffic that is switched across the backplane, this module supports
1.5:1 oversubscription at 8-Gb/s Fibre Channel rate across all ports.
The 8-Gbps Advanced Fibre Channel Switching Modules are compatible with all Cisco MDS
9500 Series Multilayer Directors.
1-102
Cisco FlexSpeed: Cisco MDS 9000 Family 8-Gbps Advanced Fibre Channel Switching
Modules are equipped with Cisco FlexSpeed technology, which enables ports on the Cisco
MDS 9000 Family 8-Gbps Advanced Fibre Channel Switching Modules to be configured
as either 1/2/4/8-Gb/s or 10-Gb/s Fibre Channel interfaces. The 10-Gb/s interfaces enable
reduced cabling for ISLs because they provide a 50 percent higher data rate than 8-Gb/s
interfaces. With integrated Cisco TrustSec encryption, the 10-Gb/s links provide secure,
high-performance native Fibre Channel SAN Extension. Both 32- and 48-port modules
support up to 24 10-Gb/s Fibre Channel interfaces. These modules enable consolidation of
1/2/4/8-Gb/s and 10-Gb/s ports into the same Fibre Channel switching module, conserving
space on the Cisco MDS 9000 Series chassis.
Cisco Arbitrated Local Switching: Cisco MDS 9000 Family 8-Gbps Advanced Fibre
Channel Switching Modules provide line-rate switching across all the ports on the same
module without performance degradation or increased latency for traffic that is exchanged
with other modules in the chassis. This capability is achieved through Cisco MDS 9500
Series Multilayer Directors crossbar architecture with a central arbiter arbitrating fairly
between local traffic and traffic to and from other modules. Local switching can be enabled
on any Cisco MDS 9500 Series director-class chassis.
Integrated Hardware-Based VSANs and Inter-VSAN Routing (IVR): Cisco MDS 9000
Family 8-Gbps Advanced Fibre Channel Switching Modules enable deployment of large-scale consolidated SANs while maintaining security and isolation between applications.
Integration into port-level hardware allows any port or any VM in a system or fabric to be
partitioned into any VSAN. Integrated hardware-based IVR provides line-rate routing
between any ports in a system or fabric without the need for external routing appliances.
Resilient High-Performance ISLs: Cisco MDS 9000 Family 8-Gbps Advanced Fibre
Channel Switching Modules support high-performance ISLs consisting of 8- or 10-Gb/s
secure Fibre Channel. Advanced Fibre Channel switching modules also offer port channel
technology. These modules provide up to 16 links spanning any port on any module within
a chassis that is grouped into a logical link for added scalability and resilience. Up to 4095
buffer-to-buffer credits (BB_Credits) can be assigned to a single Fibre Channel port,
providing industry-leading extension of storage networks. Networks may be extended to
greater distances by tuning the BB_Credits to ensure that full link bandwidth is maintained.
Intelligent Fabric Services: Cisco MDS 9000 Family 8-Gbps Advanced Fibre Channel
Switching Modules provide integrated support for VSAN technology, Cisco TrustSec
encryption, access control lists (ACLs) for hardware-based intelligent frame processing,
and advanced traffic-management features to enable deployment of large-scale enterprise
storage networks. The 8-Gbps Advanced Fibre Channel Switching Modules provide Fibre
Channel Redirect (FC-Redirect) technology, which is a distributed flow redirection
mechanism that can enable redirection of a set of traffic flows to an intelligent Fabric
Service such as Cisco MDS 9000 IOA, Cisco DMM, and SME.
Advanced FICON Services: Cisco MDS 9000 Family 8-Gbps Advanced Fibre Channel
Switching Modules support 1/2/4/8-Gb/s FICON environments, including cascaded FICON
fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N-Port
ID Virtualization (NPIV) for mainframe Linux partitions. FICON Control Unit Port (CUP)
support enables in-band management of Cisco MDS 9000 Series switches from the
mainframe management console.
Comprehensive Security Framework: Cisco MDS 9000 Family 8-Gbps Advanced Fibre
Channel Switching Modules support RADIUS and TACACS+, Fibre Channel Security
Protocol (FC-SP), Secure FTP (SFTP), Secure Shell (SSH) Protocol, and Simple Network
Management Protocol Version 3 (SNMPv3) implementing Advanced Encryption Standard
(AES), VSANs, hardware-enforced zoning, ACLs, and per-VSAN role-based access
control (RBAC). Cisco TrustSec Fibre Channel Link Encryption that is implemented by the
8-Gbps Advanced Fibre Channel switching modules secures sensitive data within or across
data centers over high-performance 8- and 10-Gb/s Fibre Channel links.
Sophisticated Diagnostics: Cisco MDS 9000 Family 8-Gbps Advanced Fibre Channel
Switching Modules provide intelligent diagnostics, protocol decoding, and network
analysis tools as well as integrated Call Home capability for added reliability, faster
problem resolution, and reduced service costs.
1-103
The Cisco MDS 9000 16-Port Storage Services Node (SSN-16) hosts four independent service
engines.
Each of the service engines can be activated individually and incrementally to scale as business
requirements change or they can be configured to run separate applications.
Based on the single service engine originally in the Cisco MDS 9000 18/4-Port Multiservice
Module, this four-to-one consolidation delivers dramatic hardware savings and frees valuable
slots in the chassis of the Cisco MDS 9500 Multilayer Directors.
Supported applications include the following:
Metropolitan-area network (MAN) link optimization with Cisco MDS 9000 IOA
Redundant mode:
- Default mode
- Power capacity of the lower-capacity supply module
- Sufficient power is available in case of failure
Power supply modules are configured in redundant mode by default, but they can also be
configured in a combined, or nonredundant, mode:
In redundant mode, the chassis uses the power capacity of the lower-capacity power supply module so that sufficient power remains available if a single power supply module fails.
In combined mode, the chassis uses twice the power capacity of the lower-capacity power
supply modules. Sufficient power might not be available in a power supply failure in this
mode. If there is a power supply failure and the real power requirements for the chassis
exceed the power capacity of the remaining power supply modules, the entire system is
reset automatically. This reset will prevent permanent damage to the power supply module.
In both modes, power is reserved for the supervisor and fan assemblies. Each supervisor module has roughly 220 W in reserve, even if only one is installed, and the fan module has 210 W in reserve. If there is insufficient power after the supervisors and fans are powered, line card modules are given power from the top of the chassis down:
After the reboot, only those modules that have sufficient power are powered up.
If the real power requirements do not trigger an automatic reset, no module is powered
down. Instead, no new module is powered up.
In all cases of power supply failure or removal, a syslog message is printed, a Call Home
message is sent (if configured), and an SNMP trap is sent.
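As a rough sketch (the commands below reflect common Cisco MDS 9000 CLI usage; treat the exact syntax as an assumption to verify against the configuration guide), the power budget and redundancy mode can be checked and changed from the CLI:
! Display power supply capacity, the configured redundancy mode, and per-module power usage
switch# show environment power
! Change the redundancy mode; redundant is the default
switch# configure terminal
switch(config)# power redundancy-mode combined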
1-105
(Figure: incorrect and correct sequences for connecting the two AC inputs of each power supply module to the external power distribution units, Power Distribution Unit-A and Power Distribution Unit-B.)
The 6000-W AC power supply modules for the Cisco MDS 9513 Multilayer Director are
designed to provide output power for the modules and fans. Each power supply module has two
AC power connections and provides power as follows:
The figure illustrates incorrect and correct connection sequences to external power distribution
units. When the power supply modules are configured in redundant mode (default), the wrong
connection sequence can leave the switch with only 2900 W, which can result in the chassis
shutting down line card modules. This event would require manual configuration to set
combined mode, upon which 5800-W power capacity is available. There is no automatic
provision to configure combined mode if there is a loss of an external power source.
The correct connection sequence provides the complete 6000-W capacity if an external power
source fails.
1-106
MDS 9124:
24 line-rate 4-Gb/s Fibre
Channel ports
8-port base configuration
8-port incremental licensing
NPV and NPIV support
MDS 9148:
48 line-rate 8-Gb/s Fibre
Channel ports
16-, 32- or 48-port base
configuration
8-port incremental licensing
NPV and NPIV support
NPV = N-Port Virtualizer
1-107
The Cisco MDS 9148 Fabric Switch supports quick-configuration, zero-touch, immediately active (plug-and-play) features.
easily in networks of any size. Powered by Cisco NX-OS Software, it includes advanced
storage networking features and functions and is compatible with Cisco MDS 9500 Series
Multilayer Directors and Cisco MDS 9200 Series Multilayer Switches, providing transparent,
end-to-end service delivery in core-edge deployments.
1-108
1x expansion slot
18 Fibre Channel ports at 4 Gb/s
4 Gigabit Ethernet ports for FCIP and iSCSI
Supports Cisco SME, IOA, SANTap, and DMM
The Cisco MDS 9222i Multiservice Modular Switch delivers multiprotocol and distributed
multiservice convergence, offering high-performance SAN extension and disaster recovery
solutions. In addition, it supports intelligent Fabric Services such as Cisco MDS Storage Media
Encryption (Cisco MDS SME) and cost-effective multiprotocol connectivity. The Cisco MDS
9222i switch has a compact form factor, and the modularity of the expansion slot supports
advanced capabilities normally only available on director-class switches. The Cisco MDS
9222i switch is an ideal solution for departmental and remote branch-office SANs requiring the
features that are present in a director, but at a lower cost of entry.
Product Highlights:
High-density Fibre Channel switch, which scales up to 66 Fibre Channel ports
Integrated hardware-based virtual fabric isolation with VSANs and Fibre Channel Routing
with IVR
Remote SAN extension with high-performance FCIP
Long distance over Fibre Channel with extended buffer-to-buffer credits
Multiprotocol and Mainframe support (Fibre Channel, FCIP, iSCSI, and FICON)
IPv6 capable
Platform for intelligent fabric applications such as Cisco MDS SME
Cisco In-Service Software Upgrade (ISSU)
Comprehensive network security framework
Provides hosting, assistance, and acceleration of storage applications such as volume management, data migration, data protection, and backup with the Cisco 9000 IP Module in a Cisco MDS 9000 Series switch
Supports Cisco SANTap and DMM
1-109
Summary
This topic summarizes the key points that were discussed in this lesson.
The Cisco MDS 9000 Series switch is designed for SAN environments in
the data center, providing a range of products to suit small- to large-scale deployments.
The Cisco MDS 9500 Series chassis includes three models: the 6-, 9-, and 13-slot chassis.
The Cisco MDS 9500 Series chassis is modular, and has two slots
available for supervisor engines. There are two current supervisor
modules, the Supervisor-2 and the Supervisor-2A. The Supervisor-2A is
required if FCoE support is to be enabled on the chassis.
To ensure flexibility when upgrading to additional features, the Cisco
MDS 9000 Series makes use of a licensing model. This licensing model
provides seamless nondisruptive upgrades to additional feature sets.
There are several modules that are supported in the Cisco MDS 9500
Series chassis and the Cisco MDS 9222i switch. These modules support
line rates up to 10 Gb/s depending on the module that is being
deployed.
The Cisco MDS 9000 Series uses redundant power supply modules.
There are several redundancy modes available, and customers should
be aware of the cabling requirements for the Cisco MDS 9513 Multilayer
Director chassis power supply modules.
The Cisco MDS 9100 Series of switches are fabric switches designed
for the access layer of the SAN infrastructure. The two models available
today are the Cisco MDS 9124 Fabric Switch and the Cisco MDS 9148
Fabric Switch. The Cisco MDS 9148 Fabric Switch provides 8 Gb/s
connectivity.
The Cisco MDS 9222i switch is a semimodular switch providing
connectivity at the access layer, small core, or for site-to-site
connectivity through the use of features such as FCIP. It has one fixed
module slot and one modular module slot.
1-110
Lesson 4
Objectives
Upon completing this lesson, you will be able to perform an initial configuration and validate
common features of the Cisco Nexus 7000 and 5000 Series Switches. You will be able to meet
these objectives:
Describe how to connect to the console port of the Cisco Nexus 7000 and 5000 Series
Switches
Describe how to run the initial setup script for the Cisco Nexus 7000 and 5000 Series
Switches
Describe how to connect to the CMP on the Cisco Nexus 7000 Series Switches
Describe how to use SSH to connect to the management VRF on the Cisco Nexus 7000 and
5000 Series Switches
Describe the ISSU capabilities of the Cisco Nexus 7000 and 5000 Series Switches
Verify VLANs on the Cisco Nexus 7000 and 5000 Series Switches
Describe the control, management, and data planes on the Cisco Nexus 7000 and 5000
Series Switches
Describe CoPP on the Cisco Nexus 7000 and 5000 Series Switches
Describe important CLI commands on the Cisco Nexus 7000 and 5000 Series Switches
9600 b/s
8 data bits
No parity
1 stop bit
No flow control
(Figure: a VT100 terminal connects to the console port on the supervisor module of the Nexus switch.)
Initial connectivity to a switch is via the console port. The console requires a rollover RJ-45
cable. The terminal setup steps are listed in this figure. After the switch has booted up, it
automatically goes into the default setup script the first time that the switch is started and
configured.
1-112
(Figure: initial setup flow. The device starts up, or the administrator enters the setup command. Answering No or pressing Ctrl-C at the setup script prompt displays the EXEC prompt; answering Yes configures the device, after which the administrator can edit the configuration and choose whether to save it.)
The Cisco Nexus Operating System (Cisco NX-OS) setup utility is an interactive CLI mode
that guides you through a basic (also called a startup) configuration of the system. The setup
utility allows you to configure enough connectivity for system management and to build an
initial configuration file using the system configuration dialog.
The setup utility is used mainly to configure the system initially when no configuration exists.
However, it can be used at any time for basic device configuration. Any configured values are
kept when you skip steps in the script. For example, if there is already a configured mgmt0
interface address, the setup utility does not change that value if you skip that step. However, if
there is a default value for any step, the setup utility changes the configuration using that
default and not the configured value.
Note
Be sure to configure the IP version 4 (IPv4) route, the default network IPv4 address, and the
default gateway IPv4 address to enable Simple Network Management Protocol (SNMP)
access.
1-113
Would you like to enter the basic configuration dialog (yes/no): yes
The admin user is the only user that the switch knows about initially. On all Cisco NX-OS
devices, strong passwords are the default. When configuring the password of the admin user,
you need to use a minimum of eight characters: alphabetic (uppercase and lowercase) and numeric.
Once the admin user password is configured, you are asked if you wish to enter the basic
system configuration dialog. Answering yes takes you through the basic setup.
1-114
This figure shows the initial configuration default questions. If a question has a default answer
in square brackets [xx], then the Enter key can be used to accept the default parameter.
Once the initial parameters have been entered, the switch provides the initial configuration for
you to verify. If it is correct, then there is no need to edit the configuration; you just need to
accept the Use this configuration and save it? question by pressing Enter to save the
configuration.
1-115
(Figure: OOB console connectivity through terminal servers and console cables compared with direct CMP connections to the OOB management network.)
The CMP provides out-of-band (OOB) management and monitoring capability independent
from the primary operating system. The CMP enables lights-out Remote Monitoring (RMON)
and management of the supervisor module, all other modules, and the Cisco Nexus 7000 Series
system without the need for separate terminal servers.
Key features of the CMP include the following:
Monitoring of supervisor status and initiation of resets: Removes the need for separate
terminal server devices for OOB management.
System reset while retaining OOB Ethernet connectivity: Complete visibility during the
entire boot process.
Access to supervisor logs: Access to critical log information enables rapid detection and
prevention of potential system problems.
Dedicated front-panel LEDs: CMP status is clearly identified separately from the
supervisor.
Exit the CMP console and return to the Cisco NX-OS CLI on the
control processor.
You can access the CMP from the active supervisor module. Before you begin, ensure that you
are in the default virtual device context (VDC).
When the control processor and CMP are both operational, you can log into the CMP through
the control processor using your Cisco NX-OS username and password or the admin username
and password. If the control processor is configured with RADIUS or TACACS, then your
authentication is also managed by RADIUS or TACACS. If the control processor is
operational, the CMP accepts logins from users with network-admin privileges. The CMPs use
the same authentication mechanism to configure the control processor (that is, RADIUS,
TACACS, or local). The control processor automatically synchronizes the admin password
with the active and standby CMP. You are then able to use the admin username and password
when a control processor is not operational.
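A minimal sketch of a CMP session follows; the prompt name is illustrative, and the exact behavior can vary by software release:
! From the default VDC on the active supervisor, attach to the CMP console
switch# attach cmp
! Log in with your Cisco NX-OS or admin credentials; when finished, exit the CMP
! console to return to the Cisco NX-OS CLI on the control processor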
1-117
The Secure Shell (SSH) Protocol server feature enables an SSH client to make a secure,
encrypted connection to a Cisco Nexus Series switch. SSH uses strong encryption for
authentication. The SSH server in the Cisco Nexus Series switch interoperates with publicly
and commercially available SSH clients.
By default, the SSH server is enabled on the Cisco Nexus Series switch. The Cisco Nexus
Series switch supports only SSH version 2 (SSHv2).
The Telnet Protocol enables a user at one site to establish a TCP/IP connection to a login server
at another site. However, all parameters and keystrokes are passed in cleartext over the
network, unlike SSH, which is encrypted. The Telnet server is disabled by default on the Cisco
Nexus Series Switch.
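These defaults can be confirmed, and Telnet optionally enabled, with a few commands; this is a generic sketch rather than a course exercise:
! Verify that the SSHv2 server is enabled and view the server key
switch# show ssh server
switch# show ssh key
! Telnet is disabled by default; enable it only if cleartext management access is acceptable
switch# configure terminal
switch(config)# feature telnet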
To reach a remote system, both Telnet and SSH can accept any of the following:
Device name that is resolved via the local host file or a distributed name server
(Figure: a Telnet or SSH session to 10.2.2.2 terminates on the control processor of the switch.)
To access the CMP by SSH or Telnet, you must enable those sessions on the CMP. (By default,
the SSH server session is enabled.)
1-119
VRF virtualizes the IP routing control and data plane functions inside a
router or Layer 3 switch.
VRFs are used to build Layer 3 VPNs
A VRF consists of the following:
- A subset of the router interfaces
- A routing table or RIB
- Associated forwarding data structures or FIB
- Associated routing protocol instances
To provide logical Layer 3 separation within a Layer 3 switch or router, the data plane and
control plane functions of the device are segmented into different Layer 3 VPNs. This process
is similar to the way that a Layer 2 switch segments the Layer 2 control and data plane into
different VLANs.
The core concept in Layer 3 VPNs is a VRF instance. This instance consists of all the data
plane and control plane data structures and processes that together define the Layer 3 VPN.
A VRF includes the following components:
A subset of the Layer 3 interfaces on a router or Layer 3 switch: Similar to how Layer
2 ports are assigned to a particular VLAN on a Layer 2 switch, the Layer 3 interfaces of the
router are assigned to a VRF. Because the elementary component is a Layer 3 interface,
this component includes software interfaces, such as subinterfaces, tunnel interfaces,
loopback interfaces, and switch virtual interfaces (SVIs).
A routing table or Routing Information Base (RIB): Traffic between Layer 3 interfaces
that are in different VRFs should remain separated. Therefore, a separate routing table is
necessary for each VRF. The separate routing table ensures that traffic from an interface in
one VRF cannot be routed to an interface in a different VRF.
A Forwarding Information Base (FIB): The routing table or RIB is a control plane data
structure. From it, an associated FIB is calculated to be used in actual packet forwarding.
The FIB also needs to be separated by the VRF.
Routing protocol instances: To ensure control plane separation between the different Layer
3 VPNs, implement routing protocols on a per-VRF basis. To accomplish this task, you can
run an entirely separate process for the routing protocol in the VRF. Or you can use a
subprocess or routing protocol instance in a global process that is in charge of the routing
information exchange for the VRF.
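A minimal configuration sketch that ties these components together follows; the VRF name Blue, interface Ethernet1/5, and IP address are examples (and, on the Cisco Nexus 5500 Platform, Layer 3 interfaces additionally require the Layer 3 module and license):
! Create a user-defined VRF
switch(config)# vrf context Blue
switch(config-vrf)# exit
! Place a routed interface and its address into the VRF
switch(config)# interface ethernet 1/5
switch(config-if)# no switchport
switch(config-if)# vrf member Blue
switch(config-if)# ip address 192.168.10.1/24
! Verify the VRFs and the interface-to-VRF assignment
switch(config-if)# show vrf
switch(config-if)# show vrf interface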
(Figure: the management interface belongs to the management VRF; all other interfaces belong to the default VRF.)
There are two VRFs that are enabled on the Cisco Nexus Series switch. The two VRFs cannot
be deleted.
The management interface is permanently part of the management VRF. All remaining
interfaces are, by default, part of the default VRF. The VRFs allow the management traffic to
be logically isolated from the traffic on the switch ports.
1-121
It is important to verify reachability to any Layer 3 interfaces that are configured on the Cisco
Nexus Series switch. Use the ping command to ensure that the address of the default gateway,
or any other device in the network, is reachable.
Unless otherwise specified, the ping and all other Layer 3 traffic originate from the default
VRF. Therefore, it is important to specify which VRF is to be used.
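For example (the gateway address is a placeholder), a ping sourced from the management VRF looks like this:
! Ping the default gateway of the OOB management network from the management VRF
switch# ping 10.1.1.254 vrf management
! The same ping without the vrf keyword is sourced from the default VRF
switch# ping 10.1.1.254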
The figure shows how to initiate an SSH session from the local switch.
1-122
Cisco NX-OS Software supports ISSU from Release 4.2.1 and higher
An ISSU is initiated manually either through the CLI by an administrator or via the
management interface of the Cisco Data Center Network Manager software platform. When an
ISSU is initiated, it updates the following components on the system as needed:
Kickstart image
Supervisor BIOS
System image
Following data center network design best practices should help ensure switch-level
redundancy and continuous service to the attached servers. Such practices include dual-homing
servers for redundancy with redundant switches at the access layer.
Once initiated, the ISSU installer service begins the ISSU cycle. The upgrade process is
composed of several phased stages that are designed to minimize overall system impact with no
impact to data traffic forwarding.
1-123
(Figure: parent switches with attached fabric extenders.)
When an ISSU is performed on the parent switch, the Cisco NX-OS Software automatically
downloads and upgrades each of the attached Cisco Nexus 2000 Series Fabric Extenders. This
process results in each Cisco Nexus 2000 Series Fabric Extender being upgraded to the Cisco
NX-OS Software release of the parent switch after the parent switch ISSU completes.
The Cisco Nexus 5000 Series Switches have only a single supervisor. The ISSU should be
hitless for the switch, its modules, and attached FEXs while the ISSU upgrades the system,
kickstart, and BIOS images.
During the ISSU, control plane functions of the switch that is undergoing ISSU are temporarily
suspended and configuration changes are disallowed. The control plane is brought online again
within 80 seconds to allow protocol communications to resume.
In FEX virtual port channel (vPC) configurations, the primary switch is responsible for
upgrading the FEX. The peer switch is responsible for holding onto its state until the ISSU
process completes. When one peer switch is undergoing an ISSU, the other peer switch locks
the configuration until the ISSU is completed.
When a Layer 3 license is installed, the switches in the Cisco Nexus 5500 Platform do not
support an ISSU.
Hot-swapping a Layer 3 module in the Cisco Nexus 5000 Series switch is not supported during
an ISSU operation.
Note
Additionally, always refer to the latest release notes for ISSU guidelines.
1-124
After you acquire, download, and copy the new Cisco NX-OS Software files to your upgrade
location, which is typically the bootflash, issue the show install all impact command.
This command allows the administrator to determine which components of the system will be
affected by the upgrade before performing the upgrade of the Cisco NX-OS Software.
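A typical sequence, sketched here with the image filenames that appear in the next figure (paths and versions will differ in practice), is to check the impact first and then run the same command pair with install all:
! Report which components will be upgraded and whether the upgrade is disruptive
switch# show install all impact kickstart bootflash:n5000-uk9-kickstart.5.1.3.N1.1.bin system bootflash:n5000-uk9.5.1.3.N1.1.bin
! Start the ISSU using the same kickstart and system images
switch# install all kickstart bootflash:n5000-uk9-kickstart.5.1.3.N1.1.bin system bootflash:n5000-uk9.5.1.3.N1.1.bin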
1-125
N5K-A# dir
       3246    Jan 31 02:17:32 2009  duck1
       3341    Feb 02 14:53:50 2009  duck2
        503    Jan 05 08:18:12 2012  license_SSI154207FW_9.lic
       3457    Feb 04 20:06:42 2009  mts.log
   34342400    Jan 05 08:11:30 2012  n5000-uk9-kickstart.5.1.3.N1.1.bin
  147395647    Jan 05 08:12:34 2012  n5000-uk9.5.1.3.N1.1.bin
       4096    Jan 01 08:15:26 2009  vdc_2/
       4096    Jan 01 08:15:26 2009  vdc_3/
       4096    Jan 01 08:15:26 2009  vdc_4/
Use the dir command to verify that the required Cisco NX-OS files are present on the bootflash
of the switch to be upgraded (or downgraded). Note that each Cisco NX-OS Software version
has two separate files, a kickstart file and a system file, that must have the same version.
The kickstart image is contained in an independent file. The system image, BIOS, and code for
the modules, including the FEX, are all contained in a single system file.
There can be multiple versions of Cisco NX-OS Software on the bootflash in addition to other
files.
Note
By default, the kickstart image contains the word kickstart in the filename. The system file
does not contain the word system.
1-126
(Figure: old and new Cisco NX-OS image versions on the bootflash.)
Before you perform a Cisco Nexus ISSU, verify that kickstart and system images are in the
bootflash of both supervisor modules.
1-127
(Figure: active and standby supervisors, each running Release 5.1.3 with an HA manager, a Linux kernel, and protocol processes such as OSPF, BGP, and PIM.)
In a Cisco Nexus 7000 Series chassis with dual supervisors, you can use the Cisco NX-OS
ISSU feature to upgrade the system software while the system continues to forward traffic. A
Cisco NX-OS ISSU uses the existing features of Cisco Nonstop Forwarding (Cisco NSF) with
Stateful Switchover (SSO) to perform the software upgrade with no system downtime.
When an ISSU is initiated, the Cisco NX-OS ISSU updates (as needed) the following
components on the system:
It does not change any configuration settings or network connections during the upgrade.
Any changes in the network settings may cause a disruptive upgrade.
In some cases, the software upgrades may be disruptive. These exception scenarios can
occur under the following conditions:
1-128
Configuration mode is blocked during the Cisco ISSU to prevent any changes.
Verifying VLANs
This topic explains how to verify VLANs on the Cisco Nexus 7000 and 5000 Series Switches.
To view the configured VLANs, their operational states, and the associated interfaces, use the
show vlan brief command.
VLANs that have been administratively defined are listed in addition to the default VLAN. If
the VLAN was created successfully and not disabled for any reason, it shows a status of active.
All ports are associated to the default VLAN until they are defined by the administrator as
participating in another VLAN.
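As a simple illustration (VLAN 10, the name Web_Servers, and interface Ethernet1/5 are examples), a VLAN is created, a port is assigned to it, and the result is verified:
! Create and name the VLAN, then assign an access port to it
switch# configure terminal
switch(config)# vlan 10
switch(config-vlan)# name Web_Servers
switch(config-vlan)# exit
switch(config)# interface ethernet 1/5
switch(config-if)# switchport access vlan 10
! Confirm the VLAN status and its member ports
switch(config-if)# show vlan brief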
1-129
(Figure: descriptions of internally used VLANs, including Multicast, Online Diagnostic, ERSPAN, and Satellite, and their current allocation.)
A switch port belongs to a VLAN. Unicast, broadcast, and multicast packets are forwarded and
flooded only to end stations in that VLAN. Consider each VLAN to be a logical network.
Packets that are destined for a station that does not belong to the same VLAN must be
forwarded through a router.
This table describes the VLAN ranges.
VLAN Numbers               Range                  Use
1                          Normal                 Cisco default. You can use this VLAN, but you cannot modify or delete it.
2-1005                     Normal
1006-3967 and 4048-4093    Extended
All other VLAN numbers     Internally allocated
Cisco NX-OS Software provides isolation between control and data forwarding planes within
the device. This isolation means that a disruption within one plane does not disrupt the other.
1-131
The Cisco Nexus 5000 Series Switches use a scalable cut-through input queuing switching
architecture. The architecture is implemented primarily by two ASICs developed by Cisco:
A set of unified port controllers (UPCs) that perform data plane processing
The UPC ASIC manages all packet-processing operations on ingress and egress interfaces.
UPCs provide distributed packet forwarding capabilities. Four switch ports are managed by
each UPC.
A single-stage unified crossbar fabric (UCF) ASIC interconnects all UPCs. The UCF schedules and switches packets
from ingress to egress UPCs and is simply known as the fabric. All port-to-port traffic passes
through the UCF.
1-132
The figure represents the architectural diagram of the Cisco Nexus 5000 Series Switches data
plane. It describes a distributed forwarding architecture.
Each UPC manages four 10 Gigabit Ethernet ports and makes forwarding decisions for the
packets that are received on those ports. After a forwarding decision is made, the packets are
queued in virtual output queues (VOQs) where they wait to be granted access to the UCF.
Because of the cut-through characteristics of the architecture, packets are queued and dequeued
before the full packet contents have been received and buffered on the ingress port. The UCF is
responsible for coupling ingress UPCs to available egress UPCs. The UCF internally connects
each 10 Gigabit Ethernet, FCoE-capable interface through fabric interfaces running at 12 Gb/s.
This 20 percent overspeed helps ensure line-rate throughput regardless of the packet
manipulation that is performed in the ASICs.
The supported media types are Classic Ethernet, Fibre Channel, and FCoE.
On the ingress side, UPC manages the physical details of different media as it maps the
received packets to a unified internal packet format. The UPC then makes forwarding decisions
that are based on protocol-specific forwarding tables that are stored locally in the ASIC. On the
egress side, the UPC remaps the unified internal format to the format that is supported by the
egress medium and Layer 2 protocol and transmits the packet.
Each external-facing 10-Gb/s interface on a UPC can be wired to serve as two Fibre Channel
interfaces at 1, 2, and 4 Gb/s for an expansion module. Therefore, a single UPC can connect up
to eight Fibre Channel interfaces through expansion modules.
1-133
(Figure: control plane architecture of the Cisco Nexus 5000 Series Switches, showing the 1.66-GHz Intel LV Xeon CPU, South Bridge, DRAM, NVRAM, flash, serial console, PCIe bus, NICs, SERDES, the mgmt 0 port, the UCF, multiple UPCs with their SFP ports, and an expansion module.)
The figure represents the architectural diagram of the Cisco Nexus 5000 Series Switches
control plane.
In the control plane, the Cisco Nexus 5000 Series Switch runs the Cisco NX-OS Software on a
single-core 1.66-GHz Intel LV Xeon CPU with 2 GB of DRAM. The supervisor complex is
connected to the data plane in-band through two internal ports running 1-Gb/s Ethernet.
The system can be managed in-band, through the out-of-band 10-, 100-, and 1000-Mb/s management port, or via the serial console port.
The control plane is responsible for managing all control traffic. Data frames bypass the control
plane and are managed by the UCF and the UPCs respectively. Bridge protocol data units
(BPDUs), fabric login (FLOGI) frames and other control protocol-related frames are managed
by the control plane supervisor. Additionally, the control plane is responsible for running the
Cisco NX-OS Software.
The control plane consists of the following:
Serializer/deserializer (SERDES)
Intel South Bridge ASIC that controls memory, flash, and serial console access
Memory
2-MB BIOS
2-MB of NVRAM
(Figure: CoPP on the supervisor. Layer 2 and Layer 3 protocol traffic, for example VLAN, PVLAN, UDLD, CDP, STP, 802.1X, LACP, 802.1AE, IGMP, OSPF, GLBP, BGP, HSRP, EIGRP, PIM, and SNMP, reaches the supervisor through the control plane interface and the fabric; the forwarding engine on each I/O module applies control plane policing to determine which traffic is allowed.)
CoPP protects the control plane and separates it from the data plane.
The Cisco NX-OS device provides CoPP to prevent denial of service (DoS) attacks from
impacting performance. Such attacks, which can be perpetrated either inadvertently or
maliciously, typically involve high rates of traffic targeting the route processor itself.
The supervisor module divides the traffic that it manages into three functional components or
planes:
Control plane: Manages all routing protocol control traffic. These packets are destined to
router addresses and are called control plane packets.
Management plane: Runs the components that are meant for Cisco NX-OS device
management purposes such as the CLI and SNMP.
The supervisor module has both the management plane and control plane and is critical to the
operation of the network. Any disruption or attacks to the supervisor module may result in
serious network outages.
1-135
The level of protection can be set by choosing one of the CoPP policy
options from the initial setup.
- Strict, moderate, lenient, or none
Strict: This policy is 1 rate and 2 color and has a committed burst size (Bc) value of 250
microsec (except for the important class, which has a Bc value of 1000 microsec).
Moderate: This policy is 1 rate and 2 color and has a Bc value of 310 microsec (except for
the important class, which has a Bc value of 1250 microsec). These values are 25 percent
greater than the strict policy.
Lenient: This policy is 1 rate and 2 color and has a Bc value of 375 microsec (except for
the important class, which has a Bc value of 1500 microsec). These values are 50 percent
greater than the strict policy.
The Cisco NX-OS device hardware performs CoPP on a per-forwarding-engine basis. CoPP
does not support distributed policing. Therefore, you should choose rates so that the aggregate
traffic does not overwhelm the supervisor module.
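Two commonly used verification commands, shown here as a generic sketch, display the applied CoPP policy and its per-class statistics:
! Display which CoPP policy is currently attached to the control plane
switch# show copp status
! Display per-class policing statistics, including conformed and violated traffic
switch# show policy-map interface control-plane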
1-136
(Figure: CLI help output listing interface types such as port-channel, san-port-channel, vfc, and vlan.)
For CLI help, pressing the ? key displays full parser help with accompanying descriptions.
Pressing the Tab key displays a brief list of the commands that are available at the current
branch, but there are no accompanying descriptions.
1-137
N5K-A# conf
Enter configuration commands, one per line. End with CNTL/Z.
N5K-A(config)# vrf context ?
  WORD                    VRF name (Max Size 32)
  management (no abbrev)  Configurable VRF name
N5K-A(config)# vrf context man<tab>
N5K-A(config)# vrf context management
N5K-A(config-vrf)#
Pressing the Tab key provides command completion to a unique but partially typed command
string. In the Cisco NX-OS Software, command completion may also be used to complete a
system-defined name and, in some cases, user-defined names.
In the figure, the Tab key is used to complete the name of a VRF on the command line.
1-138
The Ctrl-Z key combination exits configuration mode but retains the user
input on the command line.
N5K-A(config-if)# show <CTRL-Z>
N5K-A# show
The where command displays the current context and login credentials.
N5K-A(config-if)# where
conf; interface Ethernet1/19
admin@N5K-A
N5K-A(config-if)#
The Ctrl-Z key combination lets you exit configuration mode but retains the user input on the
command line.
The where command displays the current configuration context and login credentials of the
user.
(Figure: partial output of show running-config ?, listing options to redirect the output to a file or append it to a file, display the current operating configuration with defaults, show the difference between the running and startup configurations, exclude the running configuration of specified features, hide the configuration of offline pre-provisioned interfaces, expand port profiles, and display per-feature configuration such as aaa, aclmgr, adjmgr, arp, callhome, cdp, certificates, cfs, copp, diagnostic, fcoe_mgr, icmpv6, and igmp.)
The show running-config command displays the current running configuration of the switch.
If you need to view a specific feature's portion of the configuration, you can do so by running
the command with the feature specified.
1-139
diff
egrep
grep
head
human
last
less
no-more
section
Show lines that include the pattern as well as the subsequent lines
that are more indented than matching line
sort
Stream Sorter
tr
uniq
wc
xml
begin
count
end
exclude
include
--More--
The pipe parameter can be placed at the end of all CLI commands to qualify, redirect, or filter
the output of the command. Many advanced pipe options are available including the following:
Get regular expression (grep) and extended get regular expression (egrep)
less
no-more
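A few illustrative examples of pipe and redirect usage follow (the filename is a placeholder):
! Show only the lines that contain the word feature
N5K-A# show running-config | include feature
! Display long output without pausing at the --More-- prompt
N5K-A# show interface brief | no-more
! Redirect the command output to a file on bootflash
N5K-A# show running-config > bootflash:run-backup.txt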
1-140
ignore-case
invert-match
line-exp
line-number
next
prev
word-exp
(Figure: sample show spanning-tree output filtered with egrep, showing Priority 24577, Address 64a0.e743.03c2, Cost, and Port 129 (Ethernet1/1).)
The egrep parameter allows you to search for a word or expression and print a word count. It
also has other options.
In the figure, the show spanning-tree | egrep ignore-case next 4 rstp command is used. The
show spanning-tree command output is piped to egrep, where the qualifiers indicate to display
the lines that match the regular expression rstp, and to not be case sensitive. When a match is
found, the next 4 lines following the match of rstp are also to be displayed.
1-141
(Figure: help at the --More-- prompt, listing keys such as <return>, d or ctrl-D, q or Q or <interrupt>, b or ctrl-B, ', =, /<regular expression>, !<cmd> or :!<cmd>, ctrl-L to redraw the screen, :n, :p, and :f; defaults are shown in brackets.)
When a command output exceeds the length of the screen, the --More-- prompt is displayed.
When a --More-- prompt is presented, press h to see a list of available options.
To break out of the more command, press the letter Q.
1-142
The figure shows the output of the show running-config ipqos command. This command
displays only the portion of the running-config that relates to the ipqos configuration.
The figure shows the output of the show running-config ipqos all command. The addition of
the all parameter to this command now displays the portion of the running-config that relates to
the ipqos configuration. In addition, default values that are not normally visible in the
configuration appear.
1-143
The all parameter can be used on the show running-config command with or without another
parameter. For example, the show running-config all command would display the entire
running-configuration with all default values included.
(Figure: interface type completion lists before and after additional features are enabled, showing loopback, port-channel, fc, mgmt, san-port-channel, vfc, and, once the features are enabled, vlan.)
The following types of interfaces are supported via the main chassis of the switch:
Physical: 1 and 10 Gigabit Ethernet, and 1-, 2-, 4-, and 8 Gb Fibre Channel via expansion
modules. Additional interface types may be present depending on the FEX model that is
connected.
Logical: Ethernet and SAN port channels, SVI, virtual Fibre Channel (vFC), and tunnels.
Because of the feature command on the Cisco Nexus platform, interfaces, protocols, and their
related configuration contexts are not visible until the feature is enabled on the switch.
In the figure, the show running-config | include feature command is used to determine the
features that are currently enabled. In the default configuration, only feature telnet and feature
lldp are enabled. Using the interface command, followed by pressing the Tab key for help,
only ethernet, mgmt, and ethernet port channel interface types appear in the Cisco NX-OS
Software command output.
After you enable feature interface-vlan and repeat the interface command, the SVI interface type appears as a valid interface option. Similarly, enabling feature fcoe makes available all Fibre Channel interface types: Fibre Channel, SAN port channel, and vFC.
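A short sketch of this behavior follows (feature names as used on the Cisco Nexus 5000 Series; FCoE additionally requires capable hardware and licensing):
! List all features and their current state
N5K-A# show feature
! Enable SVIs and FCoE so that the related interface types become available
N5K-A# configure terminal
N5K-A(config)# feature interface-vlan
N5K-A(config)# feature fcoe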
1-144
--------------------------------------------------------------------------------
Ethernet        VLAN   Type Mode   Status  Reason                  Speed     Port
Interface                                                                    Ch #
--------------------------------------------------------------------------------
Eth2/1          1      eth  access  down   Link not connected      auto(D)
Eth2/2          1      eth  access  down   Link not connected      auto(D)
Eth2/3          1      eth  access  down   Link not connected      auto(D)
Eth2/4          1      eth  access  down   Link not connected      auto(D)
Eth2/5          1      eth  access  up     none                    10G(D)
Eth2/6          1      eth  access  down   Link not connected      auto(D)
Eth2/7          1      eth  access  up     none                    10G(D)
Eth2/8          1      eth  access  up     none                    10G(D)
Eth2/9          1      eth  access  down   SFP not inserted        auto(D)
Eth2/10         1      eth  access  down   SFP not inserted        auto(D)
Eth2/11         1      eth  access  up     none                    10G(D)
Eth2/12         1      eth  access  up     none                    10G(D)
Eth3/1          --     eth  routed  down   Administratively down   auto(S)
Eth3/3          --     eth  routed  down   Administratively down   auto(S)
Eth3/5          --     eth  routed  down   SFP not inserted        auto(S)
---more---
The show interface brief command lists the most relevant interface parameters for all
interfaces on a Cisco Nexus Series switch. For interfaces that are down, it also lists the reason
that the interface is down. Information about speed and rate mode (dedicated or shared) is also
displayed.
1-145
Management: Management
All Ethernet interfaces are named Ethernet. There is no differentiation in the naming
convention for the different speeds.
The show interface command displays the operational state of any interface, including the
reason why that interface might be down.
1-146
Interfaces on the Cisco Nexus Series switches have many default settings that are not visible in
the show running-config command. The default configuration parameters for any interface
may be displayed using the show running-config interface all command.
1-147
(Figure: sample show module output with columns Mod, Model, Status, Sw, Hw, MAC-Address(es), Serial-Num, and World-Wide-Name(s) (WWN). The example lists a 32-port module with status active * and an expansion module, model N55-DL2, with status ok; both run software 5.1(3)N1(1) at hardware revision 1.0. The MAC address ranges are 547f.ee5c.6ea8 to 547f.ee5c.6ec7 and 0000.0000.0000 to 0000.0000.000f, the serial numbers are FOC154849HZ and FOC15475JZJ, and the WWN range is 20:01:54:7f:ee:5c:6e:c0 to 20:04:54:7f:ee:5c:6e:c0.)
The show module command provides a hardware inventory of the interfaces and module that
are installed in the chassis. The main system is referred to as Module 1 on the Cisco Nexus
5000 Series Switch. Depending on the model of the switch, the number of ports that are
associated with this module will vary. This number is visible under the Module-Type column.
Any expansion modules are also listed. The number and type of expansion modules vary by
system. When FEXs are attached, each FEX is also considered an expansion module of the
main system.
Modules that are inserted into the expansion module slots will be numbered according to which
slot they are inserted in. In the figure, the 8-port Fibre Channel Module is installed in slot
number 2.
Fabric extender slot numbers are assigned by the administrator during FEX configuration.
1-148
(Figure: partial show module output in which four modules each run software 6.0(2) at hardware revisions 1.2, 1.3, 2.1, and 1.8; the remaining output is removed.)
The show module command on the Cisco Nexus 7000 Series Switches identifies which
modules are inserted and operational.
1-149
By default, the device outputs messages to terminal sessions and to a log file.
You can display or clear messages in the log file. Use the following commands:
The show logging last number-lines command displays the last number of lines in the
logging file.
The show logging logfile [start-time yyyy mmm dd hh:mm:ss] [end-time yyyy mmm dd
hh:mm:ss] command displays the messages in the log file that have a time stamp within
the span entered.
The clear logging logfile command clears the contents of the log file.
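For example (the timestamps are placeholders):
! Display the 20 most recent messages in the log file
switch# show logging last 20
! Display messages logged within a specific time window
switch# show logging logfile start-time 2011 Dec 20 19:00:00 end-time 2011 Dec 20 20:00:00
! Clear the contents of the log file
switch# clear logging logfile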
The most recent 100 system messages that are priority 0, 1, or 2 are logged into
NVRAM on the supervisor module.
After a switch reboots, these messages can be displayed.
Nexus-7010-PROD-A# show logging nvram
2011 Dec 20 19:40:09 N7K-1-pod1 %$ VDC-2 %$ %STP-2-ROOTGUARD_CONFIG_CHANGE: Root guard enabled on port Ethernet1/1.
2011 Dec 20 19:40:09 N7K-1-pod1 %$ VDC-2 %$ %STP-2-ROOTGUARD_CONFIG_CHANGE: Root guard enabled on port Ethernet1/3.
2011 Dec 20 19:42:56 N7K-1-pod1 %$ VDC-2 %$ %STP-2-ROOTGUARD_CONFIG_CHANGE: Root guard disabled on port Ethernet1/1.
2011 Dec 20 19:42:56 N7K-1-pod1 %$ VDC-2 %$ %STP-2-ROOTGUARD_CONFIG_CHANGE: Root guard disabled on port Ethernet1/3.
2011 Dec 21 10:10:00 N7K-1-pod1 %$ VDC-2 %$ %STP-2-VPC_PEERSWITCH_CONFIG_ENABLED: vPC peer-switch configuration is enabled. Please make sure to configure spanning tree "bridge" priority as per recommended guidelines to make vPC peer-switch operational.
---output removed---
To view the NVRAM logs, use the show logging nvram [last number-lines] command.
The clear logging nvram command clears the logged messages in NVRAM.
1-151
Summary
This topic summarizes the key points that were discussed in this lesson.
Initial connectivity to the Cisco Nexus Series switches is via the console
port.
At the console port, the administrator can perform an initial configuration
which includes network connectivity parameters so that a remote
connection can be created from a remote PC.
The Cisco Nexus 7000 Series Switches have an additional management
interface called the Connectivity Management Processor.
The Cisco Nexus Series switches put the management port into a
management VRF by default, with SSH enabled and Telnet disabled in
the default settings.
The Cisco Nexus Series switch provides ISSU capabilities for seamless
upgrades of the operating system.
The Cisco Nexus Series switches use VLANs for separating traffic at a
Layer 2 level.
1-152
Lesson 5
Objectives
Upon completing this lesson, you will be able to describe vPCs and Cisco FabricPath and how
to verify their operation on Cisco Nexus 7000 and 5000 Series Switches. You will be able to
meet these objectives:
Describe vPCs
Verify vPCs on the Cisco Nexus 7000 and 5000 Series Switches
Describe Cisco FabricPath on the Cisco Nexus 7000 Series Switches and Nexus 5500
Platform switches
Verify Cisco FabricPath on the Cisco Nexus 7000 Series Switches and 5500 Platform
switches
Without vPC:
- STP blocks redundant uplinks.
- VLAN-based load balancing is used.
- Loop resolution relies on STP.
- Protocol failure can cause a complete network meltdown.
With vPC:
- No blocked uplinks
- Lower oversubscription
- Hash-based EtherChannel load balancing
- Loop-free topology
Virtualization technologies such as VMware ESX Server and clustering solutions such as
Microsoft Cluster Service currently require Layer 2 Ethernet connectivity to function properly.
Use of virtualization technologies in data centers, as well as across data center locations, leads
organizations to shift from a highly-scalable Layer 3 network model to a highly-scalable Layer
2 model. This shift is causing changes in the technologies that are used to manage large Layer 2
network environments. These changes include migration away from STP as a primary loop
management technology toward new technologies such as vPC and Cisco FabricPath.
In early Layer 2 Ethernet network environments, it was necessary to develop protocol and
control mechanisms to limit the disastrous effects of a topology loop in the network. STP was
the primary solution to this problem by providing a loop detection and loop management
capability for Layer 2 Ethernet networks. This protocol has gone through a number of
enhancements and extensions. While STP scales to very large network environments, it still has
one suboptimal principle. This principle is that to break loops in a network, only one active
path is allowed from one device to another. This principle is true regardless of how many actual
connections might exist in the network.
An early enhancement to Layer 2 Ethernet networks was port-channel technology. This
enhancement meant that multiple links between two participating devices could use all the links
between the devices to forward traffic. Traffic is forwarded by using a load-balancing
algorithm that equally balances traffic across the available interswitch links (ISLs). At the same
time, the algorithm manages the loop problem by bundling the links as one logical link. This
logical construct keeps the remote device from forwarding broadcast and unicast frames back to
the logical link, breaking the loop that actually exists in the network. Port-channel technology
has another primary benefit. It can potentially manage a link loss in the bundle in less than a
second, with very little loss of traffic and no effect on the active spanning-tree topology.
The biggest limitation in classic port-channel communication is that the port channel operates
only between two devices. In large networks, the support of multiple devices together is often a
design requirement to provide some form of hardware failure alternate path. This alternate path
is often connected in a way that would cause a loop, limiting the benefits that are gained with
port channel technology to a single path. To address this limitation, the Cisco NX-OS Software
platform provides a technology called vPC. A pair of switches acting as a vPC peer endpoint
looks like a single logical entity to port-channel-attached devices. However, the two devices
that act as the logical port channel endpoint are still two separate devices. This environment
combines the benefits of hardware redundancy with the benefits of port channel loop
management. The other main benefit of migration to an all-port-channel-based loop
management mechanism is that link recovery is potentially much faster. STP can recover from
a link failure in approximately 6 seconds, while an all-port-channel-based solution has the
potential for failure recovery in less than a second.
(Figure: dual-sided vPC between the access and aggregation layers, with vPC domain 1 and vPC domain 2 and a maximum of 16 ports in the port channel.)
vPCs are supported on both the Cisco Nexus 7000 and Cisco Nexus 5000 Series Switches. The
benefits that are provided by the vPC technology apply to any Layer 2 switched domain.
Therefore, a vPC is commonly deployed in both the aggregation and access layers of the data
center.
A vPC can be used to create a loop-free logical topology between the access and aggregation
layer switches, which increases the bisectional bandwidth and improves network stability and
convergence. It can also be used between servers and the access layer switches to enable server
dual-homing with dual-active connections.
When the switches in the access and aggregation layers both support the vPC, a unique 16-way
port channel can be created between the two layers. This scenario is commonly referred to as a
dual-sided vPC. This design provides up to 160 Gb/s of bandwidth from a pair of access
switches to the aggregation layer.
Note
If Cisco Nexus 7000 Series switches with F1-series modules are used on both sides of a
dual-sided vPC, a 32-way port channel can be created to support up to 320 Gb/s of
bandwidth between the access and aggregation layers.
(Figure: vPC components, showing the vPC peers in a vPC domain, the vPC peer link carrying CFS, the vPC peer keepalive link across a Layer 3 cloud, vPC member ports, an orphan port with an orphan device, and a normal port channel.)
A pair of Cisco Nexus switches that uses the vPC appears to other network devices as a single
logical Layer 2 switch. However, the two switches remain two separately managed switches
with an independent management and control plane. The vPC architecture includes
modifications to the data plane of the switches to ensure optimal packet forwarding. It also
includes control plane components to exchange state information between the switches and
allow the two switches to appear as a single logical Layer 2 switch to downstream devices.
The vPC architecture consists of the following components:
vPC peers: The core of the vPC architecture is a pair of Cisco Nexus switches. This pair of
switches acts as a single logical switch, which allows other devices to connect to the two
chassis using a Multichassis EtherChannel (MEC).
vPC peer link: The vPC peer link is the most important connectivity element in the vPC
system. This link creates the illusion of a single control plane. It does so by forwarding
bridge protocol data units (BPDUs) and Link Aggregation Control Protocol (LACP)
packets to the primary vPC switch from the secondary vPC switch. The peer link is also
used to synchronize MAC address tables between the vPC peers and to synchronize
Internet Group Management Protocol (IGMP) entries for IGMP snooping. The peer link
provides the necessary transport for multicast traffic and for the traffic of orphaned ports. If
a vPC device is also a Layer 3 switch, the peer link also carries Hot Standby Router
Protocol (HSRP) packets.
Cisco Fabric Services: The Cisco Fabric Services protocol is a reliable messaging protocol
that is designed to support rapid stateful configuration message passing and
synchronization. The vPC peers use the Cisco Fabric Services protocol to synchronize data
plane information and implement necessary configuration checks. vPC peers must
synchronize the Layer 2 Forwarding table between the vPC peers. This way, if one vPC
peer learns a new MAC address, that MAC address is also programmed on the Layer 2
Forwarding (L2F) table of the other peer device. The Cisco Fabric Services protocol travels
on the peer link and does not require any configuration by the user. To help ensure that the
peer link communication for the Cisco Fabric Services over Ethernet (FSoE) protocol is
always available, spanning tree has been modified to keep the peer-link ports always
forwarding. The Cisco FSoE protocol is also used to perform compatibility checks to
validate the compatibility of vPC member ports to form the channel, to synchronize the
IGMP snooping status, to monitor the status of the vPC member ports, and to synchronize
the Address Resolution Protocol (ARP) table.
vPC peer keepalive link: The peer keepalive link is a logical link that often runs over an
out-of-band (OOB) network. It provides a Layer 3 communications path that is used as a
secondary test to determine whether the remote peer is operating properly. No data or
synchronization traffic is sent over the vPC peer keepalive link, just IP packets that indicate
that the originating switch is operating and running vPC. The peer keepalive status is used
to determine the status of the vPC peer when the vPC peer link goes down. In this scenario,
it helps the vPC switch determine whether the peer link itself has failed, or if the vPC peer
has failed entirely.
vPC: A vPC is a MEC, a Layer 2 port channel that spans the two vPC peer switches. The
downstream device that is connected on the vPC sees the vPC peer switches as a single
logical switch. The downstream device does not need to support the vPC itself. It connects
to the vPC peer switches using a regular port channel, which can either be statically
configured or negotiated through LACP.
vPC member port: This port is a port that is on one of the vPC peers that is a member of
one of the vPCs that is configured on the vPC peers.
vPC domain: The vPC domain includes both vPC peer devices, a vPC peer keepalive link,
a vPC peer link, and all port channels in the vPC domain that are connected to the
downstream devices. A numerical vPC domain ID identifies the vPC. You can have only
one vPC domain ID on each device.
Orphan device: The term orphan device refers to any device that is connected to a vPC
domain using regular links instead of connecting through a vPC.
Orphan port: The term orphan port refers to a switch port that is connected to an orphan
device. The term is also used for vPC ports whose members are all connected to a single
vPC peer. This situation can occur if a device that is connected to a vPC loses all its
connections to one of the vPC peers.
vPC technology has been designed to limit the use of the peer link specifically to switch
management traffic and the occasional traffic flow from a failed network port. The peer link
does not carry regular traffic for vPCs. This feature allows the vPC solution to scale, because
the bandwidth requirements for the peer link are not directly related to the total bandwidth
requirements of the vPCs.
The vPC peer link is designed to carry several types of traffic. To start with, it carries vPC
control traffic, such as Cisco FSoE, BPDUs, and LACP messages. In addition, it carries traffic
that needs to be flooded, such as broadcast, multicast, and unknown unicast traffic. It also
carries traffic for orphan ports.
The term orphan port is used for two types of ports. An orphan port refers to any Layer 2 port
on a vPC peer switch that does not participate in vPC. These ports use normal switch
forwarding rules. Traffic from these ports can use the vPC peer link as a transit link to reach
orphan devices that are connected to the other vPC peer switch. An orphan port can also refer
to a port that is a member of a vPC, but for which the peer switch has lost all the associated
vPC member ports. When a vPC peer switch loses all member ports for a specific vPC, it will
forward traffic that is destined for that vPC to the vPC peer link. In this special case, the vPC
peer switch will be allowed to forward the traffic that is received on the peer link to one of the
remaining active vPC member ports.
To implement the specific vPC forwarding behavior, it is necessary to synchronize the L2F
tables between the vPC peer switches through Cisco Fabric Services instead of depending on
the regular MAC address learning. Cisco Fabric Services-based MAC address learning only
applies to vPC ports and is not used for ports that are not in a vPC.
One of the most important forwarding rules for vPC is that a frame that enters the vPC peer
switch from the peer link cannot exit the switch from a vPC member port. This principle
prevents frames that are received on a vPC from being flooded back onto the same vPC by the
other peer switch. The exception to this rule is traffic that is destined for an orphaned vPC
member port.
Cisco Fabric Services is used as the primary control plane protocol for vPC. It performs several
functions:
vPC peers must synchronize the Layer 2 MAC address table between the vPC peers. For
example, one vPC peer learns a new MAC address on a vPC. That MAC address is also
programmed on the L2F table of the other peer device for that same vPC. This MAC
address learning mechanism replaces the regular switch MAC address learning mechanism
and prevents traffic from being forwarded across the vPC peer link unnecessarily.
Cisco Fabric Services is used to track vPC status on the peer. When all vPC member ports
on one of the vPC peer switches go down, Cisco Fabric Services is used to notify the vPC
peer switch that its ports have become orphan ports. All traffic that is received on the peer
link for that vPC should now be forwarded via the local vPC.
Starting from Cisco NX-OS Software Release 5.0(2) for the Cisco Nexus 5000 Series
Switches and Cisco NX-OS Software version 4.2(6) for the Cisco Nexus 7000 Series
Switches, Layer 3 vPC peers synchronize their respective ARP tables. This feature is
transparently enabled and helps ensure faster convergence time upon reload of a vPC
switch. When two switches are reconnected after a failure, they use Cisco Fabric Services
to perform bulk synchronization of the ARP table.
Between the pair of vPC peer switches, an election is held to determine a primary and
secondary vPC device. This election is nonpreemptive. The vPC primary or secondary role is
primarily a control plane role. This role determines which of the two switches will primarily be
responsible for the generation and processing of spanning-tree BPDUs for the vPCs.
Note
Starting from Cisco NX-OS Software Release 5.0(2) for the Cisco Nexus 5000 Series
Switches and Cisco NX-OS Software Release 4.2(6) for the Cisco Nexus 7000 Series
Switches, the vPC peer-switch option can be implemented. This option allows both the
primary and secondary to generate BPDUs for vPCs independently. The two switches will
use the same spanning-tree bridge ID to ensure that devices that are connected on a vPC
still see the vPC peers as a single logical switch.
Both switches actively participate in traffic forwarding for the vPCs. However, the primary and
secondary roles are also important in certain failure scenarios, most notably in a peer link
failure. When the vPC peer link fails, the vPC peer switches attempt to determine through the
peer keepalive mechanism if the peer switch is still operational. If it is operational, the
operational secondary switch suspends all vPC member ports. The secondary also shuts down
all switch virtual interfaces (SVIs) that are associated with any VLANs that are configured as
allowed VLANs for the vPC peer link.
For LACP and STP, the two vPC peer switches present themselves as a single logical switch to
devices that are connected on a vPC. For LACP, this appearance is accomplished by generating
the LACP system ID from a reserved pool of MAC addresses, which are combined with the
vPC domain ID. For STP, the behavior depends on the use of the peer-switch option. If the
peer-switch option is not used, the vPC primary is responsible for generating and processing
BPDUs and uses its own bridge ID for the BPDUs. The secondary switch relays BPDU
messages, but does not generate BPDUs itself for the vPCs. When the peer-switch option is
used, both the primary and secondary switches send and process BPDUs. However, they will
use the same bridge ID to present themselves as a single switch to devices that are connected on
a vPC.
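The peer-switch option is enabled under the vPC domain configuration. The following minimal sketch assumes a hypothetical domain ID and bridge priority; the same spanning-tree priority should be configured on both vPC peers so that they present one consistent bridge ID:
Nexus-7010-PROD-A(config)# vpc domain 103
Nexus-7010-PROD-A(config-vpc-domain)# peer-switch
Nexus-7010-PROD-A(config-vpc-domain)# exit
Nexus-7010-PROD-A(config)# spanning-tree vlan 1,113-114 priority 8192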
Only 10 Gigabit Ethernet ports can be used for the vPC peer link. It is recommended that
you use at least two 10 Gigabit Ethernet ports in dedicated mode on two different I/O
modules.
vPC is a per-VDC function on the Cisco Nexus 7000 Series Switches. vPCs can be
configured in multiple virtual device contexts (VDCs), but the configuration is entirely
independent. A separate vPC peer link and vPC peer keepalive link is required for each of
the VDCs. vPC domains cannot be stretched across multiple VDCs on the same switch, and
all ports for a given vPC must be in the same VDC.
A vPC domain by definition consists of a pair of switches that is identified by a shared vPC
domain ID. It is not possible to add more than two switches or VDCs to a vPC domain.
Only one vPC domain ID can be configured on a single switch or VDC. It is not possible for a
switch or VDC to participate in more than one vPC domain.
A vPC is a Layer 2 port channel. The vPC technology does not support the configuration of
Layer 3 port channels. Dynamic routing from the vPC peers to routers connected on a vPC is
not supported. It is recommended that routing adjacencies be established on separate routed
links.
Static routing to First Hop Routing Protocol (FHRP) addresses is supported. The FHRP
enhancements for a vPC enable routing to a virtual FHRP address across a vPC.
A vPC can be used as a Layer 2 link to establish a routing adjacency between two external
routers. The routing restrictions for the vPC only apply to routing adjacencies between the vPC
peer switches and routers that are connected on a vPC.
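To tie these elements together, the following is a minimal configuration sketch for one vPC peer. The interface numbers and peer keepalive addresses are hypothetical, and the domain and vPC numbers simply match the verification examples in the next topic; the second peer requires a mirrored configuration.
Nexus-7010-PROD-A(config)# feature vpc
Nexus-7010-PROD-A(config)# vpc domain 103
Nexus-7010-PROD-A(config-vpc-domain)# peer-keepalive destination 172.16.0.2 source 172.16.0.1 vrf management
Nexus-7010-PROD-A(config-vpc-domain)# exit
Nexus-7010-PROD-A(config)# interface port-channel 10
Nexus-7010-PROD-A(config-if)# switchport
Nexus-7010-PROD-A(config-if)# switchport mode trunk
Nexus-7010-PROD-A(config-if)# vpc peer-link
Nexus-7010-PROD-A(config-if)# interface port-channel 72
Nexus-7010-PROD-A(config-if)# switchport
Nexus-7010-PROD-A(config-if)# switchport mode trunk
Nexus-7010-PROD-A(config-if)# vpc 72
Member interfaces are then bound to each port channel with the channel-group command, and the downstream device connects with a regular port channel.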
Verifying vPCs
This topic explains how to verify vPCs on the Cisco Nexus 7000 and 5000 Series Switches.
(Slide: show vpc brief output showing vPC domain ID 103, peer status "peer adjacency formed ok," keepalive status "peer is alive," successful configuration and consistency checks, a vPC role of primary, and vPC port channel Po72 up with successful consistency checks for VLANs 1,113-114.)
Several commands can be used to verify the operation of vPC. The primary command to be
used in initial verification is the show vpc brief command. This command displays the
following operational information:
vPC domain ID
Peer-link status
Status of the individual vPCs that are configured on the switch (including the result of the
consistency checks)
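Related commands, listed below as a sketch, provide further detail on the role election and the peer keepalive link:
Nexus-7010-PROD-A# show vpc role
Nexus-7010-PROD-A# show vpc peer-keepalive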
(Slide: show vpc consistency-parameters global output, listing type 1 global parameters such as the STP mode [Rapid-PVST on both peers] and other STP global settings, with the local and peer values shown side by side and the allowed VLAN list 1,113-114.)
If the show vpc brief command displays failed consistency checks, you can use the show vpc
consistency-parameters command to find the specific parameters that caused the consistency
check to fail. The global option on this command allows you to verify the consistency of the
global parameters between the two peer switches. The vpc or interface option can be used to
verify consistency between the port channel configurations for vPC member ports.
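For example, the following invocations (the port channel number is illustrative) check the global parameters, the parameters of a specific vPC, and the parameters of a vPC member port channel:
Nexus-7010-PROD-A# show vpc consistency-parameters global
Nexus-7010-PROD-A# show vpc consistency-parameters vpc 72
Nexus-7010-PROD-A# show vpc consistency-parameters interface port-channel 72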
After you enable the vPC feature and configure the peer link on both vPC peer devices, Cisco
Fabric Services messages provide a copy of the configuration on the local vPC peer device
configuration to the remote vPC peer device. The system then determines whether any of the
crucial configuration parameters differ on the two devices.
(Slide: show vpc consistency-parameters output for a vPC member port channel, comparing local and peer values for parameters such as port channel mode [active], speed [10 Gb/s], duplex [full], port mode [trunk], native VLAN [1], MTU [1500], vPC card type, allowed VLANs [1,113-114], and local suspended VLANs; the values match on both peers.)
The configuration parameters in this section must be configured identically on both devices of
the vPC peer link or the vPC moves into suspend mode. The devices automatically check for
compatibility for some of these parameters on the vPC interfaces. The per-interface parameters
must be consistent per interface, and the global parameters must be consistent globally. The
following parameters are checked:
Trunk mode per channel (including native VLAN, VLANs that are allowed on trunk, and
the tagging of native VLAN traffic)
STP mode
STP global settings, including Bridge Assurance setting, port type setting, and loop guard
settings
STP interface settings, including port type setting, loop guard, and root guard
Cisco FabricPath
Cisco FabricPath is a feature that can be enabled on the Cisco Nexus switches to provide
routing functionality at a Layer 2 level in the data center. This topic describes the Cisco
FabricPath feature on the Cisco Nexus 7000 Series Switches and Nexus 5500 Platform
switches.
(Figure: switches S1, S2, and S3 interconnected by five logical links.)
To support the Layer 2 domain and high availability, switches are normally interconnected. STP
then runs on the switches to create a tree-like structure that is loop-free. To provide a loop-free
topology, spanning tree builds the tree and then blocks certain ports to ensure that traffic
cannot loop around the network endlessly.
This tree topology implies that certain links are unused, traffic does not necessarily take the
optimal path, and when a failure does occur, the convergence time is based around timers.
Cisco FabricPath is an innovative Cisco NX-OS feature that is designed to bring the stability
and performance of routing to Layer 2. It brings the benefits of Layer 3 routing to Layer 2
switched networks to build a highly resilient and scalable Layer 2 fabric.
Cisco FabricPath switching allows multipath networking at the Layer 2 level. The Cisco
FabricPath network still delivers packets on a best-effort basis (which is similar to the Classical
Ethernet network), but the Cisco FabricPath network can use multiple paths for Layer 2 traffic.
In a Cisco FabricPath network, you do not need to run the STP. Instead, you can use Cisco
FabricPath across data centers, some of which have only Layer 2 connectivity, with no need for
Layer 3 connectivity and IP configurations.
(Figure: MAC table on a Cisco FabricPath edge switch, with one entry pointing to local interface e1/1 and another pointing to remote switch s8 via e1/2.)
Externally, a fabric looks like a single switch, yet internally there is a protocol that adds fabric-side intelligence. This intelligence ties the elements of the Cisco FabricPath infrastructure together. The protocol provides the following:
A single address lookup at the ingress edge that identifies the exit port across the fabric
ECMP
Multipathing (up to 16 links active between any 2 devices)
Traffic is redistributed across the remaining links in case of failure, providing fast convergence
Because Equal-Cost Multipath (ECMP) can be used at the data plane, the network can use all
the links that are available between any two devices. The first-generation hardware supporting
Cisco FabricPath can perform 16-way ECMP, which, when combined with 16-port 10-Gb/s
port channels, represents a potential bandwidth of 2.56 Tb/s between switches.
(Figure: classic Ethernet switches exchange STP BPDUs, while the Cisco FabricPath domain runs FabricPath IS-IS instead of STP.)
With Cisco FabricPath, you use the Layer 2 Intermediate System-to-Intermediate System (IS-IS) protocol for a single control plane that functions for unicast, broadcast, and multicast packets. There is no need to run STP; it is a purely Layer 2 domain. This Cisco FabricPath Layer 2 IS-IS is a separate process from Layer 3 IS-IS.
IS-IS provides the following benefits:
Easily extensible: Using custom type, length, value (TLV) settings, IS-IS devices can exchange information about virtually anything.
Provides Shortest Path First (SPF) routing: Excellent topology building and reconvergence characteristics.
The per-port MAC address table only needs to learn the peers that are reached across the fabric.
A virtually unlimited number of hosts can be attached to the fabric.
(Figure: conversational MAC address learning across a FabricPath fabric connecting switches s3, s5, and s8, each with its own per-port MAC address table.)
With Cisco NX-OS Software Release 5.1(1) and the F-Series module, you can use
conversational MAC address learning. The MAC address learning is configurable, and can
either be conversational or traditional on a VLAN-by-VLAN basis.
Conversational MAC address learning means that each interface learns only those MAC
addresses for interested hosts, rather than all MAC addresses in the domain. Each interface
learns only those MAC addresses that are actively speaking with the interface. In effect,
conversational MAC address learning consists of a three-way handshake.
Conversational MAC address learning permits the scaling of the network beyond the limits of
individual switch MAC address tables.
All Cisco FabricPath VLANs use conversational MAC address learning.
Classical Ethernet VLANs use traditional MAC address learning by default. However, this
setting is configurable.
Cisco FabricPath interfaces:
- Interfaces connected to another Cisco FabricPath device
- Send/receive traffic with the Cisco FabricPath header
- No spanning tree
- No MAC learning
- Exchange topology information through a Layer 2 IS-IS adjacency
- Forwarding based on the Switch ID table
To interact with the Classic Ethernet network, you set VLANs to either Classic Ethernet or
Cisco FabricPath mode. The Classic Ethernet VLANs carry traffic from the Classic Ethernet
hosts to the Cisco FabricPath interfaces. The Cisco FabricPath VLANs carry traffic throughout
the Cisco FabricPath topology. Only the active Cisco FabricPath VLANs that are configured on
a switch are advertised as part of the topology in the Layer 2 IS-IS messages.
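A minimal configuration sketch is shown below. The VLAN, interface, and switch ID values are illustrative, the switch ID line is optional because an ID is normally assigned automatically, and on the Cisco Nexus 7000 Series the feature set must first be installed in the default VDC:
Nexus-7010-PROD-A(config)# install feature-set fabricpath
Nexus-7010-PROD-A(config)# feature-set fabricpath
Nexus-7010-PROD-A(config)# fabricpath switch-id 3
Nexus-7010-PROD-A(config)# vlan 10
Nexus-7010-PROD-A(config-vlan)# mode fabricpath
Nexus-7010-PROD-A(config-vlan)# exit
Nexus-7010-PROD-A(config)# interface ethernet 1/1
Nexus-7010-PROD-A(config-if)# switchport mode fabricpath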
The following interface modes carry traffic for the following types of VLANs:
Interfaces on the F Series modules that are configured as Cisco FabricPath interfaces can carry traffic only for Cisco FabricPath VLANs.
Interfaces on the F Series modules that are not configured as Cisco FabricPath interfaces carry traffic for both Cisco FabricPath VLANs and classic Ethernet VLANs.
Interfaces on the M Series modules carry traffic only for classic Ethernet VLANs.
To have a loop-free topology for the classic Ethernet and Cisco FabricPath hybrid network, the
Cisco FabricPath network automatically presents as a single bridge to all connected classic
Ethernet devices.
Other than configuring the STP priority on the Cisco FabricPath Layer 2 gateway switches, you
do not need to configure anything for the STP to work seamlessly with the Cisco FabricPath
network. Only connected classic Ethernet devices form a single STP domain. Those classic
Ethernet devices that are not interconnected form separate STP domains.
All classic Ethernet interfaces should be designated ports, which occurs automatically, or they
will be pruned from the active STP topology. If the system does prune any port, the system
returns a syslog message. The system clears the port again only when that port is no longer
receiving superior BPDUs.
The Cisco FabricPath Layer 2 gateway switch also propagates the topology change
notifications (TCNs) on all its classic Ethernet interfaces.
The Cisco FabricPath Layer 2 gateway switches terminate STP. The set of Cisco FabricPath
Layer 2 gateway switches that are connected by STP forms the STP domain. Because there can
be many Cisco FabricPath Layer 2 gateway switches that are attached to a single Cisco
FabricPath network, there may also be many separate STP domains.
To verify that local and remote MAC addresses are learned for the Cisco FabricPath VLANs, use the show mac address-table command.
Nexus-7010-PROD-B# show mac address-table dynamic vlan 10
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen, + - primary entry using vPC Peer-Link
  VLAN   MAC Address      Type     age  Secure NTFY  Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 10     0000.0000.0001   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0002   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0003   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0004   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0005   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0006   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0007   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0008   dynamic  0    F      F     Eth1/15
* 10     0000.0000.0009   dynamic  0    F      F     Eth1/15
* 10     0000.0000.000a   dynamic  0    F      F     Eth1/15
  10     0000.0000.000b   dynamic  0    F      F     200.0.30
  10     0000.0000.000c   dynamic  0    F      F     200.0.30
  10     0000.0000.000d   dynamic  0    F      F     200.0.30
  10     0000.0000.000e   dynamic  0    F      F     200.0.30
  10     0000.0000.000f   dynamic  0    F      F     200.0.30
The first ten entries, which point to Eth1/15, are local MAC addresses; the last five, which point to 200.0.30, are remote MAC addresses reached through FabricPath switch ID 200.
The show mac address-table command can be used to verify that MAC addresses are learned
on Cisco FabricPath edge devices. The command shows local addresses with a pointer to the
interface on which the address was learned. For remote addresses, it provides a pointer to the
remote switch from which the address was learned.
To examine the Cisco FabricPath routes between the switches in the fabric,
use the show fabricpath route command.
Nexus-7010-PROD-A# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id
The show fabricpath route command can be used to view the Cisco FabricPath routing table
that results from the Cisco FabricPath IS-IS SPF calculations. The Cisco FabricPath routing
table shows the best paths to all the switches in the fabric. If multiple equal paths are available
between two switches, all paths will be installed in the Cisco FabricPath routing table to
provide ECMP.
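As a further check, the following commands (a typical verification sequence, shown here as a sketch) list the switch IDs that are known in the fabric and the Cisco FabricPath IS-IS adjacencies that were formed:
Nexus-7010-PROD-A# show fabricpath switch-id
Nexus-7010-PROD-A# show fabricpath isis adjacency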
Summary
This topic summarizes the key points that were discussed in this lesson.
Lesson 6
Objectives
Upon completing this lesson, you will be able to describe OTV as a method of data center
interconnect (DCI). You will be able to meet these objectives:
Businesses face the challenge of providing very high availability for applications while keeping
operating expenses (OpEx) low. Applications must be available anytime and anywhere with
optimal response times.
The deployment of geographically dispersed data centers allows IT architects to put in place
effective disaster-avoidance and disaster-recovery mechanisms that increase the availability of
the applications. Geographic dispersion also enables optimization of application response time
through improved facility placement. Flexible mobility of workloads across data centers helps
avoid demand hotspots and more fully utilizes available capacity.
To enable all the benefits of geographically dispersed data centers, the network must extend
Layer 2 connectivity across the diverse locations. This connectivity must be provided without
compromising the autonomy of data centers or the stability of the overall network.
Enabling key technologies such as server clustering and workload mobility across data centers
requires Layer 2 connectivity between data centers.
Traditionally, several different technologies have been used to provide Layer 2 DCIs:
Virtual Private LAN Services (VPLS): Similar to EoMPLS, VPLS uses an MPLS
network as the underlying transport network. However, instead of point-to-point Ethernet
pseudowires, VPLS delivers a virtual multiaccess Ethernet network.
Dark fiber: In some cases, dark fiber may be available to build private optical connections
between data centers. Dense wavelength-division multiplexing (DWDM) or coarse
wavelength-division multiplexing (CWDM) can increase the number of Layer 2
connections that can be run through the fibers. These technologies increase the total
bandwidth over the same number of fibers.
Although it is possible to build Layer 2 DCIs based on these technologies, they present a
number of challenges:
Complex operations: Traditional Layer 2 VPN technologies can provide extended Layer 2
connectivity across data centers. But these technologies usually involve a mix of complex
protocols, distributed provisioning, and an operationally intensive hierarchical scaling
model. The provisioning of many point-to-point connections or the complexity of the
underlying transport technologies can add significantly to the operational cost of these
types of DCIs. A simple overlay protocol with built-in capabilities and point-to-cloud
provisioning is crucial to reducing the cost of providing this connectivity.
A solution for the interconnection of data centers must be transport agnostic. Such
independence gives the network architect the flexibility to choose any transport between
data centers that is based on business and operational preferences.
Bandwidth management: Traditional Layer 2 DCI technologies often do not allow the
concurrent use of redundant connections, because of the risk of Layer 2 loops. Balancing
the load across all available paths while providing resilient connectivity between the data
center and the transport network requires added intelligence. Traditional Ethernet switching
and Layer 2 VPN technologies do not provide that level of intelligence.
Failure containment: The extension of Layer 2 domains across multiple data centers can
cause problems. Traditional Layer 2 extension technologies often extend the failure domain
across the interconnected data centers, and failures propagate freely over the open Layer 2
flood domain. A solution is needed that provides Layer 2 connectivity yet restricts the reach
of the flood domain. Such a solution would contain failures and thus preserve the resiliency
that is achieved by using multiple data centers.
OTV overcomes the challenges that are inherent to traditional Layer 2 DCI technologies.
The name of the technology describes its key characteristics:
Transport: OTV provides a Layer 2 transport across a Layer 3 network. It can leverage all
the capabilities of the underlying transport network, such as fast convergence,
load balancing, and multicast replication.
Virtualization: OTV provides a virtual multiaccess Layer 2 network that supports efficient
transport of unicast, multicast, and broadcast traffic. Sites can be added to an OTV overlay
without a need to provision additional point-to-point connections to the other sites. There
are no virtual circuits, pseudowires or other point-to-point connections to maintain between
the sites. Packets are routed independently from site to site without a need to establish
stateful connections between the sites.
Complex dual-homing with traditional Layer 2 DCI:
- Requires additional protocols
- STP extension is difficult to manage
OTV:
- Control plane-based MAC learning contains failures by restricting the reach of unknown unicast flooding
- Dynamic encapsulation allows optimized multicast replication in the core
Traditional Layer 2 DCI technologies depend on flooding of unknown unicast, multicast, and
broadcast frames to learn the location of the MAC addresses on the network. OTV replaces this
mechanism with protocol-driven control plane learning. Flooding of unknown unicasts is no
longer necessary. Eliminating unknown unicast flooding helps in containing failures by limiting
the scope of the flooding domain.
Traditional Layer 2 DCI technologies use a mesh of point-to-point connections to connect
multiple sites. The configuration and maintenance of these tunnels limits the scalability of
Layer 2 DCI solutions. In addition, a point-to-point connection model leads to inefficient
multicast forwarding. Multicasts need to be replicated at the headend to be sent to the different
sites. OTV is a multipoint technology. It leverages the multipoint connectivity model of Layer
3 networks to easily add more sites to the OTV overlay without a need to provision point-to-point connections between the sites. If the Layer 3 network is multicast-enabled, OTV can
leverage the inherent multicast capabilities of the Layer 3 network to replicate multicast in the
core network, avoiding headend replication.
With traditional Layer 2 DCI technologies, dual-homing a site to the transport network is often
complex. Redundant connections introduce the possibility of Layer 2 loops. Managing these
loops requires use of protocols that ensure that the Layer 2 topology remains loop-free.
Extending Spanning Tree Protocol (STP) between the sites is one example. OTV has a native
dual-homing mechanism that allows traffic to be load-balanced to the transport network. By not
extending STP across the overlay, each site is allowed to remain an independent spanning-tree
domain.
Internal interfaces:
- The internal interfaces are those interfaces on an edge device that face the site and carry at least one of the VLANs that are extended through OTV.
- Internal interfaces are regular Layer 2 interfaces.
- No OTV configuration is required on internal interfaces.
To understand the operation of OTV, it is important to establish some of the key terms and
concepts.
OTV is an edge function. Layer 2 traffic is received from the switched network. For all VLANs
that need to be extended to remote locations, the Ethernet frames are dynamically encapsulated
into IP packets that are then sent across the transport infrastructure. A device that performs the
OTV encapsulation and decapsulation functions between the Layer 2 network and the transport
network is an OTV edge device. OTV edge devices are responsible for all OTV functionality.
The OTV edge device can be in either the core or aggregation layer of the data center. A site
can have multiple edge devices to provide additional resiliency. This scenario is commonly
referred to as multihoming.
By definition, an OTV edge device has interfaces that connect to the transport network and
interfaces that connect to the Layer 2 switched network. The Layer 2 interfaces that receive the
traffic from the VLANs that are to be extended across OTV are named the internal interfaces.
These interfaces are regular Layer 2 interfaces, usually IEEE 802.1Q trunks. No OTV-specific
configuration is required on the internal interfaces.
Join interface:
- The join interface is one of the uplinks of the edge device.
- The join interface is a routed point-to-point link. It can be a single routed port, a routed port channel, or a subinterface of a routed port or port channel.
- The join interface is used to join the overlay network.
Overlay interface:
- The overlay interface is a new virtual interface that contains all the OTV configuration.
- The overlay interface is a logical multiaccess, multicast-capable interface.
- The overlay interface encapsulates the site Layer 2 frames in IP unicast or multicast packets.
The join interface is used to source the OTV encapsulated traffic and send it to the Layer 3
domain of the data center network. The join interface is a Layer 3 entity.
With the current release of the Cisco NX-OS Software, the join interface can only be defined as
a routed point-to-point physical or logical interface. It can be a physical routed port or
subinterface of a routed port. For additional resiliency, it can also be a Layer 3 port channel or
subinterface of a Layer 3 port channel. Currently the join interface cannot be any other type of
interface, such as a switch virtual interface (SVI) or loopback interface.
An OTV overlay can only have a single join interface per edge device. Multiple overlays can
share the same join interface.
The edge device uses the join interface for different purposes. The join interface is used to join
the overlay network and discover the other remote OTV edge devices. When multicast is
enabled in the transport infrastructure, an edge device joins specific multicast groups in order to
join the overlay network. These groups are available in the transport network and are dedicated
to carry control and data plane traffic. The join interface is used to form OTV adjacencies with
the other OTV edge devices belonging to the same overlay VPN. The join interface is used to
send and receive MAC reachability information and the join interface is also used to send and
receive the encapsulated Layer 2 traffic.
The Overlay interface is a logical multiaccess and multicast-capable interface that must be
explicitly defined by the user. The entire OTV overlay configuration is applied on this logical
interface. Every time the OTV edge device receives a Layer 2 frame that is destined for a
remote data center site, the frame is logically forwarded to the overlay interface. This behavior
causes the edge device to perform the dynamic OTV encapsulation on the Layer 2 frame and
send it to the join interface toward the routed domain.
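The following is a minimal configuration sketch for an OTV edge device. It reuses the values that appear in the verification output later in this lesson (control group 239.7.7.7, data group 232.7.7.0/24, join interface Ethernet3/1, extended VLANs 10-12, site VLAN 13, and site identifier 0000.0000.2010); the overlay number is otherwise arbitrary, the site identifier is required only in later Cisco NX-OS releases, and the same overlay must be configured on every edge device:
Nexus-7010-PROD-A(config)# feature otv
Nexus-7010-PROD-A(config)# otv site-vlan 13
Nexus-7010-PROD-A(config)# otv site-identifier 0000.0000.2010
Nexus-7010-PROD-A(config)# interface overlay 1
Nexus-7010-PROD-A(config-if-overlay)# otv join-interface ethernet 3/1
Nexus-7010-PROD-A(config-if-overlay)# otv control-group 239.7.7.7
Nexus-7010-PROD-A(config-if-overlay)# otv data-group 232.7.7.0/24
Nexus-7010-PROD-A(config-if-overlay)# otv extend-vlan 10-12
Nexus-7010-PROD-A(config-if-overlay)# no shutdown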
The figure illustrates the OTV components, including two edge devices with internal and join
interfaces labeled. It also shows a graphical representation of the OTV overlay and the overlay
interfaces, which are depicted as tunnels.
(Figure: OTV data plane example. On one edge device [IP A], the VLAN 100 MAC table points MAC 1 to Eth 2, MAC 2 to Eth 1, and MAC 3 and MAC 4 to IP B. On the remote edge device [IP B], the table points MAC 1 and MAC 2 to IP A, MAC 3 to Eth 3, and MAC 4 to Eth 4. A frame from MAC 1 to MAC 3 is encapsulated in an IP packet from IP A to IP B and carried across the transport infrastructure.)
In OTV data plane forwarding operations, control plane adjacencies are established between the
OTV edge devices in different sites and MAC address reachability information is exchanged.
Then the MAC address tables in the OTV edge devices contain two different types of entries.
The first type is the normal MAC address entry that points to a Layer 2 switch port. The second
type in the MAC address table includes entries that point to IP addresses of adjacent OTV
neighbors.
To forward frames based on the MAC entries that are installed by OTV, the following procedure is used, as illustrated in the figure:
Step 1: The Layer 2 frame is received at the OTV edge device. A traditional Layer 2 lookup is performed. However, this time the information in the MAC address table for MAC address MAC 3 does not point to a local Ethernet interface. Rather, it points to the IP address of the remote OTV edge device that advertised the MAC reachability information for MAC 3.
Step 2: The OTV edge device encapsulates the original Layer 2 frame. The source IP address of the outer header is the IP address of its join interface. The destination IP address is the IP address of the join interface of the remote edge device.
Step 3: The OTV encapsulated frame, which is a regular unicast IP packet, is carried across the transport infrastructure and delivered to the remote OTV edge device.
Step 4: The remote OTV edge device de-encapsulates the IP packet, exposing the original Layer 2 frame.
Step 5: The edge device performs a Layer 2 lookup on the original Ethernet frame. It discovers that it is reachable through a physical interface, which means it is a MAC address local to the site.
Step 6: The frame is delivered to the destination host that MAC 3 belongs to through regular Layer 2 Ethernet switching.
IS-IS is used by OTV as the control protocol between the edge devices.
- There is no need to configure or understand the operation of IS-IS.
Before any encapsulation and de-encapsulation of frames across the overlay can be performed,
it is necessary to exchange MAC address reachability information between the sites. This step
is required to create corresponding OTV MAC address table entries throughout the overlay.
OTV does not depend on flooding to propagate MAC address reachability information. Instead,
OTV uses a control plane protocol to distribute MAC address reachability information to
remote OTV edge devices. This protocol runs as an overlay control plane between OTV edge
devices. As a result, there is no dependency with the routing protocol used in the Layer 3
domain of the data center or in the transport infrastructure.
The OTV control plane is transparently enabled in the background after creating the OTV
overlay interface and does not require explicit configuration. While it is possible to tune
parameters, such as timers, for the OTV protocol, such tuning is more of an exception than a
common requirement.
Note
The routing protocol that is used to implement the OTV control plane is Intermediate
System-to-Intermediate System (IS-IS). It was selected because it is a standards-based
protocol, originally designed with the capability of carrying MAC address information in type,
length, value (TLV) triplets. Despite the fact that IS-IS is used, the control plane protocol will
be generically called OTV protocol. This use will differentiate it from IS-IS that is used as
an interior gateway protocol (IGP) for IP version 4 (IPv4) or IP version 6 (IPv6). It is not
necessary to have a working knowledge of IS-IS configuration to implement OTV. However,
some background in IS-IS can be helpful when you troubleshoot OTV.
(Figure: each site keeps its own STP root for VLAN 10; STP BPDUs are not forwarded across the OTV overlay.)
OTV by default does not transmit STP bridge protocol data units (BPDUs) across the overlay.
This native OTV function does not require the use of any explicit configuration, such as BPDU
filtering. This feature allows every site to remain an independent spanning-tree domain:
Spanning-tree root configuration, parameters, and the spanning-tree protocol flavor can be
decided on a per-site basis.
The separation of spanning-tree domains fundamentally limits the fate sharing between data
center sites. A spanning-tree problem in the control plane of a given site would not produce any
effect on the remote data centers.
Each OTV edge device maintains an ARP cache to reduce ARP traffic on the overlay.
- Initial ARPs are flooded across the overlay to all edge devices using multicast.
- When the ARP response comes back, the IP-to-MAC mapping is snooped and added to the ARP cache.
- Subsequent ARP requests for the same IP address are answered locally based on the cached entry.
Traditional Layer 2 switching relies on flooding of unknown unicasts for MAC address
learning. However, in OTV, this function is performed by the OTV control protocol. By default
OTV suppresses the flooding of unknown unicast frames across the overlay. An OTV edge
device behaves more like a router than a Layer 2 bridge. It forwards Layer 2 traffic across the
overlay only if it has previously received information on how to reach that remote MAC
destination. Unknown unicast suppression is enabled by default and does not need to be
configured.
Note
This property of OTV is important to minimize the effects of a server misbehaving and
generating streams that are directed to random MAC addresses. This type of behavior could
also occur as a result of a denial of service (DoS) attack.
The assumption in the behavior of OTV is that there are no silent or unidirectional devices in
the network. It is assumed that sooner or later an OTV edge device will learn the address of a
host and communicate it to the other edge devices through the OTV protocol. To support
specific applications, like the Microsoft Network Load Balancing (NLB) service, which
requires the flooding of Layer 2 traffic to function, a configuration knob will be provided in a
future Cisco NX-OS Software release to enable selective flooding. Statically defined MAC
addresses allow Layer 2 traffic that is destined to those MAC addresses to be flooded across the
overlay, or broadcast to all remote OTV edge devices, instead of being dropped. The
expectation is that this configuration will be required only in very specific corner cases, so that
the default behavior of dropping unknown unicast would be the usual operation model.
Another function that reduces the amount of traffic that is sent across the transport
infrastructure is Address Resolution Protocol (ARP) optimization. Each OTV edge device
inspects ARP traffic as it is passed between the internal interfaces and the overlay. Initially,
ARP requests received on internal interfaces are broadcast on the overlay to all other edge
devices. However, when the ARP response is received on the overlay, the edge router enters the
IP and MAC combination into an ARP cache. This is important when subsequent ARP requests
are received on an internal interface for an IP address that is present in the cache. In such cases,
the OTV edge device will respond to the ARP request on the internal interface. It will not
broadcast the request to the overlay again.
One key function included in the OTV protocol is multihoming, where two or more OTV edge
devices provide LAN extension services to a given site. Consider this redundant node
deployment along with the fact that STP BPDUs are not sent across the OTV overlay. Such a
condition could potentially lead to the creation of a bridging loop between sites. To prevent this
loop, OTV has a built-in mechanism that ensures only one of the edge devices will forward
traffic for a given VLAN. The edge device that has the active forwarding role for the VLAN is
called an authoritative edge device (AED) for that VLAN.
The AED has two main tasks:
Forwarding Layer 2 unicast, multicast, and broadcast traffic between the site and the overlay and vice versa
Advertising MAC reachability information for the site to the remote edge devices
The AED role is negotiated, on a per-VLAN basis, between all the OTV edge devices
belonging to the same site. To decide which device should be elected as an AED for a given
site, the OTV edge devices establish an internal OTV control protocol peering.
The internal adjacency is established on a dedicated VLAN, named the site VLAN. The Site
VLAN should be carried on multiple Layer 2 paths internal to a given site, to increase the
resiliency of this internal adjacency. The internal adjacency is used to negotiate the AED role.
A deterministic algorithm is implemented to split the AED role for odd and even VLANs
between two OTV edge devices. More specifically, the edge device that is identified by a lower
system ID will become authoritative for all the even extended VLANs. The device with the
higher system ID will be authoritative for the odd extended VLANs. This behavior is hardware-enforced and cannot be tuned in the current Cisco NX-OS Software release.
(Slide: show otv adjacency output for the overlay, showing one adjacency with System-ID 64a0.e743.03c3, destination address 10.7.7.202, an up time of 00:53:03, and a state of UP.)
The show otv adjacency command can be used to verify whether the OTV control protocol
adjacencies have been properly established. The OTV adjacencies are formed using the OTV
control group for the overlay.
If adjacencies fail to establish, verify that the configured control group is the same on all the
overlay interfaces and that the overlay interface itself is operational.
Use the show otv overlay command to verify that the overlay interface is enabled.
Nexus-7010-PROD-A# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.2010
Overlay interface Overlay1
VPN name            : Overlay1
VPN state           : UP
Extended vlans      : 10-12 (Total:3)
Control group       : 239.7.7.7
Data group range(s) : 232.7.7.0/24
Join interface(s)   : Eth3/1 (10.7.7.201)
Site vlan           : 13 (up)
AED-Capable         : Yes
Capability          : Multicast-Reachable
To verify that the overlay interface is operational and that the correct parameters have been
configured for the overlay interface, you can use the show otv overlay command. The primary
fields to observe in the output of this command are the VPN state and Control group fields.
The VPN state should be UP and the control group address should match on the local and
remote edge device.
Use the show otv route command to verify that MAC addresses are properly learned and announced across the overlay.
MAC-Address     Metric  Uptime    Owner    Next-hop(s)
--------------  ------  --------  -------  -----------------
547f.ee5c.6ea8  1       00:08:38  site     Ethernet2/5
547f.ee5c.6efc  1       00:06:30  site     Ethernet2/5
547f.ee5c.763c  42      00:05:14  overlay  Nexus-7010-PROD-B
547f.ee5c.76ea  42      00:08:19  overlay  Nexus-7010-PROD-B
Once you have verified that the adjacencies were established, you should verify that the MAC
addresses of the target hosts are properly advertised across the overlay. Use the show otv route
command to see all the MAC addresses that are learned for the extended VLANs. This
command shows both local addresses that were learned on the internal interfaces and remote
addresses that were learned through the overlay.
Tip
The show otv route command only shows MAC addresses for VLANs that are extended on
the overlay. Sometimes a MAC address is displayed in the show mac address-table
command for a VLAN on an internal interface, but the show otv route command does not
show the address. If this occurs, you should verify that the VLAN was added to the list of
extended VLANs for the overlay.
The OTV IS-IS database can be examined using the show otv isis database command.
To verify that packets are sent and received on the overlay, use the show
interface overlay command.
Nexus-7010-PROD-A# show interface overlay 1
Overlay1 is up
MTU 1400 bytes, BW 1000000 Kbit
Encapsulation OTV
Last link flapped 00:58:36
Last clearing of "show interface" counters never
Load-Interval is 5 minute (300 seconds)
RX
0 unicast packets
1473 multicast packets
1668206 bytes
4709 bits/sec
0 packets/sec
TX
0 unicast packets
0 multicast packets
0 bytes
0 bits/sec
0 packets/sec
One of the OTV-specific commands that can be helpful in establishing whether packets are being sent or received on the overlay is the show interface overlay command.
Summary
This topic summarizes the key points that were discussed in this lesson.
Module Summary
This topic summarizes the key points that were discussed in this module.
The functional layers of the Cisco data center infrastructure are the access, aggregation, and core layers.
The Cisco Nexus Family of products ranges from the Cisco Nexus 1000V Series software-based switch through to the Cisco Nexus 7000 Series.
The Cisco MDS product range includes the Cisco MDS 9100 Series Multilayer Fabric Switches through to the Cisco MDS 9500 Series Multilayer Directors for the SAN infrastructure.
Initial configuration and monitoring of the Cisco Nexus 7000 and 5000 Series Switches is performed through the CLI, with GUI support using the Cisco Data Center Network Manager product.
vPCs and Cisco FabricPath are features that are designed to enhance Layer 2 capabilities to provide maximum throughput and high availability.
OTV is a feature for extending Layer 2 connectivity across any Layer 3 infrastructure between multiple data centers.
The Cisco data center is broken down into functional layers. These layers consist of the access,
aggregation, and core. Inside the data center, there are often multiple networks including the
LAN and SAN networks. Each of these networks uses these functional layers to break down the
network logically. The access layer usually provides connectivity to devices such as servers and
users. The aggregation layer is usually the Layer 3 demarcation point and where the policies
and services are set. The core layer usually provides high-speed connectivity between multiple
aggregation layers.
The Cisco Nexus Series product range has been designed for the data center with the Cisco
NX-OS Software providing the full range of features that are required to ensure high
availability, performance, reliability, security, and manageability for these devices. The Cisco
Nexus Series includes the Cisco Nexus 1000V Series (a software-based switch for the
virtualized access layer), and the Cisco Nexus 5000 and 7000 Series Switches. To provide
additional ports without increasing management points, there is also the Cisco Nexus 2000
Fabric Extender. The Cisco Nexus 2000 Fabric Extender can be connected to either a Cisco
Nexus 5000 or 7000 Series switch as an external I/O module.
The Cisco MDS product range consists of Fibre Channel switches that are designed for the SAN
infrastructure inside the data center. The Cisco MDS Series runs the Cisco NX-OS Software,
the same as the Cisco Nexus product range. This commonality provides consistency within the data
center for switch operating systems, making it easier from a management perspective because
there are fewer operating systems for administrators to learn. The Cisco MDS Series includes
the Cisco MDS 9100 Series through to the Cisco MDS 9500 Series. The Cisco MDS 9500
Series is a modular product and supports a Fibre Channel over Ethernet (FCoE) I/O module.
The FCoE module provides customers with the ability to consolidate I/O across 10-Gb/s
Ethernet ports.
The primary method of monitoring the Cisco Nexus 7000 and 5000 Series switches is through
the CLI. Initially the switch has no configuration, only a default administrative user called
admin. The first configuration that must be performed is to provide the admin user with an
administrative password. Once the administrative password has been set, the administrator
would then configure the basic parameters that are required to provide network connectivity.
To assist the administrator, there is a setup dialog script to walk the administrator through the
basic requirements.
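As a minimal sketch of the kind of parameters that the setup dialog collects, the following commands set the admin password and basic management connectivity; the password, management address, and gateway are hypothetical example values:

switch# configure terminal
switch(config)# username admin password St0reR00m123 role network-admin
switch(config)# interface mgmt 0
switch(config-if)# ip address 192.0.2.10/24
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# vrf context management
switch(config-vrf)# ip route 0.0.0.0/0 192.0.2.1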
Inside the data center, there will be Layer 2 and Layer 3 connectivity. At the access layer there
is often Layer 2 connectivity only, with Layer 3 being provided at the aggregation layer. With
any Layer 2 domain, normally the Spanning Tree Protocol (STP) is running. STP is designed to
create a loop-free tree-like topology. This principle means that inside the data center, although
multiple paths are provided, certain paths will be forwarding and others blocking. Features such
as virtual port channels (vPCs) and Cisco FabricPath address this limitation, which otherwise
prevents full utilization of the available bandwidth. vPCs allow a downstream device to be dual-homed
to two upstream switches that are connected through a vPC link using a multichassis port
channel. This multichassis port channel is known as a vPC on Cisco Nexus Series switch
devices. Cisco FabricPath provides a Layer 2 routing capability to replace STP.
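The following is a minimal sketch of a vPC configuration on one peer; the domain number, port channel numbers, and peer-keepalive addresses are hypothetical, and a mirrored configuration would be applied on the other vPC peer.

N7K-AGG-1(config)# feature vpc
N7K-AGG-1(config)# vpc domain 10
N7K-AGG-1(config-vpc-domain)# peer-keepalive destination 192.0.2.2 source 192.0.2.1
N7K-AGG-1(config-vpc-domain)# exit
N7K-AGG-1(config)# interface port-channel 10
N7K-AGG-1(config-if)# switchport mode trunk
N7K-AGG-1(config-if)# vpc peer-link
N7K-AGG-1(config-if)# exit
N7K-AGG-1(config)# interface port-channel 20
N7K-AGG-1(config-if)# switchport mode trunk
N7K-AGG-1(config-if)# vpc 20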
As the use of virtualization grows, so does the requirement for extending Layer 2 connectivity
between geographically dispersed data centers. When extending Layer 2 between data
centers, there are usually issues that need to be resolved such as the size of the Layer 2 fault
domain. There are several technologies available to provide this functionality, but they often
have specific requirements and can be difficult to manage. Overlay Transport Virtualization
(OTV) is a technology that allows the extension of Layer 2 but without the overheads and
issues of large Layer 2 fault domains.
References
For additional information, refer to these resources:
Cisco Systems, Inc. Cisco Nexus 7000 Series NX-OS Software configuration guides:
http://www.cisco.com/en/US/products/ps9402/products_installation_and_configuration_guides_list.html
Cisco Systems, Inc. Cisco Nexus 7000 Series Switches Release Notes:
http://www.cisco.com/en/US/products/ps9402/prod_release_notes_list.html
Cisco Systems, Inc. Cisco Nexus 2000 Series Fabric Extender Software Configuration Guide:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus2000/sw/configuration/guide/rel_6_0/b_Configuring_the_Cisco_Nexus_2000_Series_Fabric_Extender_rel_6_0.html
Cisco Systems, Inc. Cisco Nexus 5000 Series NX-OS Software configuration guides:
http://www.cisco.com/en/US/products/ps9670/products_installation_and_configuration_guides_list.html
Cisco Systems, Inc. Cisco Nexus 5000 Series Switches Release Notes:
http://www.cisco.com/en/US/products/ps9670/prod_release_notes_list.html
Cisco Systems, Inc. Cisco Nexus 7000 Series Switches Data Sheets:
http://www.cisco.com/en/US/products/ps9402/products_data_sheets_list.html
Cisco Systems, Inc. Cisco Nexus 5000 Series Switches Data Sheets:
http://www.cisco.com/en/US/products/ps9670/products_data_sheets_list.html
Cisco Systems, Inc. Cisco MDS 9500 Series Multilayer Directors Data Sheets:
http://www.cisco.com/en/US/products/ps5990/products_data_sheets_list.html
Cisco Systems, Inc. Cisco MDS 9100 Series Multilayer Fabric Switches Data Sheets:
http://www.cisco.com/en/US/products/ps5987/products_data_sheets_list.html
Cisco Systems, Inc. Cisco MDS 9200 Series Multilayer Switches Data Sheets:
http://www.cisco.com/en/US/products/ps5988/products_data_sheets_list.html
Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1)
Which of these data center-related components typically would not be used in the SAN
infrastructure design? (Source: Examining Cisco Data Center Functional Layers)
A)
B)
C)
D)
Q2)
Which type of flow control is used in the SAN environment? (Source: Examining
Cisco Data Center Functional Layers)
A)
B)
C)
D)
Q3)
Which product supports a front-to-back airflow to address the requirements for hot-aisle and cold-aisle deployments without additional complexity? (Source: Reviewing
the Cisco Nexus Product Family)
A)
B)
C)
D)
core-edge
multitier
collapsed aggregation
collapsed core
Which Cisco Nexus product supports unified ports? (Source: Reviewing the Cisco
Nexus Product Family)
A)
B)
C)
D)
Q6)
virtualization
unified fabric
unified computing
unified data center
Which of these provides the most efficient use of ports in a SAN infrastructure because
fewer or no ports are consumed for ISLs? (Source: Examining Cisco Data Center
Functional Layers)
A)
B)
C)
D)
Q5)
FCoE is part of which main component of the data center architecture? (Source:
Examining Cisco Data Center Functional Layers)
A)
B)
C)
D)
Q4)
access
aggregation
core
collapsed core
Q7)
Which license is required to support VDCs on the Cisco Nexus 7000 Series Switches?
(Source: Reviewing the Cisco Nexus Product Family)
A)
B)
C)
D)
Q8)
How many ports are there on the Cisco Nexus 7000 40 Gigabit Ethernet module?
(Source: Reviewing the Cisco Nexus Product Family)
A)
B)
C)
D)
Q9)
8
12
24
32
Which product has a fixed module and one slot for an I/O module? (Source: Reviewing
the Cisco MDS Product Family)
A)
B)
C)
D)
4
8
12
16
How many Cisco Nexus 2000 Series Fabric Extenders can be connected to a Cisco
Nexus 5500 Platform switch with no Layer 3 daughter card or expansion module?
(Source: Reviewing the Cisco Nexus Product Family)
A)
B)
C)
D)
Q13)
1
2
3
4
How many ports on the Cisco Nexus 5010 Switch support 1 Gb/s connectivity on the
base module? (Source: Reviewing the Cisco Nexus Product Family)
A)
B)
C)
D)
Q12)
combined
power supply and input source redundancy
power supply redundancy
input source redundancy
How many expansion slots are there for the Cisco Nexus 5548 Switch? (Source:
Reviewing the Cisco Nexus Product Family)
A)
B)
C)
D)
Q11)
2
4
6
8
Which power supply redundancy mode provides grid redundancy only? (Source:
Reviewing the Cisco Nexus Product Family)
A)
B)
C)
D)
Q10)
Base license
Enterprise license
Advanced LAN Enterprise license
Scalable Services license
Q14)
Which supervisor module meets the minimum requirements to support
FCoE on the Cisco MDS 9000 Series Switches? (Source: Reviewing the Cisco MDS
Product Family)
A)
B)
C)
D)
Q15)
What is the total aggregate bandwidth that is provided on the Cisco MDS 9513
Multilayer Director switch? (Source: Reviewing the Cisco MDS Product Family)
A)
B)
C)
D)
Q16)
Which command would be used to identify the impact of an ISSU event? (Source:
Monitoring the Cisco Nexus 7000 and 5000 Series Switches)
A)
B)
C)
D)
console port
management 0 port
any physical interface
CMP
Which command would be used to connect to the CMP from the control processor on a
Cisco Nexus 7000 Series switch? (Source: Monitoring the Cisco Nexus 7000 and 5000
Series Switches)
A)
B)
C)
D)
Q20)
4
8
16
24
Which management port on the Cisco Nexus 7000 Series Switches provides lights-out
RMON and management without the need for separate terminal servers? (Source:
Monitoring the Cisco Nexus 7000 and 5000 Series Switches)
A)
B)
C)
D)
Q19)
120
90
60
30
How many ports are enabled in the port-based license on a Cisco MDS 9124 Multilayer
Fabric Switch? (Source: Reviewing the Cisco MDS Product Family)
A)
B)
C)
D)
Q18)
760 Gb/s
1.1 Tb/s
2.2 Tb/s
4.1 Tb/s
For how many days may licensed features be evaluated on the Cisco MDS Series
switch? (Source: Reviewing the Cisco MDS Product Family)
A)
B)
C)
D)
Q17)
Supervisor-3
Supervisor-2A
Supervisor-2
Supervisor-1
Q21)
Which command is a valid command for viewing the details of a 10-Gigabit Ethernet
interface? (Source: Monitoring the Cisco Nexus 7000 and 5000 Series Switches)
A)
B)
C)
D)
Q22)
Which of these best describes the function of Cisco Fabric Services in a vPC domain?
(Source: Describing vPCs and Cisco FabricPath in the Data Center)
A)
B)
C)
D)
Q23)
4
6
8
16
Which protocol is used as the control protocol for Cisco FabricPath? (Source:
Describing vPCs and Cisco FabricPath in the Data Center)
A)
B)
C)
D)
Cisco FabricPath provides ECMP capabilities. How many ECMP paths does it
support? (Source: Describing vPCs and Cisco FabricPath in the Data Center)
A)
B)
C)
D)
Q27)
vPC peer
vPC peer keepalive
vPC peer link
vPC link
Which command would be used to find specific parameters that caused a consistency
check to fail during a vPC configuration? (Source: Describing vPCs and Cisco FabricPath in the
Data Center)
A)
B)
C)
D)
Q26)
Which link is used in a virtual port channel to create the illusion of a single control
plane? (Source: Describing vPCs and Cisco FabricPath in the Data Center)
A)
B)
C)
D)
Q25)
Which command would you use to verify the status of configured virtual port channels
on the switch? (Source: Describing vPCs and Cisco FabricPath in the Data Center)
A)
B)
C)
D)
Q24)
Layer 2 IS-IS
Layer 3 IS-IS
Layers 2 and 3 IS-IS
IS-IS
Module 2
Module Objectives
Upon completing this module, you will be able to describe the function of Cisco data center
virtualization and verify correct operation. This ability includes being able to meet these
objectives:
Describe how RAID groups and LUNs virtualize storage for high availability and
configuration flexibility
Describe the problems that Cisco Nexus 1000V Series switches solve and how the Cisco
Nexus 1000V VSM and VEM integrate with VMware ESX
Validate connectivity of the Cisco Nexus 1000V VSM to VEMs and VMware vCenter, by
using the VMware ESX and Cisco Nexus 1000V CLIs
Lesson 1
Objectives
Upon completing this lesson, you will be able to describe the virtualization capabilities of Cisco
Nexus 7000 and 5000 Series Switches. You will be able to meet these objectives:
The figure shows a Cisco Nexus 7000 Series switch that is partitioned into Extranet, DMZ, and Prod VDCs.
Data centers are often partitioned into separate domains or zones that are implemented on
separate physical infrastructures. The creation of separate physical infrastructures is commonly
driven by a need to separate administrative domains for security and policy reasons.
VLANs and virtual routing and forwarding (VRF) instances can be used to separate user traffic
on the data plane. However, these technologies do not provide separation of administration and
management functions or isolation of fault domains.
Building separate physical infrastructures to separate zones by administrative or security policy
can add significant cost to the infrastructure. Depending on the port counts and the functions
that are needed in the separate domains, the physical switches in each domain might be
underutilized. Consolidation of multiple logical switches on one physical switch can improve
hardware utilization. Consolidation can also add flexibility to the data center design.
VDCs allow one physical Cisco Nexus 7000 Series switch to be partitioned into multiple
logical switches. This partitioning enables the consolidation of different logical zones onto one
physical infrastructure.
The figure shows a split data center core, in which aggregation blocks connect either to the left core VDC or to the right core VDC toward the enterprise network.
A VDC has the same functional characteristics as a physical switch. The VDC can be used in
many places in the overall data center network design.
A major advantage of using VDCs instead of separate physical switches is that physical ports
can easily be reallocated between VDCs. This capability allows for ease of changes and
additions to the network design as the network grows and evolves.
The following scenarios can benefit from the use of VDCs. Because a VDC has characteristics
and capabilities like those of a separate physical switch, these scenarios are not VDC-specific
topologies; they could be built using separate dedicated switches in roles that are occupied by
VDCs. However, VDCs can provide additional design flexibility and efficiency in these
scenarios.
Dual-Core Topology
VDCs can be used to build two redundant data center cores using only a pair of Cisco Nexus
7000 Series Switches. This technique can be useful to facilitate migration when the enterprise
network needs to expand to support mergers and acquisitions. If sufficient ports are available
on the existing data center core switches, then two additional VDCs can be created for a
separate data center core. This approach allows a second data center network to be built
alongside the original one. This second network can be built without any impact on the existing
network. Eventually, aggregation blocks can be migrated from one core to the other by
reallocating interfaces from one VDC to the other.
The figure shows two designs: a data center core with multiple aggregation VDCs, and a service insertion design in which an aggregation VDC and a subaggregation VDC surround a Cisco 6500 Series services chassis between the enterprise network core and the access layer.
Service Insertion
VRFs are often used to create a Layer 3 hop that separates the servers in the access network
from the services in the service chain and the aggregation layer. This approach creates a
services sandwich consisting of two VRFs with the services chain in between. Instead of
VRFs, two VDCs can be used to create this services sandwich. In addition to the control plane
and data plane separation that VRFs provide, a VDC provides management plane separation
and fault isolation. The VDC services sandwich design increases security by logically
separating the switches on the inside and outside of the services chain.
The Cisco Nexus 7000 Series switch uses several virtualization technologies that are already
present in Cisco IOS Software. At Layer 2, you have VLANs and at Layer 3, you have VRFs.
These two features are used to virtualize the Layer 3 forwarding and routing tables. The Cisco
Nexus 7000 Series switch extends this virtualization concept to VDCs that virtualize the device
itself, by presenting the physical switch as multiple, independent logical devices.
Within each VDC is a set of unique and independent VLANs and VRFs, with physical ports
assigned to each VDC. This independence also allows virtualization of both the hardware data
plane and the separate management domain.
In its default state, the switch control plane runs as a single device context, called VDC 1,
which runs approximately 80 processes. Some processes have other spawned threads, which
can result in as many as 250 processes actively running on the system at any given time. This
collection of processes constitutes what is seen as the control and management plane for a
single physical device without any other VDCs enabled. The default VDC 1 is always active,
always enabled, and can never be deleted. Even if no other VDC is created, support for
virtualization through VRFs and VLANs within VDC 1 is available.
The Cisco Nexus 7000 Series switch can support multiple VDCs. The creation of additional
VDCs replicates these processes for each device context that is created. The hardware resources
on the supervisor and I/O modules are shared between the VDCs. The processes of the different
VDCs share the kernel and infrastructure modules of the Cisco Nexus Operating System (NX-OS) Software, but the processes within each VDC are entirely independent.
The figure shows one Cisco Nexus 7000 Series switch that is partitioned into VDC 1, VDC 2, VDC 3, and VDC 4.
The use of VDCs currently allows one Cisco Nexus 7000 Series switch to be partitioned into as
many as four logical switches: the default VDC and three additional VDCs. Initially, all
hardware resources of the switch belong to the default VDC. When you first configure a Cisco
Nexus 7000 Series switch, you are effectively configuring the default VDC (VDC 1). The
default VDC has a special role: It controls all hardware resources and has access to all other
VDCs. VDCs are always created from the default VDC. Hardware resources, such as interfaces
and memory, are also allocated to the other VDCs from the default VDC. The default VDC can
access and manage all other VDCs. However, the additional VDCs have access only to the
resources that are allocated to them and cannot access any other VDCs.
VDCs are truly separate virtual switches (vSwitches). They do not share any processes or data
structures, and traffic can never be forwarded from one VDC to another VDC inside the
chassis. Any traffic that needs to pass between two VDCs in the same chassis must first leave
the originating VDC through a port that is allocated to it. The originating VDC then enters the
destination VDC through a port that is allocated to that VDC. VDCs are separated on the data,
control, and management planes. The only exception to this separation is the default VDC,
which can interact with the other VDCs on the management plane. Control and data plane
functions of the default VDC are still separated from the other VDCs.
The default VDC has several other unique and crucial roles in the function of the switch:
Systemwide parameters such as Control Plane Policing (CoPP), VDC resource allocation,
and Network Time Protocol (NTP) may be configured from the default VDC.
Licensing of the switch for software features is controlled from the default VDC.
Software installation must be performed from the default VDC. All VDCs run the same
version of software.
Reloads of the entire switch may be issued only from the default VDC. Nondefault VDCs
may be reloaded independently of other VDCs.
If a switch might be used in a multiple-VDC configuration, then the default VDC should be
reserved for administrative functions only. Configure all production network connections in
nondefault VDCs. This approach will provide flexibility and higher security. Administrative
access can easily be granted into the nondefault VDCs to perform configuration functions,
without exposing access to reload the entire switch or change software versions. No Layer 3
interfaces in the default VDC need to be exposed to the production data network. Only the
management interface needs to be accessible through an out-of-band (OOB) management path.
Unused interfaces may be retained in a shutdown state in the default VDC as a holding area
until they are needed in the configuration of a nondefault VDC. In this way, the default VDC
may be maintained as an administrative context requiring console access or separate security
credentials. Following this guideline effectively allows one Cisco Nexus 7000 Series switch to
perform the functional roles of as many as three production switches.
The use of VDCs does not restrict the use of VLANs and VRFs. Within each VDC, you can
create VLANs and VRFs. The VLANs and VRFs in a VDC are entirely independent of and
isolated from the VLANs and VRFs in any other VDC.
Because VDCs are independent, VLAN numbers and VRF names can be reused in different
VDCs. There is no internal connection between the VLANs and VRFs in different VDCs. To
connect a VLAN or VRF in one VDC to a VLAN or VRF in a different VDC, an external
connection is required. VDCs truly behave as completely separate logical switches.
The figure shows that VDC 1, VDC 2, and VDC 3 each run their own independent instances of processes such as routing protocols, HSRP, GLBP, STP, Cisco TrustSec (CTS), the Ethernet port manager (EthPM), VRFs, and the RIB.
When multiple VDCs are created in a physical switch, the architecture of the VDC feature
provides a means to prevent failures within any VDC from affecting other VDCs. For example,
a spanning-tree recalculation that is started in one VDC does not affect the spanning-tree
domains of other VDCs in the same physical chassis. Each recalculation is an entirely
independent process. The same applies to other processes, such as the Open Shortest Path First
(OSPF) process. Network topology changes in one VDC do not affect other VDCs on the same
switch.
Because the Cisco NX-OS Software uses separate processes in each VDC, the fault isolation
extends even to potential software process crashes. If a process crashes in one VDC, then that
crash is isolated from other VDCs. The Cisco NX-OS high-availability features, such as stateful
process restart, can be applied independently to the processes in each VDC. Process isolation
within a VDC is important for fault isolation and is a major benefit for organizations that
implement the VDC concept.
In addition, fault isolation is enhanced with the ability to provide per-VDC debug commands
and per-VDC logging of messages from syslog. These features give administrators the ability to
locate problems within their own VDC.
The figure shows interfaces on an N7K-M132XP-12 module allocated among VDC A, VDC B, and VDC C.
Physical ports are allocated to different VDCs from the default VDC. Logical interfaces, such
as switch virtual interfaces (SVIs), subinterfaces, or tunnel interfaces, cannot be assigned to a
VDC. Logical interfaces are always created in the VDC to which they belong. After a physical
port is assigned to a VDC, all subsequent configuration of that port is performed within that
VDC. Within a VDC, both physical and logical interfaces can be assigned to VLANs or VRFs.
On most I/O modules, any port can be individually assigned to any VDC. The exceptions to
this rule are the N7K-M132XP-12, F1, and F2 I/O modules. On these modules, interfaces can
be assigned to a VDC on a per-port-group basis only.
Interfaces on all other I/O modules can be assigned to VDCs on a per-port basis.
The second figure shows an N7K-M148GT-11 module, with port groups 1 through 4 allocated among VDC A, VDC B, and VDC C.
The Cisco NX-OS Software uses role-based access control (RBAC) to control the access rights
of users on the switch. By default, Cisco Nexus 7000 Series Switches recognize four roles:
Network-admin: The first user account that is created on a Cisco Nexus 7000 Series
switch in the default VDC is admin. The network-admin role is automatically assigned to
this user. The network-admin role gives a user complete control over the default VDC of
the switch. This role includes the ability to create, delete, or change nondefault VDCs.
Network-operator: The second default role that exists on Cisco Nexus 7000 Series
Switches is the network-operator role. This role allows the user read-only rights in the
default VDC. The network-operator role includes the right to issue the switchto command,
which can be used to access a nondefault VDC from the default VDC. By default, no users
are assigned to this role. The role must be assigned specifically to a user by a user that has
network-admin rights.
VDC-admin: When a new VDC is created, the first user account on that VDC is admin.
The VDC-admin role is automatically assigned to this admin user on a nondefault VDC.
This role gives a user complete control over the specific nondefault VDC. However, this
user does not have any rights in any other VDC and cannot access other VDCs through the
switchto command.
VDC-operator: The VDC-operator role has read-only rights for a specific VDC. This role
has no rights to any other VDC.
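As a brief sketch of how these roles are assigned to user accounts, the following commands create a hypothetical read-only user in the default VDC and a hypothetical administrative user in a nondefault VDC; the usernames and passwords are examples only.

N7K-1(config)# username dcops password Rd0nly123 role network-operator
N7K-1-RED(config)# username redadmin password R3dAdmin9 role vdc-admin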
VDCs are created from within the default VDC global configuration
context.
- The network administrator role is required to create, delete, or modify VDCs.
Physical and logical resources are assigned to VDCs from the default
VDC global configuration context.
- When a physical port is assigned to a VDC, that port can be configured from
within that VDC only.
Consider the following issues when implementing VDCs. To use VDCs, the Advanced
Services license needs to be installed on a Cisco Nexus 7000 Series switch. You can try the
feature during a 120-day grace period. However, when the grace period expires, any nondefault
VDCs will be removed from the switch configuration. Any existing processes for those VDCs
will be terminated.
VDCs can be created, deleted, or changed from the default VDC only. You cannot create VDCs
from a nondefault VDC. To create VDCs, a user needs to have network-admin rights in the
default VDC.
Physical interfaces and other resources are always assigned to nondefault VDCs from the
default VDC. After a physical interface has been assigned to a specific VDC, the configuration
for that interface is performed from that nondefault VDC. You cannot configure an interface
from any VDC other than the VDC to which the interface is allocated.
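As a minimal sketch of this workflow, using the RED VDC and interface Ethernet 2/47 that appear in the verification examples later in this lesson:

N7K-1# configure terminal
N7K-1(config)# vdc RED
N7K-1(config-vdc)# allocate interface ethernet 2/47
N7K-1(config-vdc)# exit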
Initially, all chassis physical resources are part of the default VDC.
All configuration for the interface is lost when you allocate an interface to
another VDC.
When a VDC is deleted, the operation of the VDC is disrupted and all
resources are returned to the default VDC.
To remove an interface from a nondefault VDC and return it to the
default VDC, you must enter VDC configuration mode in the default VDC
and allocate the interface to the default VDC.
- This action cannot be performed from a nondefault VDC.
Initially, all physical resources are assigned to the default VDC. When interfaces are
reallocated to a different VDC, any existing configuration on the interface is removed.
When a VDC is removed, all resources that are associated with that VDC are returned to the
default. All processes that belong to the VDC are terminated, and forwarding information for
the VDC is removed from the forwarding engines.
You cannot move interfaces from a nondefault VDC to the default VDC from within the
nondefault VDC itself. To remove a physical interface from a nondefault VDC, you must enter
configuration mode in the default VDC and reallocate the interface to the default VDC.
Note
When you configure different VDCs from the default VDC, verify that you are configuring the
correct VDC. Accidentally making changes to the wrong VDC can affect switch operations.
The show feature command is used to verify which features have been specifically enabled on
the Cisco Nexus switches.
N7K-1# show vdc

vdc_name  state   mac
--------  ------  -----------------
N7K-1     active  00:18:ba:d8:3f:fd
RED       active  00:18:ba:d8:3f:fe
BLUE      active  00:18:ba:d8:3f:ff

N7K-1-RED# show vdc

vdc_name  state   mac
--------  ------  -----------------
RED       active  00:18:ba:d8:3f:fe
The scope of the show vdc commands depends on the VDC in which the commands are
executed. When these commands are executed in a nondefault VDC, the displayed information
is restricted to that VDC only. If these commands are executed from the default VDC, then they
display information on all VDCs, unless a specific VDC is entered as a command option.
Issuing the show vdc command from within the default VDC context lists all the active and
current VDCs. The default VDC has visibility over all nondefault VDCs.
Issuing the show vdc command within a nondefault VDC context provides information about
that VDC only. Nondefault VDCs have no visibility to one another or to the default VDC.
vdc id: 1
vdc name: N7K-1
vdc state: active
vdc mac address: 00:18:ba:d8:3f:fd
vdc ha policy: RELOAD
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Sun Jan  2 04:02:58 2011
vdc reload count: 0
vdc restart count: 0

vdc id: 2
vdc name: RED
vdc state: active
vdc mac address: 00:18:ba:d8:3f:fe
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Sat Jan 22 22:47:17 2011
vdc reload count: 0
vdc restart count: 0
The show vdc detail command provides more detailed information about the VDCs, including
name, state, and high-availability policies.
This example shows how to verify VDC interface information from within
the default VDC.
N7K-1# show vdc membership
vdc_id: 1 vdc_name: N7K-1 interfaces:
Ethernet2/1
Ethernet2/2
Ethernet2/3 Ethernet2/4 Ethernet2/5
Ethernet2/6
Ethernet2/7
Ethernet2/8 Ethernet2/9 Ethernet2/10
Ethernet2/11 Ethernet2/12 Ethernet2/13 Ethernet2/14 Ethernet2/15
Ethernet2/16 Ethernet2/17 Ethernet2/18 Ethernet2/19 Ethernet2/20
Ethernet2/21 Ethernet2/22 Ethernet2/23 Ethernet2/24 Ethernet2/25
Ethernet2/26 Ethernet2/27 Ethernet2/28 Ethernet2/29 Ethernet2/30
Ethernet2/31 Ethernet2/32 Ethernet2/33 Ethernet2/34 Ethernet2/35
Ethernet2/36 Ethernet2/37 Ethernet2/38 Ethernet2/39 Ethernet2/40
Ethernet2/41 Ethernet2/42 Ethernet2/43 Ethernet2/44 Ethernet2/45
Ethernet2/48
vdc_id: 2 vdc_name: RED interfaces:
Ethernet2/47
vdc_id: 3 vdc_name: BLUE interfaces:
Ethernet2/46
The show vdc membership command can be used to display the interfaces that are allocated to
the VDCs.
From the default VDC, the running configuration for all VDCs on the
device can be saved to the startup configuration by using one command.
The running configurations for all VDCs can be viewed from the default
VDC.
N7K-1# copy running-config startup-config vdc-all
N7K-1# show running-config vdc-all
!Running config for default vdc: N7K-1
version 5.0(3)
license grace-period
no hardware ip verify address identical
<output omitted>
!Running config for vdc: RED
switchto vdc RED
version 5.0(3)
feature telnet
<further output omitted>
You can save the running configuration for all VDCs by using the copy running-config
startup-config vdc-all command. The show running-config vdc-all command displays the
current configuration files for all VDCs.
Both commands must be issued from the default VDC, which has visibility for all VDCs. You
cannot view the configuration of other VDCs from a nondefault VDC.
From the default VDC, you can access nondefault VDCs by using the
switchto command.
N7K-1# switchto vdc RED
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2010, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
N7K-1-RED#
To switch from a nondefault VDC back to the default VDC, use the
switchback command.
N7K-1-RED# switchback
N7K-1#
You can navigate between the default and nondefault VDCs by using the switchto vdc
command. This action changes the context from the default to the specified nondefault VDC.
This command cannot be used to navigate directly between nondefault VDCs. To navigate
from one nondefault VDC to another, you must first issue the switchback command to return
to the default VDC. You can follow that command with a switchto command to enter the
configuration context for the desired nondefault VDC. This command is necessary to perform
the initial setup of the VDCs. When user accounts and IP connectivity are configured properly,
the VDC can be accessed over the network by using Secure Shell (SSH) or Telnet.
Cisco Nexus 2000 Series fabric extenders serve as remote line cards of
a Cisco Nexus 5000 or 7000 Series switch.
- FEXs are managed and configured from the Cisco Nexus switch.
Together, the Cisco Nexus switches and Cisco Nexus 2000 Series fabric
extenders combine benefits of ToR cabling with EoR management.
Cisco Nexus 2000 Series Fabric Extenders can be deployed with Cisco Nexus 5000 or Cisco
Nexus 7000 Series Switches. This deployment can create a data center network that combines
the advantages of a top-of-rack (ToR) design with the advantages of an end-of-row (EoR)
design.
Dual redundant Cisco Nexus 2000 Series Fabric Extenders are placed at the top of each rack.
The uplink ports on the fabric extenders (FEXs) are connected to a Cisco Nexus 5000 or Cisco
Nexus 7000 Series switch that is installed in the EoR position. From a cabling standpoint, this
design is a ToR design. The cabling between the servers and the Cisco Nexus 2000 Series
fabric extender is contained within the rack. Only a limited number of cables need to be run
between the racks to support the 10 Gigabit Ethernet connections between the FEXs and the
Cisco Nexus switches in the EoR position.
From a network-deployment standpoint, however, this design is an EoR design. The FEXs act
as remote line cards for the Cisco Nexus switches, so the ports on the Cisco Nexus 2000 Series
Switches act as ports on the associated switch. In the logical network topology, the FEXs
disappear and all servers appear as directly connected to the Cisco Nexus switch. From a
network-operations perspective, this design has the simplicity that is typically associated with
EoR designs. All the configuration tasks for this type of data center design are performed on the
EoR switches. No configuration or software maintenance tasks are associated with the FEXs.
All Cisco Nexus 2000 Series platforms are controlled by a parent Cisco Nexus switch. The
FEX is not a switch. When connected to the parent switch, the FEX operates as a line card
(module) of the parent switch. The parent switch controls all configuration, management, and
software updates. In the data plane, all forwarding, security, and quality of service (QoS)
decisions are made at the parent switch. Fibre Channel over Ethernet (FCoE) can be supported
with the Cisco Nexus 2232 Fabric Extender.
                         Cisco Nexus        Cisco Nexus        Cisco Nexus
                         2224TP GE          2248TP-E           2232PP
FEX host interfaces      24                 48                 32
FEX host interface       100BASE-T/         100BASE-T/         1/10 Gigabit
type                     1000BASE-T ports   1000BASE-T ports   Ethernet ports
FEX fabric interfaces    2                  4                  8
Fabric speed             20 Gb/s            40 Gb/s            80 Gb/s
Oversubscription         1.2:1              1.2:1              4:1
Performance              65 mpps            131 mpps           595 mpps
Minimum software         Cisco NX-OS        Cisco NX-OS        Cisco NX-OS
                         Release 5.2        Release 5.1        Release 5.2
The table lists FEXs that are supported on the Cisco Nexus 7000 Series switch.
Control plane
- Supervisor redundancy
- Power supply
- Fan
Data plane
- Forwarding ASIC redundancy
- Fabric ASIC redundancy
Fabric
- Isolated and redundant paths

Data plane:
- Forwarding is performed on the parent switch ASICs.
- A VN-Tag is added to frames that are sent between the FEX and the parent switch.
The Cisco Nexus 2000 Series fabric extender operates as a remote line card to the Cisco Nexus
switch. All control and management functions are performed by the parent switch. Forwarding
is also performed by the parent switch.
Physical host interfaces on the Cisco Nexus 2000 Series fabric extender are represented with
logical interfaces on the parent Cisco Nexus switch.
Packets sent to and from the Cisco Nexus 2000 Series fabric extender have a virtual network
tag (VNTag) added to them so that the upstream switch knows how to treat the packets and
which policies to apply. This tag does not affect the host, which is unaware of any tagging.
The link or links between the Cisco Nexus 2000 Series fabric extender and the upstream switch
multiplex the traffic from multiple devices that connect to the FEX.
The figure shows the VNTag frame format (field lengths in bytes):

MAC DA [6] | MAC SA [6] | VNTag [6] | 802.1Q [4] | TL [2] | Frame Payload | CRC [4]

The VNTag field carries the destination VIF, the source VIF, and the l, r, and version bits.
The figure has three parts: A and B show host-to-network forwarding (parts 1 and 2), and C shows network-to-host forwarding.
The figure describes packet processing on the Cisco Nexus 2000 Series fabric extender.
When the host sends a packet to the network (diagrams A and B), these events occur:
1. The frame arrives from the host.
2. The Cisco Nexus 2000 Series switch adds a VNTag, and the packet is forwarded over a
fabric link, using a specific VNTag. The Cisco Nexus 2000 Series switch adds a unique
VNTag for each Cisco Nexus 2000 Series host interface. These are the VNTag field values:
The source virtual interface is set based on the ingress host interface.
The p (pointer), l (looped), and destination virtual interface are undefined (0).
3. The packet is received over the fabric link, using a specific VNTag. The Cisco Nexus
switch extracts the VNTag, which identifies the logical interface that corresponds to the
physical host interface on the Cisco Nexus 2000 Series. The Cisco Nexus switch applies an
ingress policy that is based on the physical Cisco Nexus Series switch port and logical
interface:
Access control and forwarding are based on frame fields and virtual (logical)
interface policy.
Physical link-level properties are based on the Cisco Nexus Series switch port.
4. The Cisco Nexus switch strips the VNTag and sends the packet to the network.
When the network sends a packet to the host (diagram C), these events occur:
1. The frame is received on the physical or logical interface. The Cisco Nexus switch
performs standard lookup and policy processing, when the egress port is determined to be a
logical interface (Cisco Nexus 2000 Series) port. The Cisco Nexus switch inserts a VNTag
with these characteristics:
The destination virtual interface is set to be the Cisco Nexus 2000 Series port
VNTag.
The source virtual interface is set if the packet was sourced from a Cisco Nexus
2000 Series port.
The l (looped) bit filter is set if sending back to a source Cisco Nexus 2000 Series
switch.
The p bit is set if this frame is a multicast frame and requires egress replication.
When a FEX is connected to a Cisco Nexus 7000 Series switch, it should be connected by
using a 32-port 10-Gb/s M1 module.
When you use VDCs on a Cisco Nexus 7000 Series switch, all FEX fabric links should be in
the same VDC. In addition, all FEX host ports belong to the same VDC to which the FEX
fabric links are connected.
The figure shows the FEX connectivity terminology: fabric port, fabric port channel, FEX uplink, and FEX port.
Fabric port: The Cisco Nexus switch side of the link that is connected to the FEX is called
the fabric port.
Fabric port channel: The fabric port channel is the port channel between the Cisco Nexus
switch and FEX.
FEX uplink: The network-facing port on the FEX (that is, the FEX side of the link that is
connected to the Cisco Nexus switch) is called the FEX uplink. The FEX uplink is also
referenced as a network interface.
FEX port: The server-facing port on the FEX is called the FEX port, also referred to as
server port or host port. The FEX port is also referenced as a host interface.
Cisco Nexus 7000 Series Switches support only dynamic pinning, so the
FEX fabric interfaces must be members of a port channel.
N7K-1(config-vdc)# no allow feature-set fex
Configuration of a FEX on a Cisco Nexus 7000 Series switch is slightly different than the
configuration on a Cisco Nexus 5000 Series switch. This difference is partially caused by the
VDC-based architecture of Cisco Nexus 7000 Series Switches. Before any FEX can be
configured in any VDC, the services that the FEX feature requires must be installed in the
default VDC. To enable the use of the FEX feature set, use the install feature-set fex
command in the default VDC. After the FEX feature set has been installed in the default VDC,
you can enable the feature set in any VDC by using the feature-set fex command.
You can restrict the use of the FEX feature set to specific VDCs only. By default, all VDCs can
enable the FEX feature set after it has been installed in the default VDC. If you want to
disallow the use of FEXs in a specific VDC, you can use the no allow feature-set fex
command in VDC configuration mode for that VDC.
Another difference is that Cisco Nexus 7000 Series Switches do not support static pinning.
Dynamic pinning makes it unnecessary to specify the maximum number of pinning interfaces
by using the pinning max-links command.
The example in the figure shows how to configure a Cisco Nexus 2000 Series fabric extender
for use with a Cisco Nexus 7000 Series switch. The example shows the configuration to enable
the FEX feature set in the default VDC, followed by the configuration in the nondefault VDC
to which the FEX is connected.
Note
The use of a port channel to associate the FEX is mandatory on Cisco Nexus 7000 Series
Switches.
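As a minimal sketch of such a configuration, assuming FEX number 141 with fabric uplinks Ethernet 1/9 and 1/10 in the RED VDC (the values that appear in the verification examples that follow):

N7K-1(config)# install feature-set fex
N7K-1-RED(config)# feature-set fex
N7K-1-RED(config)# interface ethernet 1/9-10
N7K-1-RED(config-if-range)# channel-group 141
N7K-1-RED(config-if-range)# no shutdown
N7K-1-RED(config-if-range)# exit
N7K-1-RED(config)# interface port-channel 141
N7K-1-RED(config-if)# switchport
N7K-1-RED(config-if)# switchport mode fex-fabric
N7K-1-RED(config-if)# fex associate 141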
The Ethernet interfaces that connect to the FEX are now fabric
interfaces.
N5K# show interface brief

--------------------------------------------------------------------------------
Ethernet      VLAN  Type Mode   Status  Reason                 Speed     Port
Interface                                                                Ch #
--------------------------------------------------------------------------------
Eth1/1        1     eth  access down    SFP validation failed  10G(D)    --
Eth1/2        1     eth  access down    SFP not inserted       10G(D)    --
Eth1/3        1     eth  access up      none                   10G(D)    --
Eth1/4        1     eth  access up      none                   10G(D)    --
Eth1/5        1     eth  access down    SFP not inserted       10G(D)    --
Eth1/6        1     eth  access down    SFP not inserted       10G(D)    --
Eth1/7        1     eth  access down    SFP not inserted       10G(D)    --
Eth1/8        1     eth  access down    SFP not inserted       10G(D)    --
Eth1/9        1     eth  fabric up      none                   10G(D)    --
Eth1/10       1     eth  fabric up      none                   10G(D)    --
Eth1/11       1     eth  access up      none                   10G(D)    --
Eth1/12       1     eth  access up      none                   10G(D)    --
Eth1/13       1     eth  access up      none                   10G(D)    --
Eth1/14       1     eth  access up      none                   10G(D)    --
Eth1/15       1     eth  access up      none                   10G(D)    --
Eth1/16       1     eth  access up      none                   10G(D)    --
Use the show interface brief command to verify that the interfaces are configured as fex-fabric interfaces. The mode of these interfaces is listed as fabric in the command output.
Note
The output in the figure is from a Cisco Nexus 5000 Series switch.
DESCR: "N2K-C2248TP-1GE CHASSIS"                              VID: V03   SN: JAF1432CKHC
DESCR: "Fabric Extender Module: 48x1GE, 4x10GE Supervisor"    VID: V03   SN: SSI141308T5

FEX  Description  State   Model            Serial
141  FEX0141      Online  N2K-C2248TP-1GE  JAF1432CKHC
The figure describes commands that are used to verify the FEX:
The show fex FEX-number command displays module information about a FEX.
The show inventory fex FEX-number command displays inventory information for a FEX.
Note
The output in the figure is from a Cisco Nexus 7000 Series switch.
The show fex detail command displays detailed information about the FEX.
The newly connected FEX interfaces appear in the running configuration of the parent Cisco
Nexus switch. The interfaces are listed as FEX number, module, and slot number designation,
in that order. These interfaces are the host (server) interfaces. The fabric (uplink) interfaces of
the FEX do not appear.
The show interface command displays the Ethernet parameters of the FEX host interfaces,
which are connected to the Cisco Nexus switch through the fabric interfaces and physically
resident on the Cisco Nexus 2000 Series switch.
Note
The output in the figure is from a Cisco Nexus 7000 Series switch.
The figure shows the show interface fex-intf output, in which host interfaces Eth141/1/4 through Eth141/1/48 are listed under fabric interface Ethernet 1/9.
The port pinning can be verified by issuing the show interface fex-intf command. The output
of this command displays each uplink and the set of FEX interfaces that are associated to that
uplink.
The output in the figure shows that all FEX interfaces are associated with uplink Ethernet 1 /9
and that no interfaces use interface Ethernet 1 /10.
Note
The output in the figure is from a Cisco Nexus 5000 Series switch.
All FEXs that are connected to the system should now be discovered
and have an associated virtual slot number.
N7K-1-RED# show interface fex-fabric
      Fabric       Fabric       Fex                    FEX
Fex   Port      Port State     Uplink     Model            Serial
---------------------------------------------------------------
141   Eth1/9       Active        1     N2K-C2248TP-1GE  JAF1419ECAC
141   Eth1/10      Active        2     N2K-C2248TP-1GE  JAF1419ECAC
---   Eth1/11      Discovered    3     N2K-C2248TP-1GE  JAF1420AHHE
---   Eth1/12      Discovered    4     N2K-C2248TP-1GE  JAF1420AHHE
The server-facing ports on the FEX can be configured from the Cisco
Nexus switch.
N7K-1-RED(config)# interface ethernet 141/1/13
N7K-1-RED(config-if)# description ServerX
N7K-1-RED(config-if)# switchport access vlan 10
N7K-1-RED(config-if)# shutdown
When the FEX is fully recognized by the Cisco Nexus switch and configured, use the show
interface fex-fabric command to display the discovered FEXs. You can also see the associated
FEX number, which represents the virtual slot number of the FEX in the Cisco Nexus switch
virtualized chassis in the active state.
The individual server-facing FEX ports can now be configured from the Cisco Nexus switch.
Note
The output in the figure is from a Cisco Nexus 7000 Series switch.
Summary
This topic summarizes the key points that were discussed in this lesson.
Lesson 2
Virtualizing Storage
Overview
The purpose of this lesson is to describe how storage is virtualized for high availability and
configuration flexibility.
Objectives
Upon completing this lesson, you will be able to describe storage virtualization. You will be
able to meet these objectives:
The figure shows a disk that is divided into three partitions.
Server management:
- Individually managed
- Mirroring, striping, and concatenation coordinated with disk array groupings
- Each host with a different view of storage
Volume management:
- Individually managed
- Just-in-case provisioning
- Stranded capacity
- Snapshots within a disk array
- Array-to-array replication
In most SAN environments, each individual LUN must be discovered only by one server host
bus adapter (HBA). Otherwise, the same volume will be accessed by more than one file system,
leading to potential loss of data or security. There are basically three ways to prevent this
multiple access:
LUN masking: LUN masking, a feature of enterprise storage arrays, provides basic LUN-level security by allowing LUNs to be seen only by selected servers that are identified by
their port world wide name (pWWN). Each storage array vendor has its own management
and proprietary techniques for LUN masking in the array. In a heterogeneous environment
with arrays from different vendors, LUN management becomes more difficult.
LUN mapping: LUN mapping, a feature of Fibre Channel HBAs, allows the administrator
to selectively map some LUNs that have been discovered by the HBA. LUN mapping must
be configured on every HBA. In a large SAN, this mapping is a large management task.
Most administrators configure the HBA to automatically map all LUNs that the HBA
discovers. They then perform LUN management in the array (LUN masking) or in the
network (LUN zoning).
LUN zoning: LUN zoning, a proprietary technique that Cisco MDS switches offer, allows
LUNs to be selectively zoned to their appropriate host port. LUN zoning can be used
instead of, or in combination with, LUN masking in heterogeneous environments or where
Just a Bunch of Disks (JBODs) are installed.
Note
JBODs do not have a management function or controller and so do not support LUN
masking.
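As a minimal sketch of how zoning on a Cisco MDS switch confines a host port to its storage port, the following example uses a hypothetical VSAN, zone name, and pWWNs; LUN zoning further qualifies such zone members with specific LUNs.

MDS-A(config)# zone name ServerX_to_ArrayY vsan 10
MDS-A(config-zone)# member pwwn 21:00:00:e0:8b:01:02:03
MDS-A(config-zone)# member pwwn 50:06:01:60:3b:e0:11:22
MDS-A(config-zone)# exit
MDS-A(config)# zoneset name Fabric_A vsan 10
MDS-A(config-zoneset)# member ServerX_to_ArrayY
MDS-A(config-zoneset)# exit
MDS-A(config)# zoneset activate name Fabric_A vsan 10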
Server virtualization: A method of creating several virtual machines (VMs) from one
computing resource
Storage Virtualization
What is created:
- Block virtualization
- Disk virtualization
- Tape virtualization
- File system virtualization
- File/record virtualization
Where it is done:
- Host- or server-based virtualization
- Storage array-based virtualization
- Network-based virtualization
How it is implemented:
- In-band or symmetric virtualization
- OOB or asymmetric virtualization
The Storage Network Industry Association (SNIA) defines two main approaches to storage
system virtualization:
In-band (symmetric) virtualization: Data and control take the same path.
Out-of-band (OOB, asymmetric) virtualization: Data and control take different paths.
Host-Based Virtualization
Advantages:
- Independent of storage platform
- Independent of SAN transport
- Software solution
Considerations:
- High CPU overhead
- Licensed and managed per host
- Requires a software driver

Array-Based Virtualization
Advantages:
- Independent of host platform or OS
- Closer to disk drives
- High performance and scalable
Considerations:
- Vendor proprietary
- Management cost
- Can be complex

Network-Based Virtualization
Advantages:
- Independent of host platform or OS
- Independent of storage platform
- High performance and scalable
Considerations:
- Vendor proprietary
- Potential performance issues
- Can have scalability issues
Host-Based Virtualization
Host-based virtualization has certain advantages and disadvantages:
Advantages: This software solution is independent of the storage platform or vendor and
of the underlying SAN technology, Fibre Channel, Internet Small Computer Systems
Interface (iSCSI), and so on. Veritas offers a software-based solution.
Array-Based Virtualization
Advantages: The array is independent of the host platform or operating system. The
virtualization function is in the array controller and is much closer to the physical disk
drives. This closeness makes the solution more responsive, creates less SAN data traffic,
and provides higher performance and more scalability within the array.
Network-Based Virtualization
Storage Virtualization
FAIS is an ANSI T11 standards-based effort to create a common application programming
interface (API) for fabric applications to run on an underlying hardware platform. FAIS
supports storage functions that perform classic enterprise-level storage transformation
processes, for example, virtualization and RAID.
With symmetric virtualization, all I/Os and metadata are routed via a central virtualization
storage manager. Data and control messages use the same path. This design is architecturally
simpler but can create a bottleneck.
The virtualization engine does not need to reside in a completely separate device. The engine
can be embedded in the network as a specialized switch, or it can run on a server. To provide
alternate data paths and redundancy, two or more virtual storage management devices are
usually used. This redundancy can lead to issues of consistency between the metadata databases
that are used to perform the virtualization.
All data I/Os are forced through the virtualization appliance, restricting the SAN topologies that
can be used and possibly causing a bottleneck. The bottleneck is often addressed by using
caching and other techniques to maximize the performance of the engine. However, this
technique again increases complexity and leads to consistency problems between engines.
The server queries the metadata manager to determine the physical location of the data.
The server stores or retrieves the data directly across the SAN.
The metadata can be transferred in-band over the SAN or OOB over an Ethernet link. The latter
approach is more common because it avoids IP metadata traffic slowing the data traffic
throughput on the SAN. OOB transfer also does not require Fibre Channel HBAs that support
IP.
Each server that uses the virtualized part of the SAN must have a special interface or agent that
is installed to communicate with the metadata manager. The metadata manager translates the
logical data access to physical access for the server. This special interface might be software or
hardware.
Network-based virtualization provides these capabilities:
- Application integration
- Multipathing
- Data migration
- Highly resilient storage upgrades
- LUN abstraction
- Mirroring and striping
- Snapshots
- Replication
- RAID
- HA upgrades
- Multiple paths
Enables consolidation:
- Legacy investment protection
- Heterogeneous storage network
Servers are no longer responsible for volume management and data migration.
Existing and heterogeneous storage assets can be consolidated and fully used.
Summary
This topic summarizes the key points that were discussed in this lesson.
A LUN is a raw disk that can be partitioned into smaller LUNs. Host
systems are provided access to the smaller LUNs, thereby hiding the
raw disk from the system itself.
Host systems must not access the same storage area as other host
systems, or corruption of data can occur. To provide access to the
storage systems, storage system virtualization is used. This can be in
the form of OOB, in-band, or network-based storage virtualization.
Lesson 3
Objectives
Upon completing this lesson, you will be able to describe the benefits of server virtualization in
the data center. You will be able to meet these objectives:
Standard deployments of applications on computer systems consist of one operating system that
is installed on a computer. To provide the greatest stability in business environments, good
practice dictates that one type of application should be present on an operating system.
Otherwise, compatibility issues can cause unforeseen problems.
Although it provides great stability, this approach is not very cost efficient. Over time, growth
in the number of applications greatly expands the number of servers in a data center.
The figure compares a physical server, where an application and operating system run directly on the CPU, memory, storage, and network hardware, with a virtualized server, where multiple applications and operating systems share the same hardware through a hypervisor and a virtualization layer.
In IT terms, virtualization is a procedure or technique that abstracts physical resources from the services that they provide. In server virtualization, a layer is added between the server hardware and the operating system or systems that are installed on top of that hardware.
This strategy has several benefits:
The virtualization layer can segment the physical hardware into multiple, separate resource units that all draw from the same physical pool. That means that you can have multiple instances of the same type of operating system (or even different types of operating systems) running in parallel on one server. These separate instances are completely independent of one another; short of a hardware failure on the shared host, a problem in one instance does not affect the others. The instances behave as if they actually were running on individual servers.
Server virtualization allows you to run multiple operating systems on one physical server. This
feature can be useful in testing or production environments.
For example, you can copy a production virtual operating system and then practice upgrade
procedures on it. This strategy ensures that you can try an upgrade procedure in an environment
that is the same as the production environment, without any mistakes affecting users. After you
have thoroughly tested the upgrade, evaluated potential consequences, and understood any
production impact, a real upgrade can take place.
The figure shows several physical servers, each with its own CPU, memory, NIC, and disk, running web and database workloads on Windows, Linux 2.4, and Linux 2.6. The same workloads are then consolidated onto a single host, where the hypervisor presents virtual CPU, memory, NIC, and disk resources to each VM.
Hypervisor or VM monitor:
- Thin operating system between the hardware and the VMs
- Controls and manages hardware resources
- Manages VMs (create, destroy, and so on)
Each VM is presented with its own virtualized resources:
- CPU and memory
- Network
- Storage
The abstraction layer that sits between the operating system and hardware is typically referred
to as a hypervisor. This layer is like an operating system but is not intended for installation of
applications. Rather, the hypervisor supports installation of multiple types of operating systems.
A hypervisor performs several tasks:
It uses the resources of the physical server (the host) on which it is installed to provide
smaller or greater chunks to the operating systems (virtual machines [VMs]) that are
installed on it.
It provides connectivity between individual VMs as well as between VMs and the
outside world.
Per VM, the hypervisor presents an application and operating system with virtualized CPU, memory, NIC, and disk resources, along with its own identifiers and allocations:
- vMAC address
- vIP address
- Memory, CPU, and storage space
A VM is a logical container that holds all the resources that an operating system requires for
normal operation. Components such as a graphics adapter, memory, processor, and networking
are present.
As far as the operating system in a VM is concerned, there is no difference between the
virtualized components and components in a physical PC or server. The difference is that these
components are not physically present, but are rather virtualized representations of host
resources. Processor or memory resources are a greater or smaller percentage of actual
resources. The virtualized hard disk might be a specially formatted file that is visible as a disk
to the virtualized (guest) operating system. The virtualized network interface card (NIC) might
simply be a simulated virtual component that the hypervisor manipulates into acting as a
physical component.
A VM is supposed to be as close an equivalent as possible of a physical PC or server. The VM
must contain the same set of physical identifiers as you would expect from any other such
device. These identifiers include the MAC address, IP address, universally unique identifier
(UUID), and world wide name (WWN).
One of the benefits of a VM is that the administrator can easily set such identifiers to a desired
value. This task can be done by simply manipulating the configuration file of the VM.
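As an illustration only, the following minimal sketch shows the kinds of entries that such a VM configuration (.vmx) file can contain; the VM name, resource values, MAC address, and UUID shown here are hypothetical, and the exact set of keys varies by VMware release.
displayName = "web-vm01"
memsize = "4096"
numvcpus = "2"
ethernet0.present = "TRUE"
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:01:02"
ethernet0.networkName = "VM Network"
uuid.bios = "56 4d 8a 2c 7f 01 23 45-67 89 ab cd ef 01 23 45"
Editing keys such as ethernet0.address or uuid.bios while the VM is powered off is one way an administrator can set these identifiers to a desired value.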
The figure shows the four key properties of VMs running on VMware ESX: partitioning, isolation, encapsulation, and hardware abstraction.
VM Partitioning
VMs allow more efficient use of resources. A single host can serve many VMs, providing that
it has sufficient resources to do so.
In practice, the memory capacity of a host and the memory requirements of VMs are the
limiting factor.
A hypervisor on a host assigns sufficient resources to every VM that is defined.
VM Isolation
The security and reliability of VMs that share the same host are often a concern for prospective
clients. In practice, VMs in a virtualized environment enjoy the same level of separation or
security that is present in classic environments.
VMs that share the same host are completely isolated from one another. The only operational
hazard is improper design of shared resources, such as network bandwidth or disk access.
Although failure of a crucial hardware component such as a motherboard or a power supply can
bring down all the VMs that reside on the affected host, recovery can be much swifter than in a
classic environment. Other hosts in the virtual infrastructure can take over VMs from the failed
host, and downtime to the affected services is measured in minutes instead of hours.
VM Encapsulation
VMs are a set of files that describe them, define their resource usage, and specify unique
identifiers. As such, VMs are extremely simple to back up, modify, or even duplicate.
This feature provides benefits for everyday operations such as backup of VMs or deployments
in homogeneous environments such as classrooms.
VM Hardware Abstraction
VMs are easy to move between hosts. Abstraction can have several benefits:
Optimum performance: If a VM on a given host exceeds the resources of the host, then
that VM can be moved to another host that has sufficient resources.
Resource optimization: If the resource usage of one or more VMs has decreased, one or
more hosts might not be needed for a period. In such cases, VMs can be redistributed and
emptied hosts can be powered off to save on cooling and power.
CapEx savings:
- Fewer physical servers for the same amount of services
- Server reuse when needs change over time
- Consolidation on other data center levels
OpEx savings:
- Common driver environment for better stability
- Less cooling needed
- Lower power consumption
The figure describes cost savings that are associated with a virtualized infrastructure.
Virtualization is a technology that transforms hardware into software.
Virtualization allows you to run multiple operating systems as VMs on a single computer:
- Each copy of an operating system is installed into a VM.
Virtualization is a technique to take the physical resources of a host machine and turn them into
a pool. VMs can tap this pool, depending on what they need.
Although the VMs share resources from the pool, the resources that are assigned to one VM become unavailable to other VMs.
Virtualized operating systems are not simulated. They are complete, proper, off-the-shelf
operating systems that run on virtualized hardware.
The figure shows virtualization on the x86 architecture: each VM contains an application, an operating system, and virtualized CPU, memory, NIC, and disk, all running on a virtualization layer that is installed directly on the hardware.
Bare-metal virtualization: The virtualization layer runs directly on the hardware, without a host operating system.
Partial virtualization: When some (but not all) of the hardware environment is simulated, certain guest software might need to be modified to run in the environment.
The figure shows hosted virtualization: VMs run inside VMware Server, which is itself installed on a Windows or Linux operating system on the x86 architecture.
Host operating system-based virtualization requires a PC or server that already has an installed
operating system, such as a Microsoft Windows or a Linux environment. In addition to the
operating system, a virtualization application is installed. Within that application, VMs are
deployed.
The benefit of such an approach is that there is no need for a separate PC for everyday use and
a machine for virtualization. The drawback is that the host operating system uses up some
resources that could be assigned to the VMs.
An example of such virtualization is VMware Workstation.
This type of virtualization is most often used for application development and testing.
The figure shows bare-metal virtualization: VMs and a service console run directly on the VMware hypervisor, which is installed on the x86 architecture (CPU, memory, NIC, and disk).
Bare-metal hypervisor virtualization means that the virtualization logic acts as its own mini
operating system. VMs are installed in addition to that mini operating system. This method
minimizes the amount of resources that are unavailable to the VMs. Also, this method avoids
potential bugs and security vulnerabilities of the host operating system.
Such an approach is often used for server deployments in which a server is dedicated as a host
and all its resources are dedicated for VM deployments.
VMware ESX and ESXi are examples of such an implementation.
The figure shows the VMware vSphere suite: ESX/ESXi hosts with Virtual SMP and VMFS run the VMs on enterprise servers, network, and storage, while vCenter Server provides management through the vSphere Client, vSphere Web Access, the vSphere SDK, and plug-ins, along with services such as DRS, HA, and Consolidated Backup.
VMware ESX and ESXi are two main products that are used for data center virtualization
deployments.
ESXi is a small-footprint version of the VMware ESX server, with installed space consumption
of approximately 70 MB. As such, ESXi is well positioned for installations to bootable USB
sticks or direct integration into server hardware itself.
Both ESX and ESXi servers have similar characteristics in all aspects of VM creation and
running.
Although ESXi is now the main product, many installations of ESX are still used in data
centers.
The figure shows VMs running on VMware ESXi and on VMware ESX, each installed directly on the hardware.
The VMware ESX or ESXi environment can use multiple network connectivity options. A
standalone host can use a virtual switch (vSwitch). Multiple hosts, working as a group, can use
a distributed version of the vSwitch. To increase bandwidth and reliability, network teaming is
supported. For security and traffic separation, there is support for VLANs.
For storage requirements, there are multiple possibilities. Local storage can be used, although
doing so is highly unusual because it reduces the mobility options of VMs. Mount points on
either SAN or network attached storage (NAS) locations are more common.
With larger deployments, or for advanced functionalities such as VMware vMotion or a
distributed vSwitch, the VMware vCenter component can manage an ESX or ESXi
environment as a group.
VMware vSphere Client serves as a client-side application through which administrative tasks
are carried out.
The figure shows the VMware ESXi architecture: the VMkernel and hypervisor run each VM through its own virtual machine monitor (VMM), while management access is provided through the vSphere Client, the vCLI (scripting), vCenter Server, CIM (hardware management), and the vSphere API/SDK.
The figure shows the VMware ESX architecture, which adds a service console alongside the VMs on the VMware hypervisor running on the x86 architecture.
A virtual machine:
- Is referred to as a VM
- Behaves as a real PC or server
- Gets resources from a host
- Can be migrated between hosts
As shown in the figure, a VM consists of an application, an operating system, and virtualized hardware.
The figure shows the vCenter Server architecture: core services, distributed services, and additional services such as Update Manager and Converter, with an Active Directory interface, a database interface, user access control, and a vSphere API for third-party apps and plug-ins, all used to manage the ESX/ESXi hosts.
vSphere Client is a GUI component of the VMware environment. This component connects to a
vCenter Server and allows interaction with the ESX and ESXi servers and VMs.
Depending on access permissions, users can access parts or all of the virtualized environment.
For example, users can connect to the consoles of the VMs but be unable to start or stop the
VMs or change the parameters of the host on which the VMs run.
To simplify user management, the VMware environment supports integration with the
Microsoft Active Directory user database. Therefore, when users log on to the vSphere Client,
they can use their domain credentials. This feature creates a more secure environment by
avoiding several sets of credentials. (The use of multiple credentials increases the risk of people
writing them down so that they do not forget or confuse them.)
Change management is simplified, and the complexity that is usually associated with large
environments is minimized. This result is mainly because all VMs share the same hardware
components and have the same drivers. This fact eliminates the driver hunt that can be
associated with large environments with varying server types. This benefit also simplifies
troubleshooting.
Even with basic functionality, service downtime can be decreased by quickly restarting failed VMs on an alternative host. VM boot times are generally shorter than the boot times for a physical server.
Microsoft Hyper-V:
- Operating systems: Microsoft Windows Server 2008 R2 with the Hyper-V role
- Virtual partitions
- Live migration support: the Windows failover clustering feature and cluster shared volumes for virtual hard disk storage
Overview
Hyper-V is offered as a server role that is packaged into the Microsoft Windows Server 2008
R2 installation or as a standalone server. In either case, Hyper-V is a hypervisor-based
virtualization technology for x64 versions of Windows Server 2008. The hypervisor is a
processor-specific virtualization platform.
Hyper-V isolates operating systems that run on the VMs from one another through partitioning
or logical isolation by the hypervisor. Each hypervisor instance has at least one parent partition
that runs Windows Server 2008. The parent partition houses the virtualization stack, which has
direct access to hardware devices such as NICs. This partition is responsible for creating the
child partitions that host the guest operating systems. The parent partition creates these child
partitions by using the hypercall application programming interface (API), which is exposed to
Hyper-V.
Virtual Partitions
A virtualized partition does not have access to the physical processor, nor does it manage its
real interrupts. Instead, the partition has a virtual view of the processor and runs in a guest
virtual address space. Depending on the configuration of the hypervisor, this space might not be
the entire virtual address space. A hypervisor can choose to expose only a subset of the
processors to each partition. The hypervisor, using a logical synthetic interrupt controller,
intercepts the interrupts to the processor and redirects them to the respective partition. Hyper-V
can hardware-accelerate the address translation between various guest virtual address spaces by
using an I/O memory management unit (IOMMU). An IOMMU operates independently of the
memory management hardware that the CPU uses.
Child partitions do not have direct access to hardware resources; instead, they have a virtual
view of the resources, in terms of virtual devices. Any request to the virtual device is
redirected, via the virtual machine bus (VMBus), to the devices in the parent partition, which
manages the requests. The VMBus is a logical channel that enables interpartition
communication. The response is also redirected via the VMBus. If the devices in the parent partition are themselves virtual devices, the request is redirected further until it reaches a parent partition that has access to the physical devices.
Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and processes device-access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects the requests, via the VMBus, to the VSPs in the parent partition. This entire process is transparent to the guest operating
system.
Live Migration:
- No impact on VM availability
- VMs are moved seamlessly while still online
- Pre-copies the memory of the migrating VM to the destination physical host to minimize transfer time
The figure shows a VM being moved by Live Migration between two Hyper-V hosts that share a resource pool.
Summary
This topic summarizes the key points that were discussed in this lesson.
Lesson 4
Objectives
Upon completing this lesson, you will be able to describe the problems that Cisco Nexus
1000V Series switches solve. You will be able to meet these objectives:
Describe how the Cisco Nexus 1000V Series switch takes network visibility to the VM
level
Describe how the VSM and VEM integrate with VMware ESX or ESXi and vCenter
The figure shows the traditional model: each application and operating system runs on dedicated server hardware that the server administrator manages, and the servers connect through access ports to the access and distribution switches that the network administrator manages.
Before virtualization, each server ran its own operating system, usually with a single
application. The network interface cards (NICs) were connected to access layer switches to
provide redundancy. Network security, quality of service (QoS), and management policies were
created on these access layer switches and applied to the access ports that corresponded to the
appropriate server.
If a server needed maintenance or service, it was disconnected from the network, during which
time any crucial applications would need to be offloaded manually to another physical server.
Connectivity and policy enforcement were static and seldom required any modifications.
Server virtualization has made networking, connectivity, and policy enforcement much more
challenging. A feature such as VMware vMotion, in which devices running applications can
move from one physical host to another, is one example.
The challenges include the following:
Providing network visibility from the virtual machine (VM) virtual NICs (vNICs) to the
physical access switch
Providing consistent mobility of the policies that are applied to the VMs during a vMotion
event
The figure shows the virtualized model: VMs connect to vSwitches inside the ESXi hosts, which the server administrator manages, while VLAN trunks connect the hosts to the access and distribution switches that the network administrator manages.
The VMware server virtualization solution extends the access layer into the VMware ESX or
ESXi server with the VM networking layer. These components are used to implement server
virtualization networking:
Physical networks: Physical devices connect VMware ESX hosts for resource sharing.
Physical Ethernet switches are used to manage traffic between ESX hosts, the same as in a
regular LAN environment.
Virtual networks: Virtual devices run on the same system for resource sharing.
Physical NIC: A physical NIC is used to create the uplink from the ESX or ESXi host to
the external network. (This NIC is represented by an interface known as a VMNIC.)
The figure shows a physical server running the ESX hypervisor: the VMs connect through their vNICs to a vSwitch, which uses the physical NICs of the server as uplinks.
In the VMware environment, the network is one component that can be virtualized. The
network component includes most aspects of networking, apart from the NIC in the host. Even
then, the NICs most often play the role of uplink ports in the vSwitches that are created on the
VMware hypervisor level.
Furthermore, VMs have a selection of types of vNICs that act the same as a physical NIC
would in a physical PC or server.
The figure shows many VMs spread across multiple hardware hosts, forming a virtual access layer that sits on top of the physical access layer.
The switching component in the ESX or ESXi host creates the need to manage a virtual access
layer as well as the physical access layer. The virtual switching infrastructure creates this
virtual access layer, which the VMware administrators manage.
The figure shows VMs connecting through their vNICs to a vSwitch:
- Port group: assigned to a VLAN, with VMs assigned to the port group
- Uplink: a physical NIC (VMNIC2) that connects the vSwitch to the external network
A VMware vSwitch is a virtual construct that performs network switching between the VMs on
a particular VMware host and the external network.
Zero, one, or multiple (as many as 32) physical network ports can be assigned to a vSwitch as uplinks. Assigning more than one uplink provides both greater bandwidth and greater reliability. If the vSwitch does not have an uplink port assigned to it, then it will switch traffic only between VMs within the host.
The vSwitch supports:
- Trunk ports with 802.1Q for VLANs
The figure shows two port groups (VLAN 10 and VLAN 20) whose traffic leaves the vSwitch over uplinks (VMNIC0, VMNIC1, and VMNIC2), including a trunk that carries VLANs 10 and 20.
A single VMware host can have multiple vSwitches configured, if needed. Those switches will
be separate from one another, as the VMs are.
vSwitches are Layer 2 devices. They do not support routing of Layer 3 traffic. The vSwitch
performs switching of traffic between VMs that are present on the same host. Other traffic is
forwarded to the uplink port.
vSwitches support trunking, port channels, and the Cisco Discovery Protocol for discovering and responding to neighbor device queries.
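As a hedged sketch only, a standard vSwitch of this kind could be built from the ESX host CLI with the esxcfg-vswitch utility; the vSwitch name, port group name, VLAN ID, and uplink NIC below are example values.
# Create a vSwitch and attach a physical uplink NIC to it
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
# Add a port group and tag it with VLAN 10
esxcfg-vswitch -A "PG-VLAN10" vSwitch1
esxcfg-vswitch -v 10 -p "PG-VLAN10" vSwitch1
# List the vSwitches, port groups, and uplinks to verify the result
esxcfg-vswitch -l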
The figure adds further standard vSwitch capabilities:
- VLAN 4095: tagged traffic is passed up to the guest operating system
- NIC teaming across uplinks (VMNIC0, VMNIC1, VMNIC2), for example with IP hash load balancing
- Port groups for VLAN 10 and VLAN 20 carried over a trunk uplink (VLANs 10 and 20), alongside a separate uplink for VLAN 30
The figure continues to describe the standard switch operation in a virtual network.
The figure shows an ESXi server hosting VMs, with the vSwitch configuration performed through vCenter Server.
A vSwitch is created on the ESX or ESXi host where it is needed. A single host can have
multiple vSwitches, and every vSwitch acts as an independent entity.
A vSwitch can have as many as 1016 ports, and as many as 32 NIC ports can be assigned to it.
The figure shows the vSwitch components on a host: vNICs on the VMs and the service console, ports and port groups (a VMotion port, a VM port group, and an SC port) on the vSwitches, and, on the physical side, physical NICs that connect to the physical switches.
The vSwitch consists of several components, some virtual and some physical:
vNICs: Present on VMs and connected to the vSwitch through its virtual ports
Port groups: Logical groupings of switch ports that have the same configuration
Physical NICs: Act as switch uplinks and provide connectivity to the external network.
2-81
The figure shows the port types on a vSwitch: VM port groups (VLAN 20 and VLAN 30), a VMkernel port, a service console port, and uplink ports.
The most common types of switch ports are the VM ports and port groups. These ports
represent connection points for the VMs.
Two other port types might be present on the VMware host. First is the VMkernel port. This
port type is used for advanced functions such as vMotion or access to Network File System
(NFS) and Internet Small Computer Systems Interface (iSCSI) storage. Management traffic
also flows over this port. Second is the service console port. This port type, which is available
only on the ESX (not the ESXi) version of the host, is used to gain console-type CLI access to
the host management.
The figure shows management, iSCSI, VMotion, and VM traffic either sharing one vSwitch on separate port groups or placed on separate, dedicated vSwitches.
The figure shows an example of network segregation, achieved by the use of either VLANs or
different vSwitches. Either solution might be appropriate, depending on the circumstances.
This topic describes the benefits of the VMware vNetwork Distributed Switch (vDS).
The figure compares standard vSwitches, which are configured per host, with a vNetwork Distributed Switch (vDS) that spans multiple hosts.
VMware vSphere 4 introduced the vDS, a distributed virtual switch. With the vDS, multiple
vSwitches within an ESX or ESXi cluster can be configured from a central point. The vDS
automatically applies changes to the individual vSwitches on each ESX or ESXi host.
The feature is licensed and relies on VMware vCenter Server. The feature cannot be used for
individually managed hosts.
The VMware vDS and vSwitch are not mutually exclusive. Both devices can run in tandem on
the same ESX or ESXi host. An example of this type of configuration would be running the
Cisco Nexus 1000V VSM on a host that it is controlling. In this scenario, the VSM runs on a
vSwitch that is configured for VSM connectivity, while controlling a vDS that runs a VEM on
the same host.
The vDS is the next step beyond the vSwitch. Whereas a vSwitch is managed individually on the host on which it was created, the vDS is managed globally across all hosts.
The vDS provides greater management uniformity because per-host configuration segmentation is avoided.
A vDS requires an installed vCenter Server.
The figure shows one distributed vSwitch (vDS) spanning four hosts, with a distributed virtual port group (VM Network) serving the VMs on all of the hosts.
The vDS adds additional functionality and simplified management to the VMware network.
The vDS adds the ability to use private VLANs (PVLANs), perform inbound rate limiting, and
track VM port state with migrations. Additionally, the vDS is a single point of network
management for VMware networks. The vDS is a requirement for the Cisco Nexus 1000V
Series switch.
vSwitches are configured manually on each host.
The figure contrasts the per-host view, in which each host has its own vSwitch0 with its own VM port group, service console port, and physical adapters, with the single DVSwitch0 view, in which one VM Network port group and a DVUplinks port group (bundling the uplink NIC adapters from all hosts) are managed in one place.
The figure shows the conceptual difference in management for a standard vSwitch environment
versus a vDS environment. The standard vSwitch requires a separate configuration from a
separate management panel. The vDS requires only one management panel for one switch that
spans multiple hosts.
The figure shows the DVSwitch0 view in vCenter: a VM Network port group containing eight VMs (AD, DB, DHCP, and web servers) and a DVUplinks port group whose Uplink0 bundles vmnic1 from four hosts (vc1.cisco.com through vc4.cisco.com).
PVLAN support enables broader compatibility with existing networking environments, using
PVLAN technology. PVLANs enable users to restrict communication between VMs on the
same VLAN or network segment. This feature significantly reduces the number of subnets that
are needed for certain network configurations.
PVLANs are configured on a vDS with allocations made to the promiscuous PVLAN, the
community PVLAN, and the isolated PVLAN. Within the subnet, VMs on the promiscuous
PVLAN can communicate with all VMs. VMs on the community PVLAN can communicate
among themselves and with VMs on the promiscuous PVLAN. VMs on the isolated PVLAN
can communicate only with VMs on the promiscuous PVLAN.
Note
Adjacent physical switches must support PVLANs and be configured to support the PVLANs
that are allocated on the vDS.
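As a hedged sketch of what the note implies, the PVLAN allocation might look as follows on an adjacent Cisco NX-OS switch; the VLAN IDs (primary 100, isolated 101, community 102) are assumptions chosen for illustration and must match the PVLANs allocated on the vDS.
switch(config)# feature private-vlan
switch(config)# vlan 101
switch(config-vlan)# private-vlan isolated
switch(config-vlan)# exit
switch(config)# vlan 102
switch(config-vlan)# private-vlan community
switch(config-vlan)# exit
switch(config)# vlan 100
switch(config-vlan)# private-vlan primary
switch(config-vlan)# private-vlan association 101-102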
Network vMotion is the tracking of VM networking state, such as counters or port statistics, as
the VM moves from host to host on a vDS. This tracking provides a consistent view of a virtual
network interface, regardless of the VM location or vMotion migration history. This feature
greatly simplifies network monitoring and troubleshooting activities in which vMotion is used
to migrate VMs between hosts.
The vDS expands upon the egress-only traffic-shaping feature of standard switches with
bidirectional traffic-shaping capabilities. Egress (from the VM to the network) and ingress
(from the network to the VM) traffic-shaping policies can now be applied on port group
definitions.
Traffic shaping is useful when you want to limit the traffic to or from a VM or group of VMs.
This policy is usually implemented to protect a VM or other traffic in an oversubscribed
network. Policies are defined by three characteristics: average bandwidth, peak bandwidth, and
burst size.
The figure shows a standard vSwitch and the vDS (DVSwitch0) coexisting in the same vCenter view, each with its own port groups and physical adapters.
The VMware vSwitch and vDS are not mutually exclusive and can coexist within the same
vCenter management environment. Physical NICs (VMNICs) may be assigned to either the
vSwitch or the vDS on the same ESX or ESXi host.
You can also migrate the ESX service console and VMkernel ports from the vSwitch, where
they are assigned by default during ESX installation, to the vDS. This arrangement facilitates
the single point of management for all virtual networking within the vCenter data center object.
Most configuration that is applied to the physical network switch port that connects to a host affects all the VMs on that particular host.
Standard network troubleshooting tools reach only as far as the physical server. Further analysis, in collaboration with the VM administrator, must be performed to complete the process.
Feature                  Physical Network         Virtual Network
Network visibility       Individual server        Physical server
Port configuration       Individual server        Physical server
Network configuration    Network administrator    VM and network administrators
Security policies        Individual server        Physical server
The figure shows the evolution of the virtual access layer: standard per-host vSwitches, the vNetwork Distributed Switch (vDS), and the vNetwork platform with a third-party switch such as the Cisco Nexus 1000V.
The Cisco server virtualization solution uses a technology that Cisco and VMware developed
jointly. The network access layer is moved into the virtual environment to provide enhanced
network functionality at the VM level.
This feature can be deployed as a hardware- or software-based solution, depending on the data
center design and demands. Both deployment scenarios offer VM visibility, policy-based VM
connectivity, policy mobility, and a nondisruptive operational model.
VN-Link
Cisco and VMware jointly developed VN-Link technology, which has been proposed to the
IEEE for standardization. The technology is designed to move the network access layer into the
virtual environment, to provide enhanced network functionality at the VM level.
vSwitch Model                       Details
vNetwork Standard Switch            Host based: 1 or more per ESX host
vNetwork Distributed Switch (vDS)   Distributed: 1 or more per data center
Cisco Nexus 1000V                   Distributed: 1 or more per data center
With vSphere 4, VMware customers can now enjoy the benefits of three virtual networking
solutions: vSwitch, vDS, and the Cisco Nexus 1000V Series switch.
The Cisco Nexus 1000V Series switch bypasses the vSwitch by using a Cisco software switch.
This model provides a single point of configuration for the networking environment of multiple
ESX or ESXi hosts. Additional functionality includes policy-based connectivity for the VMs,
network security mobility, and a nondisruptive software model.
VM connection policies are defined in the network and applied to individual VMs from within
vCenter. These policies are linked to the universally unique identifier (UUID) of the VM and
are not based on physical or virtual ports.
The Cisco Nexus 1000V (software based) provides:
- Policy-based VM connectivity
- Mobility of network and security properties
- A nondisruptive operational model
The figure shows the Cisco Nexus 1000V running on the ESX hypervisor in a physical server, with defined policies (Web Apps, HR, DB, Compliance) applied to the VMs.
The Cisco Nexus 1000V Series switch provides the following important features:
Policy-based VM connectivity: The network administrator can now apply a policy to the
VM level.
Mobility: When a VM is migrated from one host to another, the network and security
policies can follow seamlessly, without network reconfiguration.
Nondisruptive: The model is nondisruptive because the vDS is already in place. The VMware administrator continues normal tasks, but without needing to prepare the network configuration side of the connection.
Policy-based VM connectivity:
- Policies are defined in the network
- Policies are applied in vCenter
- Policies are linked to the VM UUID
The figure shows VM connection policies (Web Apps, HR, DB, Compliance) defined on the Cisco Nexus 1000V and pushed to vCenter Server, where they are applied to VMs on the ESX hypervisors.
Cisco Nexus 1000V Series switch provides for policy-based connectivity of VMs. This policy
is defined and applied by network administrators rather than by VMware administrators. This
feature allows network administrators to regain control of their responsibilities and provide
support at the level of the individual VM.
The figure illustrates how a policy is defined and pushed down to vCenter. The VMware
administrator can then apply that policy to the VM on creation or modification.
The figure shows policy mobility: as a VM moves between ESX hypervisors, the defined policies (Web Apps, HR, DB, Compliance) move with it, ensuring consistent VM security.
All policies that are applied and defined fully support VMware mobility capabilities such as
vMotion or high availability. Policies remain applied to the VM even as it moves from one host
to another.
Network benefits of the Cisco Nexus 1000V:
- Improved scalability
- VM-level visibility
- Unified network management and operations
- Improved operational security
Cisco Nexus 1000V Series Switches can be introduced to an existing virtual environment. With
proper planning and deployment, the migration from VMware native virtual networking can be
nondisruptive for running services.
VMware administrators reduce their workload by relinquishing control of virtual networking to
network administrators.
Cisco Nexus 1000V Series Switches introduce unified management with the rest of the IP
network by using the same familiar techniques and commands that are available on other
network platforms.
Layer 2 features:
- VLAN, PVLAN, 802.1Q
- Link Aggregation Control Protocol (LACP)
- vPC host mode
Security features:
- Layer 2, 3, and 4 access lists
- Port security
Cisco Nexus 1000V Series Switches also introduce additional functionality to virtual network
connectivity that standard vSwitches or vDSs do not have or have to a lesser extent:
Monitoring tools such as Switched Port Analyzer (SPAN), Encapsulated Remote SPAN
(ERSPAN), and Wireshark are a part of the Cisco Nexus 1000V Series switch.
NetFlow provides visibility into how, when, and where network traffic is flowing.
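As a hedged sketch, an access list and port security of this kind could be expressed on the Cisco Nexus 1000V through a port profile; the ACL name, rule, and profile name below are assumptions used only for illustration.
n1000v(config)# ip access-list web-only
n1000v(config-acl)# permit tcp any any eq 80
n1000v(config-acl)# exit
n1000v(config)# port-profile pod1VMweb
n1000v(config-port-prof)# ip port access-group web-only in
n1000v(config-port-prof)# switchport port-security
n1000v(config-port-prof)# switchport port-security maximum 1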
VSM:
- Provides the CLI into the Cisco Nexus 1000V
- Leverages Cisco NX-OS
- Controls multiple VEMs as a single network device
The figure shows one VSM controlling several Cisco VEMs, each of which hosts multiple VMs.
The VSM is a virtual equivalent of the supervisor modules that can be found in other Cisco
Nexus Operating System (NX-OS) devices. The VSM provides the platform on which Cisco
NX-OS runs and that interacts with other components that are part of the Cisco Nexus 1000V
Series switch.
Management access to the VSM is provided through a Cisco NX-OS CLI. This CLI has the
same syntax and behavior as the CLI on other Cisco Nexus devices.
All the line cards, such as VEMs, that connect to the VSM behave as a single network device.
The VSM can reside on a VM or a Cisco Nexus 1010 appliance.
The figure shows the Cisco VEMs, one per host, each providing switching for the VMs on that host.
The VEM is the virtual equivalent of a line card of a standard switch. The VEM resides on
every VMware host on the hypervisor layer. A VEM provides connectivity among the VMs and
between the VMs and the outside network, through the physical NIC ports in the host. Multiple
VEMs that communicate with one VSM or a set of VSMs correspond to one logical switch.
VEMs on different hosts do not have a direct line of communication to one another. Rather,
they require an outside switch to link them. The VEM-to-VSM communication path carries
only control traffic.
The Cisco Nexus 1000V Series virtual chassis is a term that encompasses the Cisco Nexus 1000V Series components, such as the VSMs and VEMs.
vsm# show module
Mod  Ports  Module-Type                 Model         Status
---  -----  --------------------------  ------------  ----------
1    0      Virtual Supervisor Module   Nexus1000V    active *
2    0      Virtual Supervisor Module   Nexus1000V    ha-standby
3    248    Virtual Ethernet Module     NA            ok
The Cisco Nexus 1000V Series virtual chassis behaves as if it were a physical device with
multiple line cards. For example, the show module command in the Cisco Nexus 1000V CLI
displays the VSMs and VEMs in the same way that it would display supervisors and line cards
on a Cisco Nexus 7000 Series switch.
The figure shows the control connection between the Cisco VSMs and a Cisco VEM, over which heartbeats are exchanged.
Communication between the VSM and VEM is provided through two distinct virtual interfaces:
the control and packet interfaces.
The control interface carries low-level messages to each VEM, to ensure proper configuration
of the VEM. A 2-second heartbeat is sent between the VSM and the VEM, with a 6-second
timeout. The control interface maintains synchronization between primary and secondary
VSMs. The control interface is like the Ethernet out-of-band channel in switches such as the
Cisco Nexus 7000 Series switch.
The packet interface carries network packets, such as Cisco Discovery Protocol or Internet
Group Management Protocol (IGMP) control messages, from the VEM to the VSM.
You should use one or two separate VLANs for the control interface and for the packet
interface.
Being VLAN interfaces, the control and packet interfaces require Layer 2 connectivity.
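As a hedged sketch, the control and packet VLANs are defined on the VSM in the svs-domain configuration; the domain ID and VLAN numbers below are assumed values for illustration.
n1000v(config)# svs-domain
n1000v(config-svs-domain)# domain id 100
n1000v(config-svs-domain)# control vlan 260
n1000v(config-svs-domain)# packet vlan 261
n1000v(config-svs-domain)# svs mode L2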
The figure shows the management connection between the Cisco VSMs and vCenter Server.
Communication between the VSM and vCenter is provided through the VMware Virtual
Infrastructure Methodology (VIM) application programming interface (API) over Secure
Sockets Layer (SSL). The connection is set up on the VSM and requires installation of a
vCenter plug-in, which is downloaded from the VSM.
After communication between the two devices is established, the Cisco Nexus 1000V vDS is
created in vCenter.
This interface is known as the out-of-band (OOB) management interface. Although not
required, best practice is to have this interface, vCenter, and host management in the same
VLAN.
Port profiles are the Cisco Nexus 1000V variant of port groups from VMware.
Although you can configure individual ports on the Cisco Nexus 1000V, configuring them through port profiles is preferred.
The next figure shows an example of a port profile configuration.
Port profiles are used to configure interfaces in the Cisco Nexus 1000V Series switch with a
common set of configuration commands. A port profile can be assigned to multiple interfaces.
Any changes to a port profile are automatically propagated across all interfaces that are
associated with that port profile.
In vCenter Server, the port profile is represented as a port group. Both virtual and physical
interfaces are assigned in vCenter Server to a port profile and perform these functions:
Note
Any manual configuration of an interface overrides the port profile configuration. Manual
configuration is not recommended for general use but rather for tasks such as quick testing
of a change.
Port profiles correspond to port groups within VMware. By default, the port group created
within VMware for each port profile has the same name. VMware administrators use the port
group to assign network settings to VMs and uplink ports.
N1000v-VSM(config)# port-profile pod1VMdata
N1000v-VSM(config-port-prof)# switchport mode access
N1000v-VSM(config-port-prof)# switchport access vlan 102
N1000v-VSM(config-port-prof)# vmware port-group pod1VMdata
N1000v-VSM(config-port-prof)# no shut
N1000v-VSM(config-port-prof)# state enabled
N1000v-VSM(config-port-prof)# vmware max-ports 12
When a port profile is created and enabled, a corresponding port group is created in vCenter.
By default, this port group has the same name as the profile, but this name is configurable.
VMware administrators use the port profile to assign network settings to VMs and uplink ports.
When a VMware ESX or ESXi host port (VMNIC) is added to a vDS that the Cisco Nexus
1000V Series switch controls, an available uplink port group is assigned and those settings are
applied. When a NIC is added to a VM, an available VM port group is assigned. The network
settings that are associated with that profile are inherited.
A NIC in VMware is represented by an interface that is called a VMNIC. The VMNIC number
is allocated during VMware installation.
The figure shows the port profile workflow:
1. Produce: The network administrator defines the port profile on the VSM.
2. Push: The port profile is pushed to Virtual Center (vCenter).
3. Consume: The server administrator assigns the port profile to VMs on the ESX servers.
Cisco Nexus 1000V Series Switches provide an ideal model in which network administrators
define network policies that virtualization or server administrators can use as new VMs are
created. Policies that are defined on the Cisco Nexus 1000V Series switch are exported to
vCenter and assigned by the server administrator as new VMs require access to a specific
network policy. This concept is implemented on the Cisco Nexus 1000V Series switch by using
a feature called port profiles. The Cisco Nexus 1000V Series switch with the port profile
feature eliminates the requirement for the server administrator to create or maintain a vSwitch
and port group configurations on any of their ESX or ESXi hosts.
Port profiles separate network and server administration. For network administrators, the Cisco
Nexus 1000V feature set and the ability to define a port profile by using the same syntax as for
existing physical Cisco switches help to ensure consistent policy enforcement without the
burden of managing individual switch ports. The Cisco Nexus 1000V solution also provides a
consistent network management, diagnostic, and troubleshooting interface to the network
operations team, allowing the virtual network infrastructure to be managed like the physical
infrastructure.
For consistent workflow, continue to choose Port Groups when configuring a VM in vCenter.
Tasks before the vDS or Cisco Nexus 1000V (per the figure): vSwitch configuration, security, visibility, and management are all vCenter-based or per-vSwitch tasks for the VMware administrator; the network administrator column is N/A throughout.
The figure shows typical administrative tasks before the introduction of vDS or Cisco Nexus
1000V Series switch. There is no participation by the network administrator.
Tasks after the vDS and Cisco Nexus 1000V (per the figure): vSwitch configuration is automated; for the network administrator it is handled the same as the physical network and is policy-based, security is policy-based, and visibility is VM-specific for both teams; for the VMware administrator, management remains unchanged and vCenter-based.
The figure redefines the administrative tasks with the introduction of the vDS and Cisco Nexus
1000V. Some tasks are unchanged, whereas others are now the responsibility of the network
team.
Summary
This topic summarizes the key points that were discussed in this lesson.
To provide connectivity for VMs in the virtual server environment, vendors such as
VMware have developed a software-based switch called a vSwitch, which resides in
the physical server host. VMware administrators create, modify, and manage this
vSwitch, which is outside of the control of the network team in the data center.
To enhance the capabilities of the vSwitch, VMware developed the vDS. This
provided more functionality, but was still managed by the VMware administrators,
through the vCenter Server GUI.
To provide better visibility to the network team for the growing virtual server access layer, Cisco worked with VMware to develop Cisco's first software-based switch, the Cisco Nexus 1000V switch. This switch is managed by the network team, providing the capability of pushing network-based policies down to the virtual server layer.
The Cisco Nexus 1000V switch comprises a VSM and a VEM. These modules create the vSwitch. This vSwitch integrates into the VMware environment and has connectivity to the vCenter Server. Policies that the network team creates can be pushed down to vCenter and applied to the virtual servers that the VMware administrators manage.
Lesson 5
Objectives
Upon completing this lesson, you will be able to verify the initial setup and operation for Cisco
Nexus 1000V Series Switches. You will be able to meet these objectives:
Identify the commands that are used to verify the initial configuration and module status on
the Cisco Nexus 1000V Series switch
Identify how to verify VEM status on the VMware ESX or ESXi host
There are several ways to install and deploy the VSM. The preferred method is to use an Open
Virtualization Appliance (OVA) file. This method provides the highest degree of guidance and
error-checking for the user.
All other methods are less streamlined and require the administrator to be knowledgeable.
However, these other methods work well in certain situations.
Open Virtualization Format (OVF) files are standardized file structures that are used to deploy
virtual machines (VMs). You can create and manage OVF files by using the VMware OVF
Tool.
An OVA file is a single-file archive that packages an OVF descriptor and its related files.
After the Cisco Nexus 1000V Series switch has been installed on a VM or Cisco Nexus 1010
appliance, the initial configuration dialog box is displayed. The network administrator performs
the initial configuration to provide a basic configuration for the Cisco Nexus 1000V Series
Switch. All further configurations, such as port profiles, are configured at the CLI in
configuration mode.
To verify the initial configuration and subsequent modifications to the configuration, use the
show running-config command at the Cisco Nexus 1000V CLI.
To establish a connection between the Cisco Nexus 1000V Series switch and vCenter, the
network administrator configures a software virtual switch (SVS) connection. This connection
must be in place for the Cisco Nexus 1000V Series switch to push configuration parameters
such as port profiles to vCenter.
To verify that the SVS connection is in place, use the show svs connections command.
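For reference, the SVS connection that this command verifies is configured on the VSM along the following lines; this is a hedged sketch, and the IP address and data center name are placeholders.
n1000v(config)# svs connection vcenter
n1000v(config-svs-conn)# protocol vmware-vim
n1000v(config-svs-conn)# remote ip address 10.1.100.20
n1000v(config-svs-conn)# vmware dvs datacenter-name DC-Pod1
n1000v(config-svs-conn)# connect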
You can perform further verification of the connection between the Cisco Nexus 1000V Series
switch and vCenter by using the show svs domain command. Each Cisco Nexus 1000V Series
switch uses one domain ID. All ESX or ESXi hosts that have a VEM installed listen to updates
from one domain ID: the virtual chassis in which they reside. This domain ID is in all updates
from vCenter to the ESX or ESXi hosts on which the VEMs are installed.
To verify the domain ID parameters, use the show svs domain command on the Cisco Nexus
1000V Series switch.
Mod  Ports  Module-Type                 Model         Status
---  -----  --------------------------  ------------  ----------
1    0      Virtual Supervisor Module   Nexus1000V    active *
3    248    Virtual Ethernet Module     NA            ok
4    248    Virtual Ethernet Module     NA            ok
...
Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80   NA
...
Slots 1 and 2 are reserved for VSMs. New host VEMs begin at slot 3.
The output of the show module command shows the primary supervisor in slot 1, the high-availability standby supervisor in slot 2, and the very first ESX or ESXi host that has been
added to the Cisco Nexus 1000V Series switch instance in slot 3.
After a host has been added and the VEM has been successfully installed, the VEM appears as
a module on the VSM CLI. This appearance is similar to modules that are added to a physical
chassis.
Note
Slots 1 and 2 are reserved for VSMs. New host VEMs start from slot 3.
The show module vem map command shows the status of all VEMs as well as the universally unique identifier (UUID) of the host on which the VEM runs.
VSM-1# show module vem map
Mod  Status       UUID
---  -----------  ------------------------------------
3    powered-up   34343937-3638-3355-5630-393037415833
4    powered-up   34343937-3638-3355-5630-393037415834
The show module vem map command shows the status of all VEMs, as well as the universally
unique identifier (UUID) of the host on which the VEM runs. This command can be used to
verify that the VEM is installed and tied to the UUID of the host.
After the VSM connects properly, you should see output that
shows the creation of a vDS. The vDS also appears in the
vCenter Inventory networking pane.
After the VSM connects to vCenter, you see the vNetwork Distributed Switch (vDS) appear in
the vCenter Networking inventory panel. You should see the port groups that you configured
for control, management, and packet traffic. (These port groups are required to provide
connectivity between vCenter, the VSM, and the VEMs.) Some other port groups are created
by default. One is the Unused_Or_Quarantined DVUplinks port group, which connects to
physical NICs. Another is the Unused_Or_Quarantined VMData port group, which faces the
VM.
Num Ports   Used Ports   Configured Ports   MTU    Uplinks
128         8            128                1500   vmnic5
128         3            128                1500   vmnic4

Num Ports   Used Ports   Configured Ports   MTU    Uplinks
256         40           256                1500   vmnic0
You can verify the VEM status on the CLI of the ESX or ESXi host. To perform the
verification, open a connection to the host and log in, using the correct credentials.
At the CLI of the host, run the vem status command. This command verifies that the VEM
module is loaded and that the VEM Agent is running on this host. The command also confirms
which interface is being used as the uplink.
LTL   VSM Port   Admin   Link   State   PC-LTL   SGID   Vem Port
17    Eth3/1     UP      UP     F/B*    0               vmnic0

LTL   VSM Port   Mode   VLAN   State   Allowed Vlans
17    Eth3/1     T             FWD     11-14,190
The following commands can be used on the CLI of the ESX or ESXi host to provide further
verification of the uplink interfaces on that host:
2-120
vemcmd show port: This command verifies the VEM port that is used on the host and the
Cisco Nexus 1000V Series switch. The command provides details of the port state and
whether any issues need to be highlighted.
vemcmd show port vlans: In the figure, the vemcmd show port command identified that
the uplink port was blocked. You can use this command to verify which VLANs are carried
across the uplink and whether any VLANs are missing. In the figure, the uplink is blocking
for VLAN 1 because it is being used only at one end of the connection.
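Both commands are run directly in the shell of the ESX or ESXi host. A minimal usage sketch follows; the grep filter is simply an optional way to narrow the output to the uplink of interest (vmnic0 is taken from the figure).
~ # vemcmd show port
~ # vemcmd show port vlans
~ # vemcmd show port | grep vmnic0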
(The figure shows partial output of the vemcmd show card command on the host, including the card UUID 44454c4c-5400-104a-8036-c7c04f43344a.)
The vemcmd show card command is used to verify that the parameters on the ESX or ESXi host for the VEM match the configuration on the Cisco Nexus 1000V Series switch. Among the parameters to check are the following:
Card name
Card domain ID
Card slot
Note: The next two figures show some of the output of this command.
The output in the figure is a continuation of the output from the previous figure.
The output in the figure is a continuation of the output from the previous two figures.
Use the show port-profile name name command to verify the profile configuration and
parameters. From this command, you can check which switchport mode this port profile is
using. You can also verify which VLANs are being used and which interfaces are assigned to
this port profile.
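As a minimal sketch of such a profile on the VSM (the profile name pod1VMdata is used on the following pages, but the VLAN number and the access-mode settings shown here are illustrative assumptions):
VSM-1(config)# port-profile type vethernet pod1VMdata
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 190
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# end
VSM-1# show port-profile name pod1VMdata
The show port-profile name output then lists the switchport mode, the VLANs, and the virtual Ethernet interfaces that are currently assigned to the profile.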
To add a VM to a VSM port group, right-click the VM and choose Edit Settings.
The pod1VMdata port profile has been created and enabled, and now
becomes available in the port group configuration of VMware vCenter.
The VMware administrator needs to assign proper port profiles to VMs
to achieve desired connectivity.
After the port profile has been created on the Cisco Nexus 1000V Series switch and pushed
down to vCenter, it is available for the VMware administrator to use when creating or
modifying VMs.
The figure shows that a port profile pod1VMdata has been created on the Cisco Nexus 1000V
Series switch. As the figure shows, the VMware administrator is modifying the VM properties.
In the Network Connection section, the administrator has chosen the pod1VMdata port profile
as a port group. This port group can be applied to the VM.
The figure describes the process of verifying the VMware data port profile configuration. From
within VMware vSphere, choose Inventory > Networking within the navigation pane. The
network inventory objects, including the newly created port profile pod1VMdata, appear. The
vSphere Recent Tasks window shows that the creation of the new port profile has been
completed.
Uplink port profiles are created on the Cisco Nexus 1000V Series switch and are pushed to
vCenter so that the virtual switch can provide external connectivity for the VMs that reside on
that host.
Use the same verification method that you used for VM port profiles to verify that the port
profile is available for the VMware administrator to use.
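For comparison with the VM-facing profile, an uplink profile is created with type ethernet. The sketch below uses an assumed profile name; the allowed VLAN range is taken from the earlier verification output.
VSM-1(config)# port-profile type ethernet pod1Uplink
VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# switchport trunk allowed vlan 11-14,190
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled
In a real deployment, the VLANs that carry control, management, and packet traffic are also normally marked as system VLANs in the uplink profile so that they are forwarded before the VEM is fully programmed.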
Summary
This topic summarizes the key points that were discussed in this lesson.
The status of the VSM and VEMs in the Cisco Nexus 1000V switch
virtual chassis can be verified at the CLI on the switch.
To verify the VSM creation and the VEM installation on the ESX or ESXi
host, check the vCenter GUI and the CLI of the host.
When creating or modifying VMs on the vCenter Server, verify that the vDS
port groups created on the Cisco Nexus 1000V switch are available for
use. On the Cisco Nexus 1000V switch CLI, use the show port-profile
command to verify which VMs are using which port profiles.
Module Summary
This topic summarizes the key points that were discussed in this module.
Features such as VDCs on the Cisco Nexus 7000 Series switch and NIV are forms of network device virtualization.
Storage virtualization is the ability to present virtual storage to hosts
and servers and map the storage request to physical storage at the
back-end.
Server virtualization is the ability to host VMs on physical hosts for
increased utilization and scalability in the data center.
The Cisco Nexus 1000V switch is a software-based switch developed by Cisco in collaboration with VMware to give network administrators increased visibility and control at the virtual access layer.
The Cisco Nexus 1000V switch comprises a VSM and VEM. The VEM
resides on the physical ESX or ESXi host and can be verified at the CLI
and within vCenter.
Network device virtualization includes features such as virtual device contexts (VDCs) and Network Interface Virtualization (NIV). VDCs are used on the Cisco Nexus 7000 Series Switch to consolidate onto fewer physical switches while still retaining the separation of domains. The Cisco Nexus 7000 Series Switch can be partitioned into a maximum of four VDCs. Each VDC is a logical switch that resides on one physical switch. This feature allows customers to consolidate various physical switches onto one physical infrastructure for greater flexibility and a reduced footprint in the data center. Each VDC runs its own processes and has its own configuration. The controlling VDC is VDC1, which is the default VDC.
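As a brief illustration (the VDC name and the interface range are assumptions for the example), a nondefault VDC is created from the default VDC, interfaces are allocated to it, and the administrator then moves into it:
N7K-1(config)# vdc Research
N7K-1(config-vdc)# allocate interface ethernet 2/1-4
N7K-1(config-vdc)# end
N7K-1# switchto vdc Research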
The Cisco Nexus 2000 Fabric Extender is an external module that can be connected to a Cisco
Nexus 7000 or 5000 Series Switch for greater port density without increased management
overhead. The parent switch makes all the switching decisions, and the Cisco Nexus 2000
Fabric Extender provides the port count. Through the use of NIV and the VN-Link technology,
the parent switch can recognize the remote ports on which traffic is sourced. From this
determination, the switch can make the correct policy and switching decision.
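A minimal sketch on a Cisco Nexus 7000 Series parent switch follows (the FEX number and the fabric interface are assumptions for the example); the feature set is installed and enabled first, and a fabric interface is then associated with the FEX:
N7K-1(config)# install feature-set fex
N7K-1(config)# feature-set fex
N7K-1(config)# interface ethernet 1/1
N7K-1(config-if)# switchport mode fex-fabric
N7K-1(config-if)# fex associate 100
N7K-1(config-if)# end
N7K-1# show fex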
Virtualization provides a process for presenting a logical grouping or subset of computing
resources. Storage virtualization is used to create a common pool of all storage resources and to
present a subset of those resources to the host or server. Storage virtualization is basically a
logical grouping of logical unit numbers (LUNs).
Three main storage-system virtualization options are available: host-, array-, and network-based virtualization.
Several challenges exist inside the data center: management of physical resources, space
constraints, cabling, power, and cooling, to name a few. To help reduce some of those
challenges, servers can be moved from a physical environment to a virtual environment.
Available technologies include VMware ESX or ESXi servers, Microsoft Hyper-V, and Linux
enterprise virtualization. By virtualizing servers, companies can get better utilization of
2012 Cisco Systems, Inc.
2-129
physical equipment and reduce their physical footprint, cabling, and power and cooling
requirements. Virtual servers have the same capabilities and requirements as physical servers
but can be managed through a central management infrastructure such as VMware vSphere
vCenter Server.
To provide connectivity for the virtual server infrastructure, virtual switches can be used inside
the physical host on which the virtual servers reside. VMware provides a software-based virtual
Ethernet switch (vSwitch) or a vNetwork Distributed Switch (vDS). The vSwitch is a
standalone switch that is managed individually. The vDS is a distributed switch that spans
multiple hosts. This switch is managed through vCenter Server.
One disadvantage of the VMware implementation is that VMware administrators manage the
virtual switch layer. These technicians do not always have a full understanding of network
requirements and policies. In addition, the VMware-based switch does not necessarily have all
the same features and policies as a normal network-based switch.
Cisco has worked with VMware to develop a software-based switch that can take advantage of
the vDS architecture. The Cisco Nexus 1000V Series Switch provides a full Cisco Nexus
Operating System (NX-OS) CLI, with all the features of a regular network-based switch, and is
managed by the network team. Therefore, the network team has full visibility into the virtual access layer. The team can apply policies to the virtual server port as well as to the physical port of the host on which the virtual server resides.
The Cisco Nexus 1000V Series Switch comprises one or two Virtual Supervisor Modules
(VSMs) and one or more Virtual Ethernet Modules (VEMs). The VSM is installed on either a
virtual machine (VM) or a Cisco Nexus 1010 appliance. The VEM is installed on the ESX or
ESXi host. Port profiles are created on the Cisco Nexus 1000V Series Switch to provide network policies that the virtual servers can use. These port profiles are known as port groups on the vCenter Server. The VMware administrator applies these port groups to VMs that are created or modified. The port profiles are pushed from the Cisco Nexus 1000V Series Switch to vCenter by using the secure connection that is established between the devices.
Verification of VSM and VEM installation can be performed on the Cisco Nexus 1000V Series
Switch, vCenter, and the CLI of the ESX or ESXi host. Port profile configuration can be
verified on the Cisco Nexus 1000V Series Switch, and port group creation can be verified on
the vCenter Server.
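For quick reference, the verification commands that were used in this module are listed below; the first group runs on the VSM and the second group runs at the CLI of the ESX or ESXi host.
show module (lists the VSMs and VEMs in the virtual chassis)
show module vem map (shows the VEM status and the UUID of each host)
show port-profile name <name> (shows the port-profile configuration and assigned interfaces)
vem status (confirms that the VEM is loaded, the VEM Agent is running, and which uplinks are in use)
vemcmd show port (shows the state of the VEM ports)
vemcmd show port vlans (shows the VLANs carried on each port)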
References
For additional information, refer to these resources:
Cisco Systems, Inc. Cisco Nexus 7000 Series NX-OS Virtual Device Context Configuration Guide. San Jose, California, October 2011. http://www.cisco.com/en/US/docs/switches/datacenter/sw/nxos/virtual_device_context/configuration/guide/vdc_nx-os_cfg.html
Cisco Systems, Inc. Cisco Nexus 2000 Series Fabric Extender Software Configuration Guide. San Jose, California, April 2012. http://www.cisco.com/en/US/docs/switches/datacenter/nexus2000/sw/configuration/guide/rel_6_0/b_Configuring_the_Cisco_Nexus_2000_Series_Fabric_Extender_rel_6_0.html
Cisco Systems, Inc. Cisco Nexus 5000 Series NX-OS Layer 2 Switching Configuration Guide, Release 5.1(3)N1(1), Configuring the Fabric Extender. http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_n1_1/b_Cisco_n5k_layer2_config_gd_rel_513_N1_1_chapter_010100.html
Cisco Systems, Inc. Cisco Nexus 1000V Series Switches Configuration Guides. http://www.cisco.com/en/US/products/ps9902/products_installation_and_configuration_guides_list.html
Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1) How many VDCs are supported on a Cisco Nexus 7000 Series switch? (Source: Virtualizing Network Devices)
A) 4
B) 3
C) 2
D) 1

Q2) Which two features can be configured only in the default VDC? (Choose two.) (Source: Virtualizing Network Devices)
A) VLANs
B) CoPP
C) VDC resource allocation
D) VRFs
E) management IP address

Q3) Which command must you enable in the default VDC on a Cisco Nexus 7000 Series switch to enable a Cisco Nexus 2000 Fabric Extender to be attached and configured in a nondefault VDC? (Source: Virtualizing Network Devices)
A) feature-set fex
B) feature fex
C) install feature fex
D) install feature-set fex

Q4) Which command do you use to move from the default VDC to a nondefault VDC on the Cisco Nexus 7000 Series switch? (Source: Virtualizing Network Devices)
A) changeto vdc
B) moveto vdc
C) skipto vdc
D) switchto vdc

Q5) How many days are available in the grace license period on the Cisco Nexus 7000 Series switch? (Source: Virtualizing Network Devices)
A) 180
B) 120
C) 90
D) 60

Q6) In which mode are interfaces that connect to the Cisco Nexus 2000 Fabric Extender listed in the show interface brief command on a Cisco Nexus 5000 or 7000 Series switch? (Source: Virtualizing Network Devices)
A) fabric
B) access
C) trunking
D) host

Q7)
A) 14
B) 18
C) 1, 3, 5, and 7
D) 1 and 2

Q8) Which feature do you configure on storage arrays to provide basic LUN-level security? (Source: Virtualizing Storage)
A) LUN masking
B) LUN mapping
C) LUN zoning
D) LUN access control

Q9) Which form of server virtualization simulates some, but not all, of the hardware environment? (Source: Virtualizing Server Solutions)
A) partial virtualization
B) paravirtualization
C) full virtualization
D) host virtualization

Q10) What defines a logical group of ports with the same configuration as on the VMware vCenter Server? (Source: Using the Cisco Nexus 1000V Series Switch)
A) NIC teaming
B) vSwitch
C) port groups
D) vDS

Q11) How many VSMs can be installed in the Cisco Nexus 1000V virtual chassis? (Source: Using the Cisco Nexus 1000V Series Switch)
A) 1
B) 2
C) 3
D) 4

Q12) How many VEMs does the Cisco Nexus 1000V virtual chassis support? (Source: Using the Cisco Nexus 1000V Series Switch)
A) 128
B) 64
C) 32
D) 16

Q13)
A) port profiles
B) network connection policy
C) port groups
D) group policies

Q14) What is the maximum number of network ports that can be assigned to a VMware vSwitch? (Source: Using the Cisco Nexus 1000V Series Switch)
A) 64
B) 32
C) 16
D) 8

Q15) Which method is preferred for installing the Cisco Nexus 1000V Series switch? (Source: Verifying Setup and Operation of the Cisco Nexus 1000V Series Switch)
A) OVA file
B) OVF file
C) manual installation
D) ISO file

Q16) Which three port groups need to be configured on vCenter to support the Cisco Nexus 1000V Series switch? (Source: Verifying Setup and Operation of the Cisco Nexus 1000V Series Switch)

Q17) Which is the first slot in which a VEM is installed on the Cisco Nexus 1000V virtual chassis? (Source: Verifying Setup and Operation of the Cisco Nexus 1000V Series Switch)
A) 3
B) 2
C) 3 or 2
D) 1

Q18) On which two devices can you install a Cisco Nexus 1000V Series switch? (Choose two.) (Source: Verifying Setup and Operation of the Cisco Nexus 1000V Series Switch)
A) desktop computer
B) standalone server
C) VM
D) Cisco Nexus 1010 appliance
E) Windows 2008 Server
Module Self-Check Answer Key
Q1)
Q2) B, C
Q3)
Q4)
Q5)
Q6)
Q7)
Q8)
Q9)
Q10)
Q11)
Q12)
Q13)
Q14)
Q15)
Q16)
Q17)
Q18) C, D