CCDP
Two network models will be analyzed in this chapter: (1) the Cisco Hierarchical Network model, a classic model with a long history in Cisco instruction; and (2) the Cisco Enterprise Architecture model, an expanded and evolved successor to that classic design.
Real-time communications
Mobility services
Storage services
Application delivery
Management services
Virtualization technology
Transport services
The top layer forms the middleware and applications platform, which
includes the following:
Commercial applications
In-house developed applications
Software as a service (SaaS)
Composite applications:
o Product lifecycle management (PLM)
o Customer relationship management (CRM)
o Enterprise resource planning (ERP)
o Human capital management (HCM)
o Supply chain management (SCM)
o Procurement applications
o Collaboration applications (e.g., Instant Messaging, IP
contact center, and video delivery)
Functionality
Supports Enterprise operational requirements
Scalability
Supports expansion and growth of organizational tasks by separating functions into layers and components
Facilitates mergers and acquisitions
Modularity
Hierarchical design that allows network resources to be easily
added during times of growth
Availability of services from any location in the Enterprise, at
any time.
The SONA network is built from the ground up with redundancy and
resiliency to prevent network downtime. The goal of SONA is to
provide high performance, fast response times, and throughput by
ensuring Quality of Service (QoS) on an application-by-application
basis. The SONA network is configured to maximize the throughput
of all critical applications, such as voice and video. SONA also
provides built-in manageability, configuration management,
performance monitoring, fault detection, and analysis tools, in
addition to an efficient design with the goals of reducing the total cost
of ownership (TCO) and maximizing the company’s existing
resources when the application demands increase.
PPDIOO Lifecycle Methodology
Availability
Scalability
Performance
Reliability
Security
1. Prepare
2. Plan
3. Design
4. Implement
5. Operate
6. Optimize
The network lifecycle does not necessarily move through these six phases in the strict order shown in Figure 1.2 above; it is an iterative process, and the flow can be modified based on changing technologies, budget, infrastructure, business needs, or business structure. For example, after the Implement phase, the network designer might need to go back to the Plan phase or the Design phase and make changes at that level. Unplanned actions can happen, especially in the Operate phase. The following sections offer a more detailed description of each phase in the PPDIOO lifecycle:
Operate phase: The Operate phase is the final proof that the network design was implemented properly. Performance monitoring, fault detection, and operational parameters will be confirmed in this phase. It also provides the data used for the final phase, Optimize.
The next section will focus on the primary goals of a network designer
in terms of the Design phase and will analyze some of the design
methodologies used in the PPDIOO process.
Like the other phases in the PPDIOO lifecycle, the Design phase is
based on the requirements of the company as they align with the
technical requirements. The Design phase includes the following
features:
High availability
Assurance of redundancy
Failover and fallback mechanisms, both at the software level
and the hardware level under network-enabled devices
Scalability (i.e., the ability to grow the project based on future
growth models)
Security
Performance models and goals
These features can also be considered the goals of the network design
phase. In this particular phase the team involved in the design process
might request input from different areas of the company, from security
professionals, or from various department leaders. The information
gathered will be compiled, logical and physical diagrams will be
created, and analysis and reports will be generated based on the
conclusions. The initiated project plan will be modified, updated, and
eventually finalized during the Design phase, as the next phase
involves implantation and no more modifications should be made to
the plan during that phase.
The next step is to see how the identified network applications and
services map to the organization’s goals. These goals must align with
the IT infrastructure, and they should include improving customer
support in the case of Internet service providers (ISPs) or improving
service desk support if internal users are served. Among the objectives
that must be analyzed in this phase are decreasing costs and increasing
competitiveness in a specific field or industry.
Budget constraints
Personnel constraints (the Prepare, Plan, and Design phases might have fewer resources allocated to them compared to the Implement and Operate phases)
Organizational policy constraints
Security policy constraints (open-standards solutions may be preferred to proprietary solutions, such as EIGRP)
Need for external contractors and consultants
Scheduling constraints (i.e., timeframe)
All of the tools presented above can be used in the third step of the
information gathering phase, which is performing traffic analysis. In
this phase the designer should investigate the following:
Layer 3 topology
Layer 2 topology
Network services
Applications
Network modules
The next step is to isolate the network services and map them onto a
separate document that should include the following:
The fourth aspect includes the applications that run on the network:
Backbone
Network management
PSTN access
Corporate Internet
Public access
WAN
Internal server farm
LAN (Access Layer)
The next step is identifying the components and properties for each
network device (e.g., router, switch, firewall, etc.), such as the
following:
Device model
CPU/memory
Resource utilization
IOS version
Device configuration
Routing tables
Interfaces
Modules/slots
Performance
Functionality
Flexibility
Capacity
Availability
Scalability
The network designer’s goal in this phase should be to develop a
systematic approach that takes into consideration the organization’s
needs, goals, policies and procedures, technical goals and constraints,
and the existing and future network infrastructure. This includes
physical models, logical models, and functional models.
The best approach in this phase, and the one recommended by Cisco
for a medium-sized network to a large Enterprise Campus design, is
the top-down approach. Using this approach presents an overview of
the design before getting down to the design details. This entails
beginning with the Application Layer (Layer 7) of the OSI model and
then moving down through the Presentation Layer, the Session Layer,
the Transport Layer, the Network Layer, the Data-link Layer, and,
finally, the Physical Layer (Layer 1).
Figure 1.7 starts at the top with the applications and services, which
includes the Application, Presentation, and Session Layers. Based on
the requirements for the applications and the way they match the
organization’s goals, a network infrastructure design and an
infrastructure services design to meet the application requirements of
the organization should be applied. This includes the data, the types of
traffic and services, and what types of design and network services
will meet the needs of those applications.
Once the goals are met, the network should be modularized, taking a
modular approach and including the Core, Distribution, and Access
Layers of the network, the data center, the server farm, the branches,
and the Internet connectivity layer. Next, apply the decisions made for
the infrastructure and services to different modular areas of the
network by dealing with certain segments of the network at a time.
The final steps of the design process within the PPDIOO lifecycle
include the following:
A pilot site is a live location that serves as a test site before the
solution is deployed. This is a real-world approach to discovering
problems before deploying the network design solution to the rest of
the internetwork. A pilot network is used to test and verify the design
before the network is implemented or launched. The design should
also be tested on a subset of the existing network infrastructure. The
pilot test might be done within a particular module or a particular
building or access area before extending the network to other areas.
Successful testing will prove that the Prepare, Plan, and Design phases are on target and will facilitate moving on to the Implement phase. Sometimes a success in this step concludes the network designer's job; the designer then hands over the project to the personnel or outside consultants who will handle the implementation of the hardware and software solutions.
A failure in this phase does not mean the entire project has failed. It simply means that some corrections must be made to the design, after which the prototype/pilot test must be repeated until it succeeds. Any failure during testing requires returning to the iterative process, correcting the planning, preparation, or design aspects, and repeating the pilot/prototype test until any weaknesses that might negatively affect the implementation process have been eliminated.
1. Introduction
3. Existing Infrastructure
3.5 Recommendations
4. Intelligence Services
4.1 Applications
4.2 Services
4.3 Analysis
4.4 Recommendations
5. Solution Design
5.4 Recommendations
6. Prototype Network
6.3 Recommendations
7. Implementation Plan
This model was created so that the construction of the IIN would be
easier to understand. Cisco has always tried to make efficient and
cost-effective networks that have a modular structure so they can be
easily divided into building blocks. The modular network design
facilitates modifications in certain modules, after implementation, and
makes it easy to track faults in the network.
These three layers might sometimes collapse into each other to make
networking easier, especially in small networks. For example, a
network might have two real layers: an Access Layer and a collapsed
Distribution and Core Layer. Although the Distribution and Core
Layers may be considered united in this scenario, there is a clear
difference between the functionalities of the Distribution Layer and
the functionalities of the Core Layer within the compressed layer.
The Core Layer is the backbone of the network and its main purpose
is to move data as fast as possible through Core Layer devices. The
design purpose in the Core Layer is providing high bandwidth and
obtaining a low overhead. The Core Layer contains Layer 3 switches
or high-speed routers, which are used to obtain wire speed. For
example, if the Core Layer devices have 10 Gbps interfaces, the
devices should send data at 10 Gbps without delay on those interfaces.
The Access Layer connects users to the network using Layer 2 and
Layer 3 switches. This layer is often called the desktop or workstation
layer because it connects user stations to the network infrastructure.
The Access Layer also connects other types of devices, including servers, printers, and access points.
High speed
Reliability and availability
Redundancy
Fault tolerance
Manageability and scalability
No filters, packet handling, or other overhead
Limited, consistent diameter
Quality of Service (low latency)
High speed refers to ports operating close to true wire speed on the Core Layer devices, with minimal delay. Reliability, another important metric, defines how often equipment functions within normal parameters. Redundancy and fault tolerance are intertwined, and they influence the recovery time when a network device or service stops functioning. Because of redundancy technologies, the network often comes back up seamlessly (transparently), so when something crashes in the network, users are not impacted and can continue working.
Note: The QoS features presented above are not specific to the
Core Layer. Instead, they are generic concepts that are applied to all
three layers and are presented in this section as a reference.
Unlike the Core Layer, the Distribution Layer includes features that involve some kind of traffic processing, since this is where policies are implemented in a network (for example, which users can access which resources from outside the network). This is also where many of the security features are implemented in response to the growing number of attacks from the Internet.
The Distribution Layer shares some common features with the Core
Layer, like QoS techniques and redundancy features. Except for these
commonalities, the Distribution Layer performs a unique and
completely different set of functions, such as the following:
Enterprise Campus
Enterprise Edge
Enterprise WAN
Enterprise Data Center
Enterprise Branch
Enterprise Teleworker
Service Provider Edge
The following sections will briefly describe each module, while the
rest of this book will cover all of the technologies used in each of
these building blocks.
Generally, the services mentioned above will reside on servers that are
linked to different switches for full redundancy, load balancing, and
load sharing. They might also be cross-linked with Enterprise Campus
backbone switches to achieve high availability and high reliability.
The Enterprise Edge module (see Figure 1.13 below) might consist of
particular submodules. Depending on the particular network, one or
more of these submodules might be of interest:
E-commerce
Internet and DMZ
Remote access and VPN
Enterprise WAN
The following best practices should be taken into consideration when
designing the Enterprise Edge module:
DMZ Submodule
Web servers
Application servers
Firewalls
Intrusion Detection Systems (IDSs)
Intrusion Prevention Systems (IPSs)
Database servers
Internet Submodule
Firewalls
Internet routers
FTP servers
HTTP servers
SMTP servers
DNS servers
Firewalls
VPN concentrators
Dial-in access concentrators
IDSs/IPSs
WAN Submodule
MPLS
Metro Ethernet
Leased lines
SONET and SDH
PPP
Frame Relay
ATM
Cable
DSL
Wireless
Enterprise Branch
o Site-to-site VPNs
o Enterprise Data Center
High-speed LAN
Data Center management
Enterprise Teleworker
Remote access VPNs
Chapter 1 – Summary
Functionality
Scalability
Availability
Performance
Manageability
Efficiency
Network applications
Network services
Business goals
Constraints imposed by the customer
Technical goals
Constraints imposed by technical limitations
Frame Relay
ATM
Point-to-point leased line
SONET and SDH
Cable modem
Digital subscriber line (DSL)
Wireless bridging
Enterprise locations are supported via the following previously
described modules:
Enterprise Branch
Enterprise Data Center
Enterprise Teleworker
a. Campus
b. Distribution
c. Access
d. Internet
e. Core
f. WAN
a. Access
b. Distribution
c. Core
b. False
a. Distribution Layer
b. Campus Layer
c. Core Layer
d. Access Layer
a. Physical
b. Data-link
c. Network
d. Transport
e. Session
f. Presentation
g. Application
a. Performing QoS
7. SONA represents a:
a. Security feature
b. Architectural framework
c. Testing procedure
a. Plan
b. Design
c. Operate
d. Implement
a. Budget constraints
b. Security policies
c. Employee feedback
d. Network manageability
e. Business needs
a. Number of users
b. Budget
c. Network size
d. Network bandwidth
a. NetFlow
b. CiscoWorks
c. Wireshark
d. CDP
a. CiscoWorks
b. Cisco Discovery
c. NetFlow
d. SDM
c. SONA
a. Distribution Layer
b. E-commerce
c. WAN
d. Internet connectivity
e. Remote access
a. Data center
b. Network access
c. DMZ
d. Network management
19. What is the highest level of Internet
connectivity?
a. ATM
b. CEF
c. Frame Relay
d. EIGRP
Chapter 1 – Answers
1 – b, e
2 – d
3 – b
4 – c
5 – b
6 – c
7 – b
8 – a
9 – a, c, d
10 – e
11 – b
12 – b
13 – c
14 – a
15 – d
16 – b
17 – a
18 – c
19 – b
20 – a, c
This chapter will begin by analyzing the design of the Enterprise Campus network infrastructure in terms of Layer 2 and Layer 3 best practices, and it will continue with details on network virtualization, infrastructure services (with emphasis on Quality of Service), and Cisco IOS management capabilities.
Layer 2 Campus Infrastructure Best Practices
The Core Layer is the aggregation point for all of the other layers and
modules in the Enterprise Campus architecture. The main technology
features required inside the Core Layer include a high level of
redundancy, high-speed switching, and high reliability.
When this feature is activated, the standby Route Processor (RP) will
take control of the router after a hardware or software fault on the
active RP. Cisco NSF will continue to forward packets until route
convergence is complete, and SSO allows the standby RP to take
immediate control and maintain connectivity protocols (e.g., BGP
sessions).
EIGRP
OSPF
IS-IS
BGP
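These routing protocols can be made NSF-aware. A minimal configuration sketch, assuming a modular platform with redundant supervisors/route processors (the OSPF process number and exact command availability depend on the platform):

    redundancy
     mode sso               ! standby RP takes over immediately on failure
    !
    router ospf 1
     nsf                    ! Cisco NSF; packets keep forwarding during reconvergence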
Workstation misconfiguration
Malicious users
Wiring misconfiguration
EtherChannel Recommendations
EtherChannels are important when using STP because when all of the
physical links look like one logical link, STP will not consider them a
possible loop threat and will not shut down any link from that specific
bundle. Therefore, the links in an EtherChannel can be dynamically
load balanced, without interference from STP. Cisco switches support
two implementation options (protocols) when using EtherChannels:
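A minimal LACP bundle sketch (the interface range and port-channel number are illustrative, not values from the text):

    interface range GigabitEthernet1/0/1 - 2
     channel-protocol lacp
     channel-group 1 mode active   ! PAgP would use "desirable"/"auto" modes instead
    !
    interface Port-channel1
     switchport mode trunk         ! STP sees only this one logical link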
Managing Oversubscription
When the bandwidth demand on the Distribution Layer to Core Layer links starts to grow, oversubscription at the Access Layer must be controlled and key design decisions must be made. To accommodate the growing needs of the end-stations at the Access Layer, the obvious solution would seem to be increasing the number of uplinks between the Distribution Layer and the Core Layer; the problem is that this adds extra peer relationships, which lead to extra management overhead.
CEF operates on all of the multilayer switches in the figure above and it has a deterministic behavior. As packets originate from the left Access Layer switch and traverse the network toward the right Access Layer switch, they all use the same input value for the CEF hash, which implies using the same path. This has the negative effect of leaving some of the redundant links in the topology unused, a phenomenon known as CEF polarization.
Routing Protocols
Three methods can be used to quickly re-route around failed links and
failed multilayer switches to provide load balancing with redundant
paths:
HSRP (Cisco proprietary) and the standards-based VRRP are the most common and robust options. VRRP is usually used in a multi-vendor environment, when interoperability with other vendors' devices is needed. HSRP and GLBP can help achieve convergence of less than 800 ms, and their parameters can be tuned to achieve fast convergence if there is a node or link failure at the Layer 2 to Layer 3 boundary.
Analyzing Figure 2.10 below, there are two gateway routers that
connect to one Layer 2 switch that aggregates the network hosts:
Figure 2.10 – Hot Standby Router Protocol
As is the case for HSRP, the VRRP group presents a virtual IP address
to its clients. An interesting aspect about VRRP is that it can utilize, as
the virtual IP address, the same address that is on the master device.
For example, in the figure above, the virtual address is configured as
10.10.10.1, identical to the address on the Router 1 interface.
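A minimal sketch of both protocols on Router 1 (the interface name and the HSRP real address are illustrative assumptions). Note that HSRP requires a virtual IP distinct from any real interface address, while VRRP may reuse the master's own address, as in the figure:

    ! HSRP: virtual gateway 10.10.10.1, real interface address 10.10.10.2
    interface GigabitEthernet0/0
     ip address 10.10.10.2 255.255.255.0
     standby 1 ip 10.10.10.1
     standby 1 priority 110
     standby 1 preempt
    !
    ! VRRP alternative: virtual address identical to the master's own address
    ! interface GigabitEthernet0/0
    !  ip address 10.10.10.1 255.255.255.0
    !  vrrp 1 ip 10.10.10.1
    !  vrrp 1 priority 110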
Gateway Load Balancing Protocol
GLBP is the most unique of the FHRPs, as it not only achieves redundancy but also accomplishes load sharing. In addition, it makes using more than two gateway devices much easier.
Figure 2.12 below illustrates a GLBP configuration:
When the hosts send ARP requests for the virtual gateway address 10.10.10.4, the AVG answers in round-robin fashion with the virtual MAC addresses of the AVF machines: Router 1 responds to the first ARP request it receives with its own virtual MAC address, to the second with Router 2's virtual MAC address, and to the third with Router 3's virtual MAC address. The AVG can thus round robin the traffic over the available AVF devices. This simplistic round-robin balancing approach can be replaced in the GLBP configuration with other load balancing techniques.
Note: The AVG can also function as an AVF and it usually does so.
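A minimal GLBP sketch on Router 1, using the virtual gateway address 10.10.10.4 from the figure (the interface name is an illustrative assumption):

    interface GigabitEthernet0/0
     ip address 10.10.10.1 255.255.255.0
     glbp 1 ip 10.10.10.4
     glbp 1 priority 110
     glbp 1 preempt
     glbp 1 load-balancing round-robin   ! "weighted" and "host-dependent" also exist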
The next design involves using Layer 3 at the Access and Distribution
Layers in a routed network design model. This allows for faster
convergence and easier implementation because only Layer 3 links are
used. A routing protocol is used across the topology, such as EIGRP
or OSPF. The most important advantages of a full Layer 3
environment are as follows:
Simplified implementation
Faster convergence
Equal cost load balancing on all of the links
Eliminates the need for STP
FHRP configurations are not necessary
The convergence time to reroute around a failed link is around
200 ms, unlike the 700 to 900 ms in previous scenarios (Layer 2
to Layer 3 boundaries)
The one downside to this design is that it does not support VLANs spanning the Distribution Layer switches; overall, however, it is a best practice to keep the VLANs in isolated blocks at the Access Layer. Since both EIGRP and OSPF load share across equal cost paths, this scenario offers a convergence benefit similar to using GLBP. Some of the key actions to perform when using a full Layer 3 topology with EIGRP to the edge include the following (a configuration sketch follows the list):
Use EIGRP to the edge (to the Access Layer submodule, the
WAN submodule, the Internet submodule, and to other
submodules).
Summarize at the Distribution Layer to the Core Layer, like in a
traditional Layer 2 to Layer 3 design.
Configure all the edge switches as EIGRP stub nodes to avoid
queries sent in that direction.
Control route propagation to the edge switches using
distribution lists.
Set the appropriate hello and dead timers to protect against
failures where the physical links are active but the route
processing has terminated.
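A configuration sketch of these recommendations (the AS number, summary range, and interface names are illustrative assumptions):

    ! On each Access Layer (edge) switch:
    router eigrp 100
     network 10.0.0.0
     eigrp stub connected summary       ! stops queries from being sent to the edge
    !
    ! On the Distribution Layer switch, on the uplink toward the Core:
    interface TenGigabitEthernet1/1
     ip summary-address eigrp 100 10.128.0.0 255.255.0.0
     ip hello-interval eigrp 100 1      ! aggressive timers detect a dead neighbor
     ip hold-time eigrp 100 3           ! even when the physical link stays up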
Some of the key actions to perform when using OSPF to the edge
include the following:
Servers
Networks
Storage (Virtual SANs, Unified I/O)
Applications
Desktop (Virtual Desktop Infrastructure)
Virtual machines
Virtual switches
Virtual Local Area Networks
Virtual Private Networks
Virtual Storage Area Networks
Virtual switching systems
Virtual Routing and Forwarding
Virtual port channels
Virtual device contexts
Device contexts allow partitioning a single physical device into multiple virtual devices called contexts. A context acts as an independent device with its own set of policies. The majority of the features implemented on the real device are also functional on the virtual context. Some of the devices in the Cisco portfolio that support virtual contexts include the following:
The original 802.11 standard was defined in 1997 by the IEEE, and it
uses two different types of RF technologies operating in the 2.4 GHz
range:
Frequency Hopping Spread Spectrum (FHSS), which operates
only at 1 or 2 Mbps
Direct Sequence Spread Spectrum (DSSS), also operating at 1 or
2 Mbps
Collision Avoidance
Wireless Association
WLAN Topologies
Bridges
Repeaters
Mesh topologies
The WLAN mesh topology (see Figure 2.16 above) is the most sophisticated and most widely used wireless topology. When used in this type of topology, the AP can function as a repeater or as a bridge, as needed, based on the RF conditions. This technology allows designers to use wireless technologies to cover large geographical areas and ensures features such as:
Fault tolerance
Load distribution
Transparent roaming
Supplicant (client)
Authenticator (access point or switch)
Authentication server (Cisco ACS)
Note: The most commonly used EAP solutions are PEAP and EAP-
FAST for small business networks and EAP-TLS for large Enterprise
Network solutions.
SSID management
VLAN management
Access point association management
Authentication
Wireless QoS
Note: The APs and the WLCs exchange control messages over the
wired backbone network.
Layer 3 LWAPP tunnels are used between APs and WLCs; they carry control messages over UDP port 12223 and data messages over UDP port 12222. Cisco LWAPP access points can operate in six different modes:
Local mode
Remote Edge Access Point (REAP) mode
Monitor mode
Rogue Detector (RD) mode
Sniffer mode
Bridge mode
In RD mode, the AP monitors for rogue APs. The RD's goal is to see all of the VLANs in the network because rogue APs can be connected to any of these VLANs. The switch sends all of the rogue APs' client MAC address lists to the RD access point, which forwards these to the WLC to compare them with the MAC addresses of legitimate clients. If MAC addresses are matched, the controller knows that the rogue AP serving those clients is on the wired network.
Sniffer mode allows the AP to capture and forward all of the packets on a particular channel to a remote machine running packet capture and analysis software. These packets include timestamps, packet size, and signal strength information.
WLANs
Interfaces
Ports
Intra-controller roaming
Inter-controller roaming (Layer 2 or Layer 3)
Shaping
Policing
Congestion management
Congestion avoidance
Link efficiency mechanisms
The hardware queue on the interface always uses the FIFO method for
packet treatment. This mode of operation ensures that the first packet
in the hardware queue is the first packet that will leave the interface.
The only TX Ring parameter that can be modified on most Cisco
devices is the queue length.
WFQ is not the best solution in every scenario because it does not
provide enough control in the configuration (it does everything
automatically), but it is far better than the FIFO approach because
interactive traffic flows that generally use small packets (e.g., VoIP)
get prioritized to the front of the software queue. This ensures that
high-volume talkers do not use all of the interface bandwidth. The
WFQ fairness aspect also makes sure that high-priority interactive
conversations do not get starved by high-volume traffic flows.
Low
Normal
Medium
High
Congestion Avoidance
Shaping and policing are not the same technique, although many people think they are. Shaping controls the way traffic is sent by buffering excess packets. Policing, on the other hand, drops or re-marks (penalizes) packets that exceed a given rate. Policing might be used to prevent certain applications from consuming all of the connection resources on a fast WAN link, or to grant applications with well-defined bandwidth requirements only as many resources as they need.
AutoQoS is a Cisco IOS feature that uses a very simple CLI to enable Quality of Service for VoIP in WAN and LAN environments. This is a great feature to use on Cisco Integrated Services Routers (ISRs) (routers that integrate data and media collaboration features) because it provides many capabilities to control the transport of VoIP traffic. AutoQoS inspects the device capabilities and automatically enables LFI and RTP header compression (cRTP) where necessary. It is usually used in small- to medium-sized businesses that need to deploy IP Telephony quickly but do not have experienced staff to plan and deploy complex QoS features. Large companies can also deploy IPT using AutoQoS, but the auto-generated configuration should be carefully reviewed, tested, and tuned to meet the organization's needs.
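A minimal AutoQoS VoIP sketch on a WAN interface (the interface and bandwidth value are illustrative assumptions):

    interface Serial0/0/0
     bandwidth 768        ! AutoQoS bases its LFI/cRTP decisions on configured bandwidth
     auto qos voip        ! append the "trust" keyword to keep existing DSCP markings
    !
    ! Review the generated policy before tuning it:
    ! show auto qos interface Serial0/0/0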
Network Management
SNMP has evolved over the years and has now reached version 3 (SNMPv3). Network designers should insist that every environment use SNMPv3 rather than the older, unsecured SNMP versions (1 and 2) because of the advanced security features it provides. SNMP is used by network administrators and engineers to perform the following tasks (a sample SNMPv3 configuration follows the list):
Monitoring the performance of network devices
Troubleshooting
Planning scalable enterprise solutions and intelligent services
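A minimal SNMPv3 sketch with authentication and encryption; the group name, user, passwords, and NMS address are illustrative placeholders, not values from the text:

    snmp-server group NMS-GROUP v3 priv
    snmp-server user nmsadmin NMS-GROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123
    snmp-server host 10.1.1.50 version 3 priv nmsadmin
    snmp-server enable traps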
The SNMP agent is used to send and receive information from the
device to the Network Management Station (NMS) and the other way
around. To do that, different types of SNMP messages are used. The
NMS will run some kind of network management software (e.g.,
CiscoWorks) that retrieves and displays the SNMP information in a
Graphical User Interface (GUI) format. The displayed information is
used for controlling, troubleshooting, and planning.
The managed device contains SNMP agents and an MIB that stores all the information. Different types of messages are used to exchange information between the NMS and the managed (monitored) device, as shown in Figure 2.26 below:
Figure 2.26 – SNMP Messages
The first message is called the Get Request. This is sent to the managed device when the NMS wants to get a specific MIB variable from the SNMP agent that runs on that device. The Get Next Request message is used to return the next object in the list after the Get Request message returned a value. The Get Bulk message, introduced in SNMPv2, can be used to retrieve a big chunk of data (e.g., an entire table); it reduces the need for many Get Request and Get Next Request messages and thus the overhead on bandwidth utilization on the link.
The Set Request message is also sent by the NMS and is used to set an
MIB variable on the agent. The Get Response message is the response
from the SNMP agent to the NMS Get Request, Get Next Request, or
Get Bulk messages. A Trap is used by the SNMP agent to transmit
unsolicited alarms to the NMS when certain conditions occur (e.g.,
device failure, state change, or parameter modifications). Different
thresholds can be configured on the managed device for different
parameters (e.g., disk space, CPU utilization, memory utilization, or
bandwidth utilization) and Trap messages are sent when the defined
thresholds are reached.
SNMPv2 introduced another message called the Inform Request. This is similar to a Trap message in that it is sent by the managed device to notify the NMS of events. The major difference between these two message types is that Inform Request messages are acknowledged by the receiver, unlike Trap messages, which are not.
NetFlow
As shown in Figure 2.27 above, the NetFlow data export service is the
first layer of the three-tier NetFlow architecture. This is where the data
warehousing and data mining solutions occur and where accounting
statistics for traffic on the networking devices are captured, and it uses
UDP to export data in a three-part process:
Data switching
Data export
Data aggregation
The data is exported to the second tier, the NetFlow flow collector service, where servers and workstations accomplish data collection, data filtering, aggregation, data storage, and file system management using existing or third-party file systems. Network data analysis takes place at the third tier. At this level, network planning tools, overall network analysis tools, and accounting and billing tools can be used, and data can be exported to various database systems or Excel spreadsheets.
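A minimal sketch of the export tier on a router (the collector address and port are illustrative assumptions):

    interface GigabitEthernet0/1
     ip flow ingress                            ! capture flow statistics here
    !
    ip flow-export source Loopback0
    ip flow-export version 9
    ip flow-export destination 10.1.1.60 9996   ! flow collector (second tier)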
NBAR can work with Cisco AutoQoS, which is a feature in the Cisco
IOS that generates pre-defined policy maps for voice. This helps to
simplify the deployment and provisioning of QoS by leveraging the
existing NBAR traffic classification, which can be done with NBAR
discovery. AutoQoS works by creating a trust zone that allows
Differentiated Services Code Point (DSCP) markings to be used for
classification. If the optional “trust” keyword is used, the existing
DSCP marking can be used, but if the keyword is not specified,
NBAR will be used to mark the DSCP values.
Another feature that can be used with NBAR is QoS for the Enterprise
Network, which uses NBAR discovery to collect traffic statistics.
Based on that discovery, NBAR will generate policy maps (as
opposed to AutoQoS, which has predefined policy maps) and make its
bandwidth settings via Cisco suggestions on a class-by-class basis.
This is an appropriate solution for medium-sized companies and
Enterprise organizations’ branch offices, and it is based on the Cisco
best practice recommendations that come from the NBAR discovery
process. This is a two-phase process: the first phase involves an auto
discovery QoS command on the appropriate link, and the second
phase involves automatically configuring the link with the “auto qos”
command.
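A sketch of the two-phase process on a WAN link (the interface is an illustrative assumption; discovery should typically run long enough to observe representative traffic):

    ! Phase 1: NBAR-based discovery collects traffic statistics
    interface Serial0/0/1
     auto discovery qos
    !
    ! Phase 2: generate and apply the suggested class-by-class policy
    interface Serial0/0/1
     auto qos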
The most important parameters measured with the IP SLA feature are
delay and jitter. Delay represents the amount of time required for a
packet to reach its destination and jitter is the variation in delay. These
parameters are of great importance, especially in highly congested
WAN environments. Another important key design parameter is the
overall available bandwidth (i.e., throughput). This measures the
amount of data that can be sent in a particular timeframe through a
specific WAN area.
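A minimal UDP jitter probe sketch between two routers (the operation number, target address, port, and codec are illustrative assumptions):

    ! On the responder (far-end) router:
    ip sla responder
    !
    ! On the measuring router:
    ip sla 10
     udp-jitter 10.2.2.2 16384 codec g711alaw   ! measures delay, jitter, and loss
     frequency 60
    ip sla schedule 10 life forever start-time now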
Chapter 2 – Summary
PortFast
UplinkFast
BackboneFast
Loop Guard
Root Guard
BPDU Guard
UDLD
EtherChannels are important when using STP because when all of the
links look like one link, STP will not consider them a possible loop
threat and will not shut down any link from that specific bundle. The
links in an EtherChannel can be dynamically load balanced without
STP interfering in this situation.
Wireless clients
Access points
Network management
Network unification
Network services
Ports
Interfaces
WLANs
Wireless networks offer users mobility, where the users can physically
move throughout a campus. As the users move, their wireless clients
update their AP association to the most appropriate AP, based on
location. With Layer 2 roaming, the WLCs with which the APs
associate are in the same subnet. However, with Layer 3 roaming, the
APs associate with WLCs on different subnets.
When designing a wireless network, one of the first steps in the design
process is to conduct a Radio Frequency (RF) site survey, which will
provide the network designer with a better understanding of an
environment’s RF characteristics (e.g., coverage areas and RF
interference). Based on the results of the RF site survey, the network
designer can strategically position the wireless infrastructure devices.
a. Extended distribution
b. OSI model
e. Access Layer
a. Limited scalability
d. No convergence possibility
a. True
b. False
b. Cisco NetFlow
d. CEF polarization
a. PortFast
b. UplinkFast
c. Loop Guard
d. BPDU Guard
e. Root Guard
a. PortFast
b. UplinkFast
c. Loop Guard
d. BPDU Guard
e. Root Guard
a. PagP
b. LLDP
c. LACP
d. STP
a. LACP
b. Trunking
c. STP
d. CEF polarization
a. HSRP
b. VRRP
c. GLBP
a. HSRP
b. VRRP
c. GLBP
a. Preemption
b. IP SLA
c. Prevention
d. CEF polarization
a. True
b. False
a. 1 GHz
b. 2 GHz
c. 2.4 GHz
d. 3.6 GHz
e. 5 GHz
a. WEP
b. WPA
c. WPA2
d. Extended WPA
a. EAP-TLS
b. LEAP
c. EAP-FAST
d. DEAP
e. EAP-DDS
f. PEAP
a. IP SLA
b. CDP
c. OSPF
d. LACP
a. FIFO
b. PQ
c. CQ
d. CBWFQ
a. WFQ
b. LLQ
c. PQ
d. CQ
a. True
b. False
Chapter 2 – Answers
1 – d
2 – b, c, e
3 – b
4 – a
5 – e
6 – d
7 – b
8 – a, c
9 – d
10 – b
11 – c
12 – a
13 – a
14 – c
15 – c
16 – a, b, c, f
17 – a
18 – a
19 – c
20 – b
Chapter 3: Designing
Advanced IP Addressing
This chapter covers the following topics:
Importance of IP Addressing
Route summarization
A more scalable network
A more stable network
Faster convergence
Summarization
The goal is to take all of these networks and aggregate them into one single address that can be stored at the edge distribution submodule or at the Core Layer of the network. The first thing to understand when implementing a hierarchical addressing structure is the use of contiguous blocks of IP addresses. In this example, the addresses 192.100.168.0 through 192.100.175.0 are used:
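Worked out in binary, the third octets 168 (10101000) through 175 (10101111) differ only in their last three bits, so the eight /24 networks share their first 21 bits and collapse into the single summary 192.100.168.0/21 (mask 255.255.248.0). As a minimal sketch, the aggregate could be anchored locally with a summary static route (the Null0 technique is one common option, not the only one):

    ! 192.100.168.0/24 through 192.100.175.0/24 summarize to 192.100.168.0/21
    ip route 192.100.168.0 255.255.248.0 Null0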
The internal private addressing will use the popular 10.0.0.0/8 range.
Within the organization’s domain, two separate building
infrastructures (on the same campus or in remote buildings) will be
aggregated using the 10.128.0.0/16 and 10.129.0.0/16 ranges.
210.22.10.0/24
210.22.9.0/24
210.22.8.0/24
The formula for calculating the number of subnets is 2^s, where "s" is the number of borrowed subnet bits. In Figure 3.3 above, the network expanded from a /16 network to a /24 network by borrowing 8 bits. This means 2^8 = 256 subnets can be created with this scheme.
The formula for calculating the number of hosts that can exist in a particular subnet is 2^h - 2, where "h" is the number of host bits. Two hosts are subtracted from the 2^h formula because the all-zeros host portion of the address represents the major network itself and the all-ones host portion of the address represents the broadcast address for the specific segment, as illustrated below:
172.16.3.32/27
172.16.3.64/27
The /27 subnets are suitable for smaller networks and can accommodate the number of machines in those areas. The number of hosts that can be accommodated is 2^5 - 2 = 30.
A subnet might be needed for the point-to-point link that will connect
two network areas, and this can be accomplished by further subnetting
one of the available subnets in the /27 scheme, for example
172.16.3.96/27. This can be subnetted with a /30 to obtain
172.16.3.100/30, which offers just two host addresses: 172.16.3.101
and 172.16.3.102. This scheme perfectly suits the needs for the point-
to-point connections (one address for each end of the link). By
performing VLSM calculations, subnets that can accommodate just
the right number of hosts in a particular area can be obtained.
All of the other addresses are public addresses that are allocated to
ISPs or other point of presence nodes on the Internet. ISPs can then
assign Class A, B, or C addresses to customers to use on devices that
are exposed to the Internet, such as:
Web servers
DNS servers
FTP servers
Other servers that run public-accessible services
No Internet connectivity
Only one public address (or a few) for users to access the Web
Web access for users and public-accessible servers
Every end-system has a public IP address
The most unlikely scenario is the one in which every end-system is publicly accessible from the global Internet. This is a dangerous situation because the entire network is exposed to Internet access, which implies high security risks. To mitigate these risks, strong firewall protection policies must be implemented in every location. In addition to the security issues, this scenario is also ineffective because many IP addresses are wasted, which is very expensive. All of these factors make this scenario one to avoid in modern networks.
The two most common solutions from the scenarios presented above
are as follows:
First, in the figure above, assume that there is some kind of Internet
presence in the organization that offers services either to internal users
in the Access Layer submodule or to different public-accessible
servers (e.g., Web, FTP, or others) in the Enterprise Edge module.
Regardless of what modules receive Internet access, NAT is run in the
edge distribution submodule to translate between the internal
addressing structure used in the Enterprise Campus and the external
public IP addressing structure. NAT mechanisms can also be used in
the Enterprise Edge module.
Access Layer
Distribution Layer
Core Layer
Server farm
Address Planning
Corporate servers
Network management workstations
Standalone servers in the Access Layer submodule
Printers and other peripheral devices in the Access Layer
submodule
Public-accessible servers in the Enterprise Edge module
Remote Access Layer submodule devices
WAN submodule devices
Role-Based Addressing
X = closet numbers
Y = VLAN numbers
Z = host numbers
Network designers might not always have the luxury of using the
summarizable blocks around simple octet boundaries and sometimes
this is not even necessary, especially when some bit splitting
techniques would better accommodate the organization and the role-
based addressing scheme. This usually involves some binary math,
such as the example below:
172.16.aaaassss.sshhhhhh
The first octet is 172 and the second octet is 16. The "a" bits in the third octet identify the area and the "s" bits identify the network subnet or VLAN. Six bits are reserved for the hosts in the fourth octet. This offers 2^6 - 2 = 62 hosts per VLAN or subnet (two host addresses are reserved: the all-zeros network address and the all-ones broadcast address).
This logical scheme will result in the following address ranges, based
on the network areas:
Subnet calculations should be made to ensure that the right type of bit
splitting is used to represent the subnet and VLANs. Remember that a
good summarization technique is to take the last subnet in every area
and divide it so that the /30 subnet can be used for any WAN or point-
to-point links. This will maximize the address space so for each WAN
link there will be only two addresses with a /30 or .252 subnet mask.
Although the goal with IPv6 is to avoid the need for NAT, NAT for
IPv4 will still be used for a while. NAT is one of the mechanisms used
in the transition from IPv4 to IPv6, so it will not disappear any time
soon. In addition, it is a very functional tool for working with IPv4
addressing. NAT and PAT (or NAT Overload) are usually carried out
on ASA devices, which have powerful tools to accomplish these tasks
in many forms:
Static NAT
Dynamic NAT
Identity NAT
Policy NAT
If there are internal servers or servers in the DMZ that are reached
using translated addresses, it is a good practice to isolate these servers
into their own address space and VLAN, possibly using private
VLANs. NAT is often used to support content load balancing servers,
which usually must be isolated by implementing address translation.
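Although the text notes that these translations are usually carried out on ASA devices, the concept is easy to sketch with Cisco IOS NAT overload (PAT); the interface names and addresses below are illustrative assumptions:

    interface GigabitEthernet0/1
     ip nat inside                  ! faces the internal 10.0.0.0/8 addressing
    !
    interface GigabitEthernet0/0
     ip nat outside                 ! faces the public Internet
    !
    access-list 1 permit 10.0.0.0 0.255.255.255
    ! PAT: many internal hosts share the single public address of Gi0/0
    ip nat inside source list 1 interface GigabitEthernet0/0 overload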
Address Representation
The Version field, as in the IPv4 header, offers information about the
IP protocol version. The Traffic Class field is used to tag the packet
with the class of traffic it uses in its DiffServ mechanisms. IPv6 also
adds a Flow Label field, which can be used for QoS mechanisms, by
tagging a flow. This can be used for multilayer switching techniques
and will offer faster packet switching on the network devices. The
Payload Length field is the same as the Total Length field in IPv4.
The Next Header is an important IPv6 field. The value of this field
determines the type of information that follows the basic IPv6 header.
It can be a Transport Layer packet like TCP or UDP or it can
designate an extension header. The Next Header field is the equivalent
of the Protocol field in IPv4. The next field is Hop Limit, which
designates the maximum number of hops an IP packet can traverse.
Each hop/router decrements this field by one, so this is similar to the
TTL field in IPv4. There is no Checksum field in the IPv6 header, so
the router can decrement the Hop Limit field without recalculating the
checksum. Finally, there is the 128-bit source address and the 128-bit
destination address.
Routing header
Fragmentation header
Authentication header
IPsec ESP header
Hop-by-Hop Options header
2001:43aa:0000:0000:11b4:0031:0000:c110.
2001:43aa::11b4:0031:0000:c110
2001:43aa::11b4:0031:0:c110
2001:43aa::11b4:31:0:c110
Note: The double colon (::) notation can appear only one time in an
IPv6 address.
Based on the IPv6 global unicast address format shown in Figure 3.8
above, the first 23 bits represent the registry, the first 32 bits represent
the ISP prefix, the first 48 bits are the site prefix, and /64 represents
the subnet prefix. The remaining bits are allocated to the interface ID.
The global unicast address and the anycast address share the same format; anycast addresses are actually allocated from the unicast address space. To devices that are not configured for anycast, these addresses appear as unicast addresses.
IPv6 Mechanisms
As with IPv4, there are different mechanisms available for IPv6 and
the most important of these includes the following:
ICMPv6
IPv6 Neighbor Discovery (ND)
Name resolution
Path Maximum Transmission Unit (MTU) Discovery
DHCPv6
IPv6 security
IPv6 routing protocols
Router Solicitation
Router Advertisement
Neighbor Solicitation
Neighbor Advertisement
Redirect
IPv6 also has some security mechanisms. Unlike IPv4, IPv6 natively
supports IPsec (an open security framework) with two mechanisms:
the Authentication Header (AH) and the Encapsulating Security
Payload (ESP).
The designers of the IPv6 protocol suite have suggested that IPv4 will
not go away anytime soon, and it will strongly coexist with IPv6 in
combined addressing schemes. The key to all IPv4 to IPv6 transition
mechanisms is dual-stack functionality, which allows a device to
operate both in IPv4 mode and in IPv6 mode.
Static tunnels:
o Generic Routing Encapsulation (GRE) – default tunnel mode
o IPv6IP (less overhead, no CLNS transport)
Automatic tunnels:
o 6to4 (embeds the IPv4 address into an IPv6 prefix to provide automatic tunnel endpoint determination); automatically generates tunnels based on the utilized addressing scheme
o Intra-Site Automatic Tunnel Addressing Protocol (ISATAP) – automatic host-to-router and host-to-host tunneling
Figure 3.9 – IPv6 over IPv4 Tunneling
Analyzing Figure 3.9 above, the IPv4 cloud is bordered by two dual-stack routers that run both the IPv4 and the IPv6 protocol stacks. These two routers support the transition mechanism by tunneling IPv6 inside IPv4, and each router connects to an IPv6 island. To carry IPv6 traffic between the two edge islands, a tunnel is created between the two routers that encapsulates IPv6 packets inside IPv4 packets. These packets are sent through the IPv4 cloud as regular IPv4 packets and they are de-encapsulated when they reach the other end.
An IPv6 packet generated in the left-side network reaches a
destination in the right-side network, so it is very easy to tunnel IPv6
inside IPv4 because of the dual-stack routers at the edge of the IPv4
infrastructure. Static tunneling methods are generally used when
dealing with point-to-point links, while dynamic tunneling methods
work best when using point-to-multipoint connections.
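A minimal sketch of a manually configured tunnel between the two dual-stack edge routers; the addresses use documentation ranges and are illustrative assumptions:

    interface Tunnel0
     ipv6 address 2001:DB8:FFFF::1/64
     tunnel source GigabitEthernet0/0   ! IPv4-facing interface on this edge router
     tunnel destination 192.0.2.2       ! IPv4 address of the far-end dual-stack router
     tunnel mode ipv6ip                 ! IPv6-in-IPv4; omit this line for default GRE mode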
Note: ISATAP is a protocol that will soon fade away because almost
all modern hosts and routers have native IPv6 support.
Chapter 3 – Summary
Unlike IPv4, IPv6 does not use broadcasts. Instead, IPv6 uses the
following methods for sending traffic from a source to one or more
destinations:
Unicast (one-to-one): Unicast support in IPv6 allows a single
source to send traffic to a single destination, just as unicast
functions in IPv4.
Anycast (one-to-nearest): A group of interfaces belonging to
nodes with similar characteristics (e.g., interfaces in replicated
FTP servers) can be assigned an anycast address. When a host
wants to reach one of those nodes, the host can send traffic to the
anycast address and the node belonging to the anycast group that
is closest to the sender will respond.
Multicast (one-to-many): Like IPv4, IPv6 supports multicast
addressing, where multiple nodes can join a multicast group. The
sender sends traffic to the multicast IP address and all members
of the multicast group receive the traffic.
IPv6 allows the use of static routing and supports specific dynamic
routing protocols that are variations of the IPv4 routing protocols
modified or redesigned to support IPv6:
RIPng
OSPFv3
EIGRPv6
IS-IS
BGP
Chapter 3 – End of Chapter Quiz
1. Which of the following is NOT an advantage of using address summarization?
2. What is the name of the most general summary route that summarizes
everything using the “ip route 0.0.0.0 0.0.0.0 <interface>” command?
a. Null route
b. Aggregate route
c. Binary route
d. Default route
e. Ultimate route
3. The process of address summarization has the benefit of reducing the size of
the ACLs.
a. True
b. False
a. True
b. False
8. What are the most important benefits of using VLSM (choose all that apply)?
d. Increased security
9. What does the subnet broadcast address represent in terms of the binary form
of an address?
a. True
b. False
a. 32 bits
b. 64 bits
c. 128 bits
d. 256 bits
12. With IPv6, every host in the world can have a unique address.
a. True
b. False
13. Which of the following are major differences between IPv4 and IPv6 (choose
two)?
a. True
b. False
15. Which of the following are advantages of using route aggregation (choose all
that apply)?
a. Lowered overhead
c. Increased bandwidth
a. True
b. False
17. Which IPv6 header field is similar to the ToS field used in IPv4?
a. Traffic Group
b. Traffic Class
c. Class Map
d. Traffic Type
a. True
b. False
19. How many times can the double colon (::) notation appear in an IPv6 address?
a. One time
b. Two times
c. Three times
c. Anycast
d. All-nodes multicast
Chapter 3 – Answers
1 – c
2 – d
3 – a
4 – b
5 – a, c
6 – b
7 – b
8 – a, c, e
9 – d
10 – a
11 – c
12 – a
13 – a, c
14 – b
15 – a, b, d
16 – a
17 – b
18 – a
19 – a
20 – d
Chapter 4: Designing Advanced IP
Multicast
This chapter covers the following topics:
Corporate meetings
Video conferencing
E-learning solutions
Webcasting information
Distributing applications
Streaming news feeds
Streaming stock quotes
IP Multicast Functionality
In multicasting, a source application sends multicast traffic to a group
destination address. The hosts interested in receiving this traffic join
the specific group address by signaling their upstream devices. The
routers then build a tree from the senders to the receivers and portions
of the network that do not have receivers do not receive this
potentially bandwidth-intense traffic. Examples of multicast
applications include:
IPTV
Videoconferencing applications
Data center replication
Stock tickers
Routing protocols
IGMP is used for receiver devices to signal routers on the LAN that
they want traffic for a specific group. IGMP comes in the following
versions:
From a design standpoint, a network designer must understand three major PIM deployment methods that are used in modern networks: PIM-SM, PIM-SSM, and Bidirectional PIM. PIM is a router-to-router control protocol that builds a loop-free tree from the sender to the receivers. It is called "protocol independent" because it relies on the underlying unicast routing protocol and does not care which one it is. PIM will use whatever IGP is deployed (e.g., RIP, EIGRP, OSPF, IS-IS, etc.) and will base its operation on the unicast routing information that the IGP provides.
PIM comes in two versions (PIMv1 and PIMv2) and it comes in two important
modes, Dense Mode and Sparse Mode. Dense Mode, which is now becoming a
legacy mode, uses an implicit join approach and sends the multicast traffic
everywhere, unless a specific host says it does not want it. This is also known as
“flood and prune” behavior. PIM-DM is suitable for multicast environments
with a dense distribution of receivers.
PIM-SM is also known as Any Source Multicast (ASM) and is described in RFC
4601. It uses a combination of Shared Trees, Source-based Trees, and
Rendezvous Points (RPs) and is used by the majority of modern multicast
deployments.
Figure 4.6 – PIM Sparse Mode
The control plane helps build the multicast tree from the senders to the
receivers, and traffic can then start to flow over the multicast data
plane. When routers receive multicast packets, they perform two
actions:
Static RP addressing
Anycast RP
Auto-RP
Bootstrap Router (BSR)
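As a minimal sketch of the first option, a statically addressed RP (the RP address and interface are illustrative; the same RP address must be configured consistently on every router in the domain):

    ip multicast-routing
    !
    interface GigabitEthernet0/0
     ip pim sparse-mode
    !
    ip pim rp-address 10.0.0.1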
The solution to this issue is statically assigning an RP for just the two
Auto-RP groups, 224.0.1.39 and 224.0.1.40. This defeats the purpose
of automatic assignment, so Cisco invented PIM Sparse-Dense Mode
as a solution to this problem. Cisco Sparse-Dense Mode is not a
perfect solution because if there is an RP failure, Dense Mode
flooding of the other multicast traffic that is involved with these
groups will occur.
Ethernet Multicasting
In Ethernet environments, the Layer 2 multicast process is seamlessly
supported. A 48-bit multicast address is utilized and the stations listen
for the address at Layer 2. These addresses are used for various
purposes beyond traditional multicast implementations, for example,
Cisco Discovery Protocol (CDP) and Layer 2 protocol tunneling
techniques.
IGMP snooping
CGMP
Many of the multicast security features are related to ensuring that the
network resources, as well as the access control mechanisms for the
different senders and receivers, are managed properly. Regarding the
types of multicast traffic that are allowed on the network, a well-
defined solution should be in place because there are many different
types of multicast applications, and some of these may not be allowed
in the organization. An important consideration regarding IP multicast
security is the potential for various types of attack traffic (e.g., from
rogue sources to multicast hosts and to networks that do not have
receivers, and from rogue receivers).
With unicast traffic, any node can send a packet to another node on
the unicast network, and there is no implicit protection of the receivers
from the source, which is the reason firewalls and ACLs are used.
However, in multicast routing, the source devices send traffic to a
multicast group, not to specific devices, and a receiver on a branch in
the network must explicitly join the multicast group before traffic is
forwarded to that branch. In other words, multicast has built-in
protection mechanisms to implicitly protect receivers against packets
from unknown sources or potential attackers, such as Reverse Path
Forwarding.
With SSM, attacks from unknown sources will not happen because the receivers must join a specific source host in a specific multicast group. Any unwanted traffic will reach only the first-hop router closest to the source and then it will be discarded. The attack traffic will not even generate state information on the first-hop router.
Unicast has two scoped addresses (public and private spaces) that
provide a layer of security by hiding internal hosts using NAT and
PAT technologies. There can also be overlapping scoped addresses
between different companies. On the other hand, multicast has its own
scoped addresses as defined by IANA. IPv4 multicast supports the
ability to administratively scope definitions within the 239.0.0.0/8
range. One of the most common actions is to configure a router with
ACLs that can allow or prevent multicast traffic in an address range
from flowing outside of an autonomous system or any user-defined
domain.
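A sketch of such a boundary on a border interface, assuming the administratively scoped 239.0.0.0/8 range should not leave the domain (the ACL number and interface are illustrative assumptions):

    access-list 10 deny 239.0.0.0 0.255.255.255    ! keep admin-scoped groups inside
    access-list 10 permit 224.0.0.0 15.255.255.255 ! allow all other multicast groups
    !
    interface GigabitEthernet0/1
     ip multicast boundary 10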
The unicast state is derived from the IP routing table, which can be populated differently based on the routing protocol used (e.g., EIGRP and OSPF use topology/link-state databases that ultimately lead to the routing table). Most routers run Cisco Express Forwarding (CEF), so the Forwarding Information Base (FIB) of CEF is a real-time representation of the routing table. The state changes only when there is a change in the topology, and, thanks to CEF and the FIB, this is the only factor that affects the CPU on the router. In other words, end-user functions will not impact the state of the router or the activity on the router, other than the traffic that moves through the links. One of the benefits of the FIB is that forwarding is accomplished deterministically, without having to involve the route processor.
However, with multicast, the state also includes additional sources and
receivers for new applications that are being used in the multicast
design. Application state change and user activity can affect the CPU
because both of these aspects can affect multicast traffic and state
information. In a multicast environment, the multicast sources and the
applications are additional network design constraints that go beyond
routing topology changes. This is a key consideration when designing
multicast.
Figure 4.13 – Multicast Router Traffic Flow
IPv6 Multicast
The IPv6 multicast address space covers the FF00::/8 range, and more
specific addresses include the following:
PIMv2 for IPv6 supports only PIM-SM behavior, not Dense Mode or Sparse-Dense Mode behavior (Bidirectional PIM is supported only via static configuration). Just as in IPv4, the IPv6 unicast Routing Information Base (RIB) is used for the RPF checks. PIMv2 for IPv6 also dynamically creates tunnels, which serve as an efficient way to carry out the multicast source registration process.
Chapter 4 – Summary
Multicasting implies taking a single data packet and sending it to a
group of destinations simultaneously. This behavior is a many-to-
many transmission, as opposed to other types of communication such
as unicast, broadcast, or anycast.
The PIM operation mode used most often is Sparse Mode, which
utilizes an explicit join-type behavior, so the receiver does not get the
multicast traffic unless it asks for it. This can be considered a “pull”
mechanism, as opposed to the “push” mechanisms used in PIM-DM.
The “pull” mechanism allows PIM-SM to forward multicast traffic
only to the network segments with active receivers that have actually
requested the data. PIM-SM distributes the data about active sources
by forwarding data packets on Shared Trees.
Static RP addressing
Anycast RP
Auto-RP
Bootstrap Router (BSR) – the most preferred method
IGMP is a protocol that operates at Layer 3, so the Layer 2 switches are not
aware of the hosts that want to join the multicast groups. By default, Layer 2
switches flood the received multicast frames to all of the ports in the VLAN,
even if only one device on one port needs the specific information. To improve the switches’ behavior when they receive multicast frames, the following technologies allow for an effective implementation of multicast at Layer 2:
IGMP snooping
Cisco Group Management Protocol (CGMP)
CGMP is a Cisco proprietary protocol that runs between the multicast
router and the switch. It works as a client-server model, where the
router is a CGMP server and the switch is a CGMP client. CGMP
allows switches to communicate with multicast-enabled routers to
figure out whether any users attached to the switches are part of any
particular multicasting groups and whether they qualify for receiving
the specific stream of data.
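As a minimal sketch (the VLAN number is a placeholder), IGMP snooping is enabled globally by default on most Catalyst platforms and can be confirmed or set explicitly as follows:

! Enable IGMP snooping globally and for a specific VLAN
ip igmp snooping
ip igmp snooping vlan 10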
State information
Replication process
The join process
Unidirectional flows
Content attacks
Bandwidth attacks
Attacks against routers and switches
b. Increasing bandwidth consumption
d. Reducing the processing on receiving hosts, especially if they are not interested in the transmission
a. TCP
b. ICMP
c. UDP
d. RIP
e. CDP
3. Because of its underlying Transport Layer protocol, multicast has a connectionless behavior and offers best
effort packet transport.
a. True
b. False
a. PIM
b. ICMP
c. CDP
d. IGMP
e. RIP
6. Which of the following protocols is used for receiver devices to signal routers on the LAN that they want traffic
for a specific group?
a. PIM
b. ICMP
c. CDP
d. IGMP
e. RIP
a. True
b. False
8. Which of the following protocols is used by multicast-enabled routers to forward incoming multicast streams
to a particular switch port?
a. PIM
b. ICMP
c. CDP
d. IGMP
e. RIP
9. Which of the following are valid PIM modes of operation (choose all that apply)?
a. PIM-AM
b. PIM-DM
c. PIM-SSM
d. PIM MSDP
e. PIM-SM
10. MOSPF and DVMRP are modern multicast protocols based on the PIM framework.
a. True
b. False
11. Which of the following PIM modes of operation uses an explicit join-type behavior (or a “pull” mechanism)?
a. PIM-AM
b. PIM-DM
c. PIM-SSM
d. PIM MSDP
e. PIM-SM
12. PIM-DM was designed to be used in environments with a dense distribution of receivers.
a. True
b. False
13. Which of the following describes the concept used by PIM-SM to process Join requests?
a. RP
b. AP
c. CDP
d. MLD
14. One of the biggest issues with PIM-DM implementations is the State Refresh mechanisms that flood traffic
every three minutes by default.
a. True
b. False
15. Which of the following is a mechanism that allows PIM-SM to switch to a Source Tree mode of operation if the
traffic rate goes over a certain threshold?
a. Anycast RP
b. SPT switchover
c. SSM switchover
d. Auto-RP
16. Static addressing is the most preferred method of RP configuration, especially in large multicast deployments.
a. True
b. False
a. Static addressing
b. Auto-RP
c. DHCP
d. BSR
a. True
b. False
19. Which of the following is the recommended protocol to be used in switched environments to improve the
switches’ behavior when they receive multicast frames?
a. PIM
b. CGMP
c. IGMP
d. IGMP snooping
20. Which of the following protocols replaces IGMP in IPv6 multicast environments?
a. BSR
b. ICMP
c. MLD
d. PIM
Chapter 4 – Answers
1–b
2–c
3–a
4 – a, d
5–d
6–d
7–b
8–a
9 – b, c, e
10 – b
11 – e
12 – a
13 – a
14 – a
15 – b
16 – b
17 – c
18 – a
19 – d
20 – c
Note: RIPv1 and IGRP are considered legacy protocols and some modern
network devices do not support them.
RIPv1
RIPv2
IGRP
RIPng
OSPF
IS-IS
OSPFv3
Exterior routing protocols run between ASs and the most common example is BGPv4. The main reason to use a different type of routing protocol to carry routes outside of the AS boundaries is the need to exchange a large number of route entries. In this regard, exterior routing protocols support special options and features that are used to implement various routing policies. The metrics for these kinds of protocols include more parameters than the metrics of interior routing protocols because of the need for flexible routing policies when choosing the best possible path.
Administrative Distance (AD) is how a router selects a route learned from one protocol over another, but something else must also be decided: the way in which the device will select one routing table entry over another entry from the same protocol. Routing protocol metrics are used to make this decision.
Different routing protocols use different metrics. RIP uses the hop
count as a metric, selecting the best route based on the lowest number
of routers it passed through. This is not very efficient because the
shortest path can have a lower bandwidth than other paths. OSPF is
more evolved and takes bandwidth into consideration, creating a
metric called cost. Cost is directly generated from the bandwidth
value, so a low bandwidth has a high cost and a high bandwidth has a
low cost.
Note: One of the reasons RIP has a high AD value is that it uses the
hop count metric, which is not very efficient in complex
environments. The more sophisticated the metric calculation is, the
lower the AD value assigned to different routing protocols.
NAT blocks
Blocks used in redistribution
Blocks for management VLANs
Blocks for content services
With any interior routing protocol (except for EIGRP), the “default-information originate” command can be used to generate a default route. However, with EIGRP, the “ip default-network <prefix>” command must be used to configure the gateway of last resort or the default route. This network must be present in the routing table, either as a static route or as an EIGRP route, before the EIGRP router will announce the network as a candidate default route to other EIGRP routers.
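As a minimal sketch of both approaches (all prefixes, next-hop addresses, and process numbers are placeholder values), the configuration might look like this:

! OSPF (or RIP): originate a default route into the protocol
router ospf 1
 default-information originate
!
! EIGRP: flag an existing routing table entry as the candidate default
ip route 192.0.2.0 255.255.255.0 10.1.1.2
ip default-network 192.0.2.0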
OSPF can define different types of stub areas. This method allows for
automatic route filtering between areas on the Area Border Routers
(ABRs). OSPF supports the following area types:
Normal area
Stub area
Totally stubby area
Not so stubby area (NSSA)
Totally not so stubby area (totally NSSA)
Route Filtering
Analyzing Figure 5.6 above, filters are used to prevent the OSPF
information that was redistributed into EIGRP information from being
re-advertised back into the OSPF part of the organization. This is
often referred to as “manual split horizon.” If filtering is not used and
there is an outage, routing loops or a strange convergence behavior
can appear and this leads to instability in the network design. Both
OSPF and EIGRP support route tagging, which facilitates this process.
Route maps can be used to add numeric tags to specific prefixes. The tag information is passed along in routing updates, and other routers can filter routes that match or do not match specific tags. This can be accomplished using route maps in distribution lists.
Note: The tag values of 110 and 90 used in Figure 5.6 above reflect
the ADs of OSPF and EIGRP. While this is not mandatory, it is a
recommended best practice when carrying out complex redistribution
because this technique helps to easily identify the original protocol
that generated each route.
Advanced EIGRP
EIGRP is a unique protocol because it uses a hybrid approach,
combining distance vector and link-state characteristics. Combining
these features makes EIGRP very robust and allows for fast
convergence, even in large topologies. The first thing a network
designer should consider is that EIGRP is a Cisco proprietary
protocol, so it can be used only in environments that contain Cisco
devices. Like RIPv2, EIGRP is a classless protocol and it allows for
VLSM. Another similarity between the two protocols is their
automatic summarization behavior, but this can be disabled just as
easily.
EIGRP Operations
EIGRP is the only IGP that can perform unequal cost load balancing across different paths. This is accomplished using the “variance” command, which defines a tolerance multiplier that is applied to the best metric and results in the maximum allowed metric.
Figure 5.7 – EIGRP Unequal Cost Load Balancing
Figure 5.7 above is an example in which there are two routes with a cumulative metric of 100 to a destination and a third route with a cumulative metric of 200 to the same destination. By default, EIGRP performs only equal cost load balancing, so it will send traffic across only the first two links, which have the best metric of 100. To also send traffic over the third link, the variance should be set to 2, meaning the maximum allowed metric is two times the lowest metric, equaling 200. Traffic will be distributed inversely proportional to the metric, meaning that for each packet sent over the third link, two packets are sent over each of the first two links because their metric is better.
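As a minimal sketch of this tuning (the AS number and network statement are placeholders), the behavior in Figure 5.7 would require a single extra command under the EIGRP process:

router eigrp 100
 network 10.0.0.0
! accept any feasible path with a metric up to 2x the best metric
 variance 2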
EIGRP offers more flexibility than OSPF, which is more rigid and much more governed by operational and technical considerations. A small- to medium-sized business with an arbitrary network architecture can use EIGRP without many problems because the network does not have to be restructured until it scales up to a larger topology. However, if the topology reaches the point where there are hundreds of routers, then EIGRP might become unstable and present long convergence times.
Contrary to what one might think, adjusting the delay timers and tuning using variance are suboptimal solutions when trying to achieve low convergence times, because this becomes almost impossible to perform as the network scales to hundreds of routers. Another suboptimal way many organizations have tried to achieve low convergence times is implementing multiple EIGRP ASs or process numbers. Splitting the network into two or three EIGRP ASs to limit EIGRP queries has proven to be an inefficient solution.
The most efficient methods for limiting EIGRP queries and achieving a scalable EIGRP design are as follows:
Implementing a solid route summarization design
Using EIGRP stub routers at the edge of the network
Advanced OSPF
The OSPF protocol is one of the most complex routing protocols that
can be deployed in modern networks. OSPF is an open-standard
protocol, whereas EIGRP is not. OSPF is a classless routing protocol
and this allows it to support VLSM. OSPF uses the Dijkstra SPF
algorithm to select loop-free paths throughout the topology, while
EIGRP uses DUAL. OSPF is designed to be very scalable because it
is a hierarchical routing protocol, using the concept of “areas” to split
the topology into smaller sections.
OSPF usually does not converge as fast as EIGRP but it does offer
efficient updating and convergence, as it takes bandwidth into
consideration when calculating route metrics (or costs). A higher
bandwidth generates a lower cost and lower costs are preferred in
OSPF. OSPF supports authentication, as does EIGRP and RIPv2, and
it is very extensible, as are BGP and IS-IS, meaning the protocol can
be modified in the future to handle other forms of traffic.
OSPF Functionality
Broadcast
Non-broadcast
Point-to-point
Point-to-multipoint
Point-to-multipoint non-broadcast
Loopback
OSPF does a good job of automatically selecting the network type that
is most appropriate for a given technology. For example, configuring
OSPF in a broadcast-based Ethernet environment will default to the
broadcast type; configuring OSPF on a Frame Relay physical interface
will default to the non-broadcast type; and configuring OSPF on a
point-to-point serial link will default to the point-to-point network
type.
Two network types that are never automatically assigned are point-to-
multipoint and point-to-multipoint non-broadcast. These are most
appropriate for partial mesh (hub-and-spoke) environments and must
be manually configured.
Virtual Links
If the backbone area is split into multiple pieces, virtual links can
ensure its continuity. A virtual link is an Area 0 tunnel that connects
the dispersed backbone areas. Virtual links are not considered a best
design practice but they can be useful in particular situations, like
company mergers, as depicted in Figure 5.9 below:
Note: In the scenario depicted in Figure 5.10, the virtual link is often
considered an extension of the non-transit area (Area 200 in this case) to
reach Area 0. This is not true because the virtual link is part of Area 0, so
in fact Area 0 is extended to reach the non-transit area (Area 200 in this
case).
Link-State Advertisements
LSAs that only flow within an area (intra-area routes): Types 1 and 2 (O)
LSAs that flow between areas (inter-area routes): Types 3 and 4 (O, IA)
External routes: Type 5 (E1/E2) or Type 7 (N1/N2)
OSPF offers the capability to create different area types that relate to the various
LSA types presented above and the way they flow inside a specific area. The
different area types are as follows:
Regular area: This is the normal OSPF area, with no restrictions in the
LSA flow.
Stub area: This area prevents the external Type 5 LSAs from entering the
area. It also stops Type 4 LSAs, as they are used only in conjunction with
Type 5 LSAs.
Totally stubby area: This area prevents Type 5, Type 4, and Type 3 LSAs
from entering the area. A default route is automatically injected to reach
the internal destinations.
Not so stubby area (NSSA): The NSSA blocks Type 4 and Type 5 LSAs but
it can connect to other domains and an ASBR can be in this area. The
NSSA does not receive external routes injected in other areas but it can
inject external routes into the OSPF domain. The external routes will be
injected as Type 7 LSAs. These Type 7 LSAs are converted to Type 5 LSAs by the NSSA ABR (the router that connects to the backbone), and they reach other OSPF areas as Type 5 LSAs.
Totally not so stubby area (totally NSSA): This area has the same
characteristics as the NSSA, except that it also blocks Type 3 LSAs from
entering the area.
Note: All routers in an OSPF area must agree on the stub flag.
The various areas and LSA types are summarized in Figure 5.11 below:
Figure 5.11 – OSPF Areas and LSA Types
All of these areas and LSA types make OSPF a very hierarchical and scalable routing protocol that can be tweaked and tuned for very large environments based on all of these design elements. OSPF allows for summarization, which can be carried out in two locations: on the Area Border Routers (ABRs) and on the Autonomous System Boundary Routers (ASBRs).
Scalable OSPF
OSPF scalability is constrained mainly by the following router resources:
Memory
CPU
Interface bandwidth
Routes can be summarized and the LSA database size and flooding can be reduced in OSPF by using the “area range” command (internal summarization on ABRs) and the “summary-address” command (external summarization on ASBRs), by implementing filtering, or by originating default routes into certain areas. Different types of routes can also be filtered in stub and totally stubby areas.
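As a minimal sketch (the process number, area, and prefixes are placeholders), the two summarization commands are applied under the OSPF process:

router ospf 1
! internal summarization on an ABR for prefixes from area 10
 area 10 range 172.16.0.0 255.255.0.0
! external summarization on an ASBR for redistributed prefixes
 summary-address 192.168.0.0 255.255.0.0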
OSPF has several choices for the type of networks to use. Cisco recommends avoiding the use of broadcast or Non-Broadcast Multi-Access (NBMA) network types and instead using one of the following types:
Point-to-point
Point-to-multipoint
One technique for obtaining fast convergence in OSPF is using subsecond OSPF timers. This is implemented by setting the dead interval to one second and using a hello multiplier to designate how many hello packets will be sent within that one-second interval. Using fast hellos is recommended in small- to medium-sized networks, but they should not be used in large networks; this offers fast convergence but it is not a scalable solution.
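As a minimal sketch (the interface is a placeholder), subsecond timers are configured per interface:

interface GigabitEthernet0/1
! dead interval of one second, with four hellos sent per second
 ip ospf dead-interval minimal hello-multiplier 4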
Advanced BGP
Necessity of BGP
BGP Functionality
Figure 5.16 above shows an example of BGP peering types. The BGP peering type a route is sent to and received from will influence the update and path selection rules. For example, eBGP peers are assumed to be directly connected. If they are not, the “ebgp-multihop” option must be configured on the neighbor statement to let the devices know they are not directly connected and to establish the BGP peering. This assumption has no equivalent for iBGP peering, where there is no requirement for direct connectivity.
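As a minimal sketch (AS numbers and the neighbor address are placeholders), a loopback-to-loopback eBGP peering might be configured as follows:

router bgp 65001
 neighbor 203.0.113.1 remote-as 65002
! allow up to two hops to reach the non-directly connected peer
 neighbor 203.0.113.1 ebgp-multihop 2
 neighbor 203.0.113.1 update-source Loopback0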
The solution that involves a full mesh of iBGP peers is the least preferred because of the increased number of connections. The total number of connections is n*(n-1)/2, where “n” equals the number of BGP routers, so for 1,000 routers there would be 499,500 peering sessions. This is very hard to implement and maintain, so the Route Reflector and Confederation solutions are recommended instead. Details about BGP Route Reflectors and BGP Confederations will be covered later in this chapter.
BGP can use multiple attributes to define a routing policy and the most important are as follows:
Weight
Local preference
AS path
Origin
Multi-Exit Discriminator (MED)
Next hop
Community
BGP systems will analyze all of these attributes and will determine the
best path to get to a destination based on this very complex decision
process. Only the best route is sent to the routing table and to the
peers. Whether the next hop is reachable must be determined first. If it is not, the prefixes will not have a best path available, but if it is, the decision process will analyze the following aspects (in order):
Highest weight
Highest local preference
Locally originated routes
Shortest AS path
Lowest origin type
Lowest MED
eBGP paths over iBGP paths
Lowest IGP metric to the next hop
Route Reflectors (RRs) are nodes that reflect the iBGP updates to
devices that are configured as RR clients. This solution is easy to
design and implement and solves the iBGP split horizon rule. A full-
mesh connection between RR nodes and normal nodes (non-RR
clients) must be configured, but a full-mesh connection between the
RR and its clients is not required. This concept is illustrated in Figure
5.17 below:
BGP RRs, defined in RFC 2796, are iBGP speakers that reflect routes that were learned from iBGP peers to RR clients. They also reflect routes received from RR clients to other iBGP peers (non-RR clients).
Route reflector client configuration is carried out only on the RR
device, using the “neighbor <ip_address> route-reflector-client”
command. This configuration process can be performed incrementally
as more RRs or RR clients are added. The RR client will often
establish peer sessions only with RR devices.
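As a minimal sketch (the AS number and client addresses are placeholders), all of the configuration sits on the RR itself; the clients are configured as ordinary iBGP neighbors:

router bgp 65001
 neighbor 10.1.1.1 remote-as 65001
 neighbor 10.1.1.1 route-reflector-client
 neighbor 10.1.1.2 remote-as 65001
 neighbor 10.1.1.2 route-reflector-client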
BGP Confederations
IPv6 Routing
Cisco routers do not route IPv6 by default and this capability should
be activated with the “ipv6 unicast-routing” command. Cisco routers
are dual-stack capable by default, meaning they are capable of running
IPv4 and IPv6 simultaneously on the same interfaces.
IPv6 allows the use of static routing and it supports specific dynamic routing protocols that are variations of the IPv4 routing protocols modified or redesigned to support IPv6, such as the following:
RIPng
OSPFv3
EIGRPv6
IS-IS for IPv6
BGP (with IPv6 address families)
RIPng, OSPFv3, and EIGRPv6 are new routing protocols that work
independently of the IPv4 versions and they run in a completely
separate process on the device. BGP and IS-IS are exceptions to this
rule, as they route IPv6 traffic using the same process used for IPv4
traffic, but they use the concept of address families that hold the entire
IPv6 configuration.
Many of the issues with IPv4 (e.g., name resolution and NBMA
environments) still exist with IPv6 routing. An important aspect is that
IPv6 routing protocols communicate with the remote link-local
addresses when establishing their adjacencies and exchanging routing
information. In the routing table of an IPv6 router, the next hops are
the link-local addresses of the neighbors.
RIPng, also called RIP for IPv6, was specified in RFC 2080 and is similar in operation to RIPv1 and RIPv2. While RIPv2 uses the multicast address 224.0.0.9 to exchange routing information with its neighbors, RIPng uses the similar FF02::9 address and UDP port 521. Another difference is that RIPng is configured at the interface level, while RIPv1 and RIPv2 are configured at the global routing configuration level.
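As a minimal sketch (the address and the RIPng process name “BRANCH” are placeholders), enabling IPv6 routing and RIPng takes only a few commands:

! enable IPv6 routing globally, then enable RIPng per interface
ipv6 unicast-routing
interface GigabitEthernet0/0
 ipv6 address 2001:DB8::1/64
 ipv6 rip BRANCH enable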
Chapter 5: Summary
Exterior routing protocols run between ASs (inter-AS) and the most common example is BGPv4. The main reason a different type of routing protocol is used to carry routes outside of the AS boundaries is the need to exchange a large number of route entries.
Routers use Administrative Distance (AD) to select the best route
when multiple routing protocols advertise the same prefix. The AD
value represents how trustworthy a particular routing protocol is.
NAT blocks
Blocks used in redistribution
Blocks for management VLANs
Blocks for content services
Route filtering allows the control of network traffic flow and prevents
unwanted transit traffic, especially in situations that feature
redundancy or multiple paths. Route filtering protects against
erroneous routing updates and there are several techniques that can be
used in this regard. For example, with OSPF, if Core Layer
connectivity is lost, traffic should not be rerouted through a remote
site.
Using AD
Performing redistribution by moving the boundary routers in
small steps
The most efficient methods for limiting EIGRP queries and achieving a scalable EIGRP design are as follows:
Implementing a solid route summarization design
Using EIGRP stub routers at the edge of the network
Open Shortest Path First (OSPF) protocol is one of the most complex routing protocols that can be deployed in modern networks. OSPF is an open-standard protocol, while EIGRP is not. OSPF functions by using the Dijkstra SPF algorithm and by exchanging Link-State Advertisements (LSAs) between neighbors. LSA types are as follows:
LSAs that only flow within an area (intra-area routes): Types 1 and 2 (O)
LSAs that flow between areas (inter-area routes): Types 3 and 4 (O, IA)
External routes: Type 5 (E1/E2) or Type 7 (N1/N2)
OSPF offers the capability to create different area types that relate to
the various LSA types presented above and the way they flow inside a
specific area. The different area types are as follows:
Regular area
Stub area
Totally stubby area
Not so stubby area (NSSA)
Totally not so stubby area (totally NSSA)
BGP can be used in transit networks (ISPs that want to provide transit
to other destinations on the public Internet) or in multi-homed
networks (big organizations that connect to multiple ISPs).
Two methods for scaling iBGP and avoiding the need for a full-mesh
topology are as follows:
Route Reflectors
Confederations
IPv6 allows the use of static routing and also supports specific
dynamic routing protocols that are variations of the IPv4 routing
protocols modified or redesigned to support IPv6:
RIPng
OSPFv3
EIGRPv6
IS-IS
BGP
Chapter 5: End of Chapter Quiz
a. Broadcast route
b. Static route
c. Default route
d. Subnet route
2. What is a floating static route?
b. A static route that has a lower AD than the same route learned via a
routing protocol
c. A static route that has a higher AD than the same route learned via a
routing protocol
d. A dynamic route
e. A route to Null0
a. Recursive routing
b. Summary routing
c. Default routing
d. Dynamic routing
a. Route summarization
b. Default route filtering
c. Dynamic route filtering
d. Defensive filtering
5. What is an Autonomous System?
a. An ISP network
b. The Internet
c. A large Enterprise Network
d. A group of devices under a common administration
a. True
b. False
7. Which of the following is the most efficient method for limiting the scope
of EIGRP queries?
a. Implementing a small backbone area
b. Using a solid summarization design
c. Avoiding stub areas
a. CPU
b. Number of ports
c. Memory
d. Production date
e. Router modules
f. Interface bandwidth
a. IS-IS
b. IGRP
c. OSPF
d. EIGRP
e. RIPv2
10. The most simplified OSPF backbone area design involves grouping the
ABRs only in Area 0.
a. True
b. False
a. Protocol version
b. Cost value
c. Administrative Distance value
d. Metric value
a. True
b. False
a. BGP
b. CDP
c. BFD
d. ODR
14. One of the tools that should NOT be used when trying to achieve OSPF fast
convergence is implementing subsecond timers.
a. True
b. False
c. It uses small update packets
a. True
b. False
c. A router that aggregates routes
18. Which OSPF mechanism is used to ensure continuity when the backbone
area is split into multiple zones?
a. Flex links
b. Port channels
c. Virtual links
d. Backup links
19. Which of the following BGP mechanisms allow for protocol scalability and
are used to avoid creating a full-mesh topology within the AS (choose two)?
a. Route aggregation
b. Confederations
c. Virtual links
d. Route reflectors
20. Which of the following is the only true affirmation of Route Reflector
functionality regarding update propagation?
a. When an RR receives a route from an eBGP peer, it sends the route to all
clients and non-clients
Chapter 5 – Answers
1–c
2–c
3–a
4–d
5–d
6–a
7–b
8 – a, c, d
9–d
10 – a
11 – c
12 – b
13 – c
14 – b
15 – b
16 – a
17 – d
18 – c
19 – b, d
20 – a
Chapter 6: Designing Advanced WAN Services
This chapter covers the following topics:
This chapter will first cover an overview of WAN design and will then
discuss the various optical link technologies used in enterprise
networking, including SONET, SDH, CWDM, DWDM, and RPR. It
will then define the Metro Ethernet service and its various flavors.
Next, it will cover Virtual Private LAN Service technologies and VPN
design considerations, followed by IP security for VPNs and the
popular MPLS VPN technology options. The chapter will finish with
a discussion on WAN design methodologies and various SLA
monitoring aspects related to WAN services.
Types of applications
Availability of applications
Reliability of applications
Costs associated with a particular WAN technology
Usage levels for the applications
WAN Categories
Note: Not too long ago, dial-up technology was the only way to
access Internet resources, offering an average usable bandwidth of
around 40 kbps. Nowadays, this technology is almost obsolete.
WAN Topologies
Optical Networking
Various optical link technologies are used in large enterprise
infrastructures to assure some kind of connectivity. Some of the
optical networking techniques include SONET/SDH, CWDM,
DWDM, and RPR.
SONET and SDH are popular connectivity solutions that ISPs can offer based on customer needs. SONET/SDH uses Time Division Multiplexing (TDM) as the technique for framing the data and voice on a single wavelength through the optical fiber. Single digital streams are multiplexed over the optical fiber using lasers or LEDs. SONET typically uses fiber rings, although it is not exclusively based on a ring topology, and it allows transmission distances of up to 80 km without having to use repeaters. Single-mode fiber can likewise reach distances of up to 80 km without a repeater.
Some common Optical Carrier (OC) rates and the mapped bandwidth
for each standard are shown below. Various OC standards
correspond to various SONET and SDH standards. Synchronous
Transport Signal (STS) is a SONET standard and Synchronous
Transport Module (STM) is an SDH standard.
Some of the considerations that must be taken into account when new
SONET connections are being purchased include the following:
Details about the transport usage (whether the link will be used
for data or voice transport)
Details about the topology (linear or ring-based)
Details about single points of failure in the transport
Customer needs
Costs
Implementation scenarios (e.g., multiple providers, multiple
paths, etc.)
The type of oversubscription offered by the ISP
ISPs often share the same physical fiber paths (e.g., along gas pipes or public electrical paths). Even if dual ISPs are used, the physical fiber path is often the same, so the failure risk does not decrease. If something happens to the pipes that have the fiber links attached, all of the ISPs that follow that specific path will suffer. The recommended scenario is having two ISPs with different physical cabling paths.
Metro Ethernet
Metro Ethernet is a rapidly emerging solution that defines a network infrastructure based on the Ethernet standard (as opposed to Frame Relay or ATM) that is delivered over a MAN. Metro Ethernet extends MAN technology over to the Enterprise WAN at Layer 2 or Layer 3.
Metro Ethernet core technologies are not visible to customers; the ISP is responsible for provisioning these services across its core network from the Metro Ethernet access ring. This represents a huge market for ISPs because many customers already have Ethernet interfaces, and the more customers know about their ISP and its core network, the more informed they will be about the different types of services they can receive and the problems that might happen with those services. In these situations, it is critical to think about the appropriate SLA for advanced WAN services provided to the customer via Metro Ethernet.
Cisco offers a very scalable Metro Ethernet solution over SONET, but it can also support switched Ethernet networks for the enterprise or the very popular IP Multiprotocol Label Switching (MPLS) networks that will be analyzed later in this chapter. This offers multiple classes of service and multiple bandwidth profiles supporting not only data, voice, and video traffic but also storage applications. The Cisco optical Metro Ethernet solution supports five Metro Ethernet Forum (MEF) services: Ethernet Private Line (EPL), Ethernet Relay Service (ERS), Ethernet Wire Service (EWS), Ethernet Multipoint Service (EMS), and Ethernet Relay Multipoint Service (ERMS).
Portal services
Monitoring services
Billing services
Subscriber database services
Address management services
Policy services
Point-to-point LAN extensions
Ethernet access to different storage resources
Connectivity to the data center
EMS is the multipoint version of EWS and it has the same type of
characteristics and technical requirements. In an EMS topology, the P
network is a virtual switch for the customers, so several customer sites
can be connected and can be offered any-to-any communication. This
is very useful, especially with the rapid emergence of multicasting and
VoIP services. The technology that allows this is called Virtual
Private LAN Service (VPLS) and this will be covered later in this
chapter.
EMS allows for rate limiting and service tiers based on distance, bandwidth, and class of service. The CE device is usually a router or a multilayer switch. Typically, ISPs will offer extensions of the corporate or campus LAN to the regional branch offices, extending the LAN over the WAN. This is a great disaster recovery solution, so it can be integrated into the Disaster Recovery Plan (DRP) or the Business Continuity Plan (BCP).
ERMS is a combination of EMS and ERS. This service offers the any-
to-any connectivity of EMS but it also provides the service
multiplexing of ERS. ERMS provides rate limiting and multiple
service tiers based on distance, bandwidth, and class of service. The
CE device is typically a high-end router, which is also used by the
technologies discussed previously. ERMS is used to support various
services, including:
Hierarchical VPLS
Point-to-point connections
GigabitEthernet connections
Ethernet rings
Ethernet over SONET (EoS)
Resilient Packet Ring (RPR)
Even though the VPN concept implies security most of the time,
unsecured VPNs also exist. Frame Relay is an example of this because
it provides private communications between two locations, but it
might not have any security features on top of it. The decision to add
security to the VPN connection depends on the specific requirements
for that connection.
Increased security
Scalability (can continuously add more sites to the VPN)
Flexibility (can use very flexible technologies like MPLS)
Cost (can tunnel traffic through the Internet without much
expense)
IP Security for VPNs
Site-to-site VPNs
Remote access VPNs
ISAKMP/IKE negotiation
Data transmission using negotiated IPsec SA
ISAKMP and IKE are initial secure negotiation channels used for the
initial exchange of parameters. The goals of the ISAKMP/IKE process
include the following:
When designing IPsec VPNs and providing all of the criteria for Phase 1 and Phase 2, a lifetime for the entire negotiation process must be defined. ISAKMP and IPsec SAs have a definable lifetime, and when it expires, the Phase 1 and Phase 2 processes will rerun. A shorter lifetime means more security but more overhead, while a longer lifetime implies less security and less overhead. If an SA stays alive for a long period of time, a potential attacker has a long time to attack the connection (i.e., to break the cryptographic algorithms).
All of the details covered above confirm that IPsec VPNs are based on
a complex suite of technologies, and for each of these technology
areas, an appropriate level of security must be designed. It is
important to balance the SA lifetime and the number of IPsec tunnels
created against the router’s ability to handle these resource-intensive
processes and to always monitor the network devices to make sure
their resource utilization stays within normal limits.
Site-to-Site IPsec VPN Design
Interface selection
IKE configuration
Group policy lookup method
User authentication
Group policies
IPsec transformations
The most popular topology for site-to-site VPN solutions is the hub-
and-spoke topology. Cisco invented the Dynamic Multipoint Virtual
Private Network (DMVPN) as a technology that helps automate the
hub-and-spoke site-to-site VPN deployment. The peers can be
dynamically discovered and on-demand tunnels can be created to
assist with large hub-and-spoke VPN designs.
IPsec
GRE
Next Hop Resolution Protocol (NHRP)
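As a minimal hub-side sketch (all addresses, interface names, and the IPsec profile name are hypothetical placeholders), these three components come together on a single multipoint GRE tunnel interface:

interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
! NHRP lets spokes register dynamically and resolve each other
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
! mGRE allows one tunnel interface to reach many spokes
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROFILE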
GDOI
KSs
COOP KSs
Group members
IP tunnel header preservation
Group security association
Rekey mechanism
Time-based anti-replay (TBAR)
The MPLS label is positioned between the Layer 2 header and the
Layer 3 header. Using MPLS, overhead is added a single time, when
the packet goes into the ISP cloud. After entering the MPLS network,
packet switching occurs much faster than in traditional Layer 3
networks because the MPLS label is simply swapped instead of
having to strip the entire Layer 3 header. MPLS-capable routers, also called Label Switched Routers (LSRs), come in two flavors: Edge LSRs, which impose and remove labels at the edge of the MPLS cloud, and core LSRs, which simply swap labels inside the cloud.
With MPLS, there is a separation of the control plane from the data
plane, resulting in greater efficiency in how the LSRs work. Resources
that are constructed for efficiency of control plane operations, such as
the routing protocol, the routing table, and the exchange of labels, are
completely separate from resources that are designed only to forward
traffic in the data plane as quickly as possible.
Analyzing Figure 6.7 above, the MPLS label is 4 bytes in length and it consists of the following fields: a 20-bit label value, a 3-bit experimental (EXP) field used for QoS marking, a 1-bit bottom-of-stack flag, and an 8-bit TTL field.
Frame Mode MPLS is the most popular MPLS type, and in this
scenario the label is placed between the Layer 2 header and the Layer
3 header (for this reason, MPLS is often considered a Layer 2.5
technology). Cell Mode MPLS is used in ATM networks and it uses
fields in the ATM header as the label.
The MPLS devices need a way in which to exchange the labels that will be utilized for making forwarding decisions. This label exchange process is carried out using various protocols, the most popular of which is the Label Distribution Protocol (LDP). LDP initially uses UDP and multicast hello messages to discover peers, and then a TCP session (port 646) ensures the reliable transmission of the label information.
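As a minimal sketch (addressing and the interface are placeholders), enabling Frame Mode MPLS with LDP on a core-facing link requires very little configuration:

mpls label protocol ldp
interface GigabitEthernet0/1
 ip address 192.0.2.1 255.255.255.252
 mpls ip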
The major factors that help determine the need for Layer 2 or Layer 3 MPLS VPNs include control and management. The amount of control needed should be determined at the company level. A managed Layer 3 MPLS VPN solution puts the CE router under the domain of the ISP in the SLA. In a large organization with a big pool of resources, including qualified network engineers, the Layer 2 VPN solution might be best because it leaves the routing policies and their implementation completely under the organization’s control.
The most important MPLS VPN issues and considerations include the
following:
Response time
Throughput
Reliability
Window size
Data compression
Window size influences the amount of data that can be sent in the
WAN in one “chunk.” Transmission Control Protocol (TCP) involves
using a sliding window concept that works by sending an amount of
data, waiting for an acknowledgement, and then increasing the amount
of data until the maximum window is reached. In the case of a
congested WAN link, everyone in the network that is sending data via
TCP will start increasing the rate at which they send it until the
interface starts dropping packets, forcing everyone to back off from
using the sliding window. When the congestion disappears, everyone will again increase the rate at the same time until new congestion occurs. This process, called TCP global synchronization, repeats again and again and leads to wasted bandwidth during the periods in which all of the hosts decrease their window size simultaneously.
Services
Priorities
Responsibilities of the ISP and the organization
Guarantees
Warranties
The main resource regarding Cisco SLA concepts is the Cisco SLA portal at www.cisco.com/go/saa, where several white papers with information about implementing service level contracts (SLCs) can be found.
SLA Monitoring
SLA monitoring should take into consideration the metrics that define service baselines and that relate to the following:
Ongoing diagnostics
Comparison between different service levels and results
Troubleshooting
Optimization
Chapter 6 – Summary
SONET and SDH are popular connectivity solutions that ISPs can offer based on customer needs. SONET/SDH uses Time Division Multiplexing (TDM) as a technique for framing the data and voice on a single wavelength through fiber. Some of the considerations that must be taken into account when new SONET connections are being purchased include:
Details about transport usage (whether the link will be used for
data or voice transport)
Details about the topology (linear or ring-based)
Details about single points of failure in the transport
Customer needs
Costs
Implementation scenarios (multiple providers, multiple paths,
etc.)
The type of oversubscription offered by the ISP
Point-to-point connections
GigabitEthernet connections
Ethernet rings
Ethernet over SONET (EoS)
RPR
Site-to-site VPNs
Remote access VPNs
SLA monitoring should take into consideration the metrics that define
service baselines and that relate to the following:
Ongoing diagnostics
Comparison between different service levels and results
Troubleshooting
Optimization
a. Linear
b. Hub-and-spoke
c. Full mesh
d. Ad-hoc
e. Partial mesh
f. Random
a. True
b. False
3. What is the formula that can be used to calculate the total number
of connections necessary in a full-mesh topology with “n” devices?
a. n
b. n*(n-1)
c. n*n
d. n*(n-1)/2
a. SONET
b. MPLS
c. SDH
d. ATM
5. What is the number of channels on which CWDM is
transmitted?
a. 16
b. 160
c. 32
d. 128
6. SONET/SDH uses TDM for framing the data and voice traffic on a
single wavelink through fiber.
a. True
b. False
a. 16
b. 160
c. 32
d. 128
a. SONET
b. SDH
c. IPsec
d. RPR
e. MPLS
a. Frame Relay
b. SDH
c. SONET
d. Metro Ethernet
a. True
b. False
a. EPL
b. EMS
c. EWS
d. ERS
e. ERMS
12. The ISAKMP/IKE phase is composed of the ISAKMP SA setup and the
IPsec SA negotiation.
a. True
b. False
a. Full-mesh VPLS
b. Partial-mesh VPLS
c. Hierarchical VPLS
d. Hub-and-spoke VPLS
14. Which of the following are the main modes of operation for ISAKMP
phase 1 (choose two)?
a. Main mode
b. Normal mode
c. Slow mode
d. Aggressive mode
15. Which of the following are the main varieties of IPsec (choose
two)?
a. Full mesh
b. Site-to-site
c. ISAKMP
d. Remote access
16. The MPLS label is inserted between the Layer 2 and Layer 3
information.
a. True
b. False
17. Which of the following is a technology that helps automate the hub-and-spoke site-to-site VPN deployment process?
a. GET VPN
b. DMVPN
c. Easy VPN
d. SSL VPN
18. In which of the following MPLS VPN types will the ISP actually be
involved in the routing process?
a. Layer 2
b. Layer 3
c. Layer 4
19. Which of the following MPLS VPN types has the advantage of
supporting multiple protocols (not just IP)?
a. Layer 2
b. Layer 3
c. Layer 4
a. Transition Contract
b. Support Contract
Chapter 6 – Answers
1 – b, c, e
2–a
3–d
4–a
5–a
6–a
7–b
8–d
9–d
10 – b
11 – b, e
12 – a
13 – c
14 – a, d
15 – b, d
16 – a
17 – b
18 – b
19 – a
20 – c
This chapter will first analyze various Enterprise Data Center design
considerations, including the Core and Aggregation Layers, Layer 2
and Layer 3 Access Layers, and data center scalability and high
availability. Next, it will cover SAN and Cisco Nexus technologies.
The chapter will end with a discussion on the Enterprise Data Center
architecture, including some considerations regarding virtualization
technologies.
At the time of their first appearance, data centers were centralized and used mainframes to manage the data. Mainframes, in turn, were managed using terminals. Mainframes are still used in some modern data centers because of their resiliency, but they are quickly becoming legacy components.
Virtualization:
o Cisco Nexus 1000V virtual switch for VMware ESX delivers
per-virtual-machine visibility and policy control for SAN,
LAN, and unified fabric.
o Cisco Unified Computing System unifies the data center
resources into a single system that offers end-to-end
optimization for virtualized environments.
o Virtualization of SAN device contents helps converge multiple virtual networks.
o All of the features above lead to simplification in the
Enterprise Data Center architecture and a reduction in the
TCO.
o Unified fabric:
Unified fabric technologies include Fibre Channel
over Ethernet (FCoE) and Internet Small Computer
System Interface (iSCSI) and they usually offer 10
Gbps transfer rates.
Unified fabric is supported on Cisco Catalyst and
Nexus series (iSCSI). Cisco MDS storage series are
designed and optimized to support iSCSI.
Converged network adapters are required for FCoE.
FCoE is supported on VMware ESX.
Unified computing:
Cisco introduced the Unified Computing
System (UCS) as an innovative next-generation
Enterprise Data Center platform that converges
virtualization, processing, network, and storage
into a single system.
Unified computing allows the virtualization of
network interfaces on servers.
Unified computing increases productivity with rapid provisioning using service profiles.
Figure 7.2 above illustrates the Enterprise Data Center topology. From
top to bottom, the top layer includes virtual machines that are
hardware abstracted into software entities running a guest OS on top
of a hypervisor (resource scheduler). Unified computing resources
comprise the next layer, which contains the service profiles that map to the identity of the server and provide the following details:
Memory
CPU
Network interfaces
Storage information
Boot image
Server Considerations
Required power
Rack space needed
Server security
Virtualization support
Server management
The facility, spacing, and other considerations for the Enterprise Data
Center will determine where to position the equipment to provide
scalability. For example, the space available will determine the
number of racks for servers and the network equipment that can be
installed. An important factor that must be considered is floor loading
parameters.
Estimating the correct size of the data center will have a great
influence on costs, longevity, and flexibility. An oversized data center
will result in unnecessary costs, while an undersized data center will
not satisfy computing, storage, and networking requirements and will impact productivity. Factors that must be considered include the following:
Available space
The number of servers
The amount of storage equipment
The amount of network equipment
The number of employees served by the Data Center
infrastructure
The space needed for non-infrastructure areas: storage rooms,
office space, and others
The weight of the equipment
Floor load (this determines how many devices should be
installed)
Heat dissipation
Cooling capacity (considering required temperature and
humidity levels)
Cabling needs
Power capacity (including consumption, type, and UPS/PDU)
The power source in the Enterprise Data Center will be used to power
servers, storage, network equipment, cooling devices, sensors, and
other additional systems. The most power-consuming systems are
servers, storage, and cooling. The process of determining the power
requirements for the data center equipment is difficult because of the
many variables that must be taken into consideration. Power usage is
greatly impacted by the server load.
Servers
Storage
Network devices
UPS
Generators
HVAC
Lighting
Other cooling techniques that can be used for equipment that does not
exhaust heat to the rear include:
The main advantages of using fiber-optic cables are that they are less susceptible to external interference and that they can operate over greater distances than copper cables can. The main disadvantages of using fiber-optic cables are that specific adapters might be necessary when connecting to device interfaces and that they are more difficult to install and repair. The cabling must be well organized for ease of maintenance in the passive infrastructure. Cabling infrastructure usability and simplicity is influenced by the following:
Number of connections
Media selection
Type of cabling termination organizers
Difficult troubleshooting
Downtimes
Improper cooling
Flexibility
Maintainability
Resilience
Performance
Scalability
Low-latency switching
10-GigabitEthernet connections
Scalable IP multicast support
The Enterprise Data Center Access Layer usually offers Layer 2 and
Layer 3 Access Layer services, including:
Separating the Enterprise Data Center Core Layer and the Enterprise
Campus Core Layer will isolate the Enterprise Campus Distribution
Layer from the Enterprise Data Center Aggregation Layer because
different types of policies and services might be provided at the
Enterprise Campus Distribution Layer than at the Enterprise Data
Center Aggregation Layer. This is important in terms of
administration and policies (e.g., QoS, troubleshooting, maintenance,
and security features) because there may be one team dealing with the
Enterprise Campus Distribution Layer and another team dealing with
the Enterprise Data Center Aggregation Layer. In addition, the
possibilities for scalability and future growth should be considered.
Having a separate Enterprise Data Center Core Layer makes it easier
to expand into the Aggregation Layer in the future.
Regarding the traffic flow across the data center, data sessions move
between the Enterprise Campus Core Layer and the Enterprise Data
Center Aggregation Layer submodule that holds all of the services
discussed earlier, so the Enterprise Data Center Core Layer is actually
combining the Aggregation Layer’s traffic flows into optimized paths
back to the Enterprise Campus Core Layer. In addition, server-to-
server traffic (e.g., frontend to backend server traffic) will usually be
contained within the Aggregation Layer submodule. However,
replication traffic and backup traffic could travel between the
Aggregation Layer services through the Enterprise Data Center Core
Layer.
OSPF and EIGRP are two of the routing protocols recommended for data center environments because of their ability to scale to a large number of routers and achieve fast convergence times.
OSPF Design Recommendations
Firewalls
IPS/IDS
Monitoring
SSL offloading
Content switching
Network analysis
Load balancing
The Aggregation Layer will also provide the primary and secondary
default gateway addresses for all of the servers in the Access Layer.
Most of the time, this is accomplished using a First Hop Redundancy
Protocol like HSRP or GLBP to ensure redundancy. The Aggregation
Layer will also leverage some of the new virtualization and integrated
services, such as virtual firewalls and security contexts in ASA
security modules, virtual sensors on the IPS sensors, and server load
balancing functionality.
Implementing failover techniques in the Aggregation Layer should
also be considered, either in an active/active or active/standby
configuration. The active/standby redundancy technique is
recommended for content switching services or when dealing with older Catalyst Firewall Services Modules (FWSMs). When using ASA modules with security
contexts, the recommended approach is having active/active failover.
Layer 2 Design
The Layer 2 looped design is the most commonly used Layer 2 design
and it comes in two forms:
One of the main differences between the triangle loop topology and
the square loop topology, as shown in Figures 7.7 and 7.8 above,
respectively, is the trunk link between the Access Layer switches,
which will involve leveraging 10-GigabitEthernet port density on the
Aggregation multilayer switches. The advantage of this option is that
the square design will accommodate more Access Layer switches, so
this topology design might have to be considered in an environment
with many Access Layer switches. In the triangle loop design
scenario, the primary Aggregation Layer device will still need to be
used to offer active services (e.g., STP Root Bridge and HSRP),
connecting via an 802.1Q trunk to the standby Aggregation Layer device.
Layer 2 protocols like RSTP and MSTP can be used to ensure fast
convergence and additional Layer 2 features such as BPDU Guard,
Root Guard, Loop Guard, and UDLD can also be used. Using the
square loop design will achieve active/active failover at the
Aggregation Layer, which provides great benefits. The square loop
design is not as common as the triangle loop design, but it is being
used more and more for Enterprise Data Center solutions.
The Layer 2 FlexLink topology is the third major design type and it is
an alternative to looped Access Layer technology. FlexLink
technology offers an active/standby pair of uplinks on a common
Access Layer switch, so this design type involves flexible uplinks
(dotted lines) going up from the Access Layer switches and
connecting to the Aggregation Layer devices, as depicted in Figure
7.11 below:
The failover from the active to the standby link occurs in a one- to two-second time period, so this technology does not provide a convergence time comparable to the one offered by RSTP. FlexLinks
operate using only a single pair of links and the Aggregation Layer
switches are not aware of the FlexLink configuration, so from their
perspective the links are up and the STP logical and virtual ports are
active and are allocated. FlexLinks are supported on a wide variety of
Cisco platforms, as long as they are using IOS release 12.2 or higher.
However, FlexLinks are not used very often and are an option suitable
only for small- to medium-sized organizations. Since FlexLinks
disable STP, the possibility for Layer 2 loops exists in certain
scenarios. In addition, Inter-Switch Link (ISL) scaling issues must be
considered in FlexLink environments, as opposed to situations using
802.1Q.
The table below, which has been provided by Cisco, summarizes the
main advantages of the various Access Layer designs presented
previously.
On the other hand, a Layer 3 Access Layer design offers the following
benefits:
Pass-through model
Integrated Ethernet switches model
Usually, blade servers are used to replace older server farms that used
towers or other types of rack-mount servers, where density is a big
issue. Another situation in which blade servers are needed is when
new high-end applications that require clustering are used. With blade
servers, the data center managers can actually lower their TCO and
save a great amount of rack space. Blade servers represent a huge area
of the server market and various vendors, such as Cisco, Dell, HP, and
IBM, offer this type of solution.
Implementing blade server farms presents some interesting
challenges:
One of the key issues with scalability involves the uplinks and oversubscription. The correct amount of uplink bandwidth on the Access Layer switches should be known. The oversubscription ratio can be calculated by taking the total number of server GigabitEthernet connections and dividing it by the total aggregated uplink bandwidth on the Access Layer switch. An example of this is a Cisco Catalyst 6509 switch with four 10-GigabitEthernet uplink connections, which supports 336 server Access Layer ports. With 40 Gbps of uplink bandwidth for 336 GigabitEthernet server ports, this implies an oversubscription ratio of 336/40, or 8.4:1.
The STP design is a huge issue because this is the feature that
helps deal with a large number of VLANs in the organization and
it will determine the ability to extend the VLANs across the data
center. A large number of data centers may have to be
consolidated into a few data centers to meet the Layer 2
implementation needs.
Determining the need for VLAN extension in all areas of the
network. Some scenarios might require access to everything
from everywhere and this should be determined in the
Enterprise Data Center design.
Determining the number of necessary Access Layer switches
and the total number of possible Access Layer switches
supported by the Aggregation Layer. Port density is a key issue
here.
Determining whether to use RSTP or Multiple Spanning Tree
(MST) as the Spanning Tree Protocol mechanism. They both
have quick convergence times but RSTP is the most common.
MST is usually used in ISP environments but it is not very
common in the Enterprise Data Center because it has issues
with some service modules (e.g., ASA in transparent mode). If
STP scaling issues cannot be supported with RSTP, MST should
be implemented because it supports a large number of VLANs
and port instances and it is used in some of the largest data
centers in the world.
Preventing and solving scalability issues by adding Aggregation
Layer services (more than two Aggregation Layer services can
be used, as mentioned in previous sections).
Performing manual pruning on trunk links to reduce the total
number of necessary active logical and virtual port instances.
Determining the optimal ratio of VLAN to HSRP instances. There is typically a 1:1 ratio of VLANs to HSRP instances and Cisco recommends having a maximum of 500 HSRP instances on the Catalyst 6500 Supervisor Engine 720. Sometimes, the number of VLANs will be determined by the number of HSRP instances (see the sketch after this list).
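As a minimal per-VLAN sketch of the 1:1 VLAN-to-HSRP mapping (the VLAN number, addresses, group number, and priority are placeholders):

interface Vlan10
 ip address 10.1.10.2 255.255.255.0
! the virtual gateway address shared by the Aggregation Layer pair
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt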
Host Bus Adapters (HBAs), which are very fast adapters that
connect to the disk subsystem. HBAs connect via
GigabitEthernet or 10-GigabitEthernet copper or optical links.
The communication subsystem (this evolved from IDE and ATA
connectivity to Fibre Channel communications), which allows a
large number of devices to connect to the SAN at higher speeds
over long distances.
The actual storage system, which consists of an array of hard
drives that can be considered “just a bunch of disks” (JBOD).
Storage arrays are groups of devices that provide mass storage
and the physical disk space can be virtualized into logical
storage structures.
RAID 0:
o Functions by striping across multiple disks
o Ensures increased performance
o Offers no redundancy
RAID 1:
o Functions by mirroring the contents on multiple disks
o Usually offers the same performance as a single disk
o Offers increased redundancy
RAID 3: Allows for error detection
RAID 5: Allows for error correction and performance enhancement
Cisco Nexus
Nexus devices run NX-OS, which is based on the Cisco Storage Area
Network Operating System (SAN-OS) and which operates on
different platforms, including:
Nexus 1000V
Nexus 2000
Nexus 4000
Nexus 5000
Nexus 7000
Cisco MDS 9000
Cisco Unified Computing System (CUCS)
Some of the most important enhancements NX-OS has over IOS from
a configuration standpoint include the following:
The login process places the user directly into EXEC mode.
Because of the platform’s modularity, features can be completely enabled and disabled using the “feature” command (this also enhances security); see the sketch after this list.
Commands can run from any command mode.
The interfaces are labeled simply as “Ethernet.”
The default STP mode is Rapid-PVST+.
The “write memory” and “alias” commands have been removed.
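As a minimal NX-OS sketch (the chosen features are arbitrary examples), the modular feature model works as follows:

! enable only the features that are needed
feature interface-vlan
feature hsrp
feature lacp
! disabling unused features also reduces the attack surface
no feature telnet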
Virtualization
Virtualization Considerations
Virtual machines
Virtual switches
Virtual local area networks
Virtual private networks
Virtual storage area networks
Virtual switching systems
Virtual routing and forwarding
Virtual port channels
Virtual device contexts
Cisco ASA
Cisco ACE
Cisco IPS
Cisco Nexus series
Server Virtualization
Memory and CPU performance are the primary hardware factors that need to be considered in a CUCS environment, as these factors can become bottlenecks for the solution. Cisco invented the Extended Memory Technology incorporated in some CUCS platforms, which allows the mapping of physically distinct DIMMs to a single logical DIMM, as seen by the processor. This eliminates the memory bottleneck issues, as extended memory servers with a large number of DIMMs can provide hundreds of gigabytes of memory that can be mapped to a single resource.
Chapter 7 – Summary
Flexibility
Maintainability
Resilience
Performance
Scalability
The Enterprise Data Center Access Layer usually offers Layer 2 and
Layer 3 Access Layer services, including:
OSPF and EIGRP are two of the routing protocols recommended for use in data center environments because of their ability to scale to a large number of routers and achieve fast convergence times. The Layer 2 access design approach includes three categories: looped designs, loop-free designs, and FlexLink designs.
The STP design is a huge issue because this is the feature that
will help deal with a large number of VLANs in the organization
and it will determine the ability to extend the VLANs across the
Enterprise Data Center.
Determining the need for VLAN extension in all areas of the
network.
Determining the number of necessary Access Layer switches
and the total number of possible Access Layer switches
supported by the Aggregation Layer.
Determining whether to use RSTP or MST as the Spanning Tree
Protocol mechanism.
Preventing and solving scalability issues by adding Aggregation
Layer services.
The possibility of performing manual pruning on trunk links to
reduce the total number of necessary active logical and virtual
port instances.
Determining the optimal ratio of VLANs to HSRP instances (see the sketch below).
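To make the VLAN-to-HSRP relationship concrete, here is a minimal IOS-style sketch, assuming one HSRP group per server VLAN at the Aggregation Layer (the VLAN numbers, addresses, and priorities are invented for illustration):

    interface vlan 10
     ip address 10.1.10.2 255.255.255.0
     ! One HSRP group per VLAN; the virtual IP is the servers' default gateway
     standby 10 ip 10.1.10.1
     standby 10 priority 110
     standby 10 preempt
    interface vlan 20
     ip address 10.1.20.2 255.255.255.0
     standby 20 ip 10.1.20.1

Each additional VLAN adds another HSRP instance, which is why the VLAN count and the HSRP instance count must be planned together.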
RAID 0 (striping)
RAID 1 (mirroring)
RAID 3 (error detection)
RAID 5 (error correction)
Cisco Nexus technology was developed to unify the LAN and SAN
infrastructures. While most data centers use separate LANs and SANs
with separate switches and network adapters in the servers, Cisco
Nexus technology allows for the implementation of a unified fabric
Enterprise Data Center architecture, with a consolidated LAN and
SAN at the Access Layer.
Cisco’s line of products for server virtualization is called the Cisco Unified Computing System (CUCS), and one of the major advancements Cisco introduced in this area is a consistent I/O architecture that provides uniform hypervisor support across all servers in a resource pool. CUCS products facilitate advanced management and configuration of throughput across all of the servers.
a. Flexibility
b. Easy to implement
c. Resilience
d. Scalability
f. Performance
a. True
b. False
a. Mainframe services
b. Provides STP processing
c. Provides redundancy
d. Layer 2 access
f. Blade chassis integration
a. Loop designs
b. Circle designs
c. Linear designs
d. Loop-free designs
e. FlexLink designs
a. True
b. False
b. Loop-free U
c. Loop-free square
d. Loop-free inverted U
d. Layer 3 design
10. The Enterprise Data Center Core Layer is the same as the
Enterprise Campus Core Layer.
a. True
b. False
a. True
b. False
a. CDP
b. AFT
c. ALB
d. STP
e. ABL
f. SFT
a. SHA
b. BHA
c. HBA
d. HSA
a. RAID 0
b. RAID 1
a. True
b. False
17. Which of the following are the most important SAN design
considerations (choose two)?
a. Port density
b. Physical location
c. SAN vendor
d. Topology design
a. Square topology
a. vEC
b. vPC
c. LACP
d. VDC
b. Cisco ACE
c. Cisco UCS
d. Cisco ASA
Chapter 7 – Answers
1 – a, c, d, f
2 – a
3 – b, c, e
4 – c
5 – a, d, e
6 – a
7 – b, d
8 – c
9 – d
10 – b
11 – a, c, d
12 – a
13 – b, c, f
14 – c
15 – b
16 – a
17 – a, d
18 – c, d
19 – b
20 – c
Redundancy
Technology
People
Processes
Tools
Redundancy
Technology
Startup configurations
Startup variables
Running configuration
Layer 2 protocol states for ports and trunks
Layer 2 and Layer 3 tables (e.g., FIB, adjacency tables, and MAC
tables)
Access Control Lists
QoS tables
People
Reliable
Communicative
Consistent
Skilled in creating documentation
Skilled in troubleshooting
Some of the most important actions and tools that should be used in an
e-commerce environment include:
Performance monitoring
Performance reporting
Reporting on trends (observing anomalies from baselines)
Observing traffic drops, packet loss, latency, and jitter
Investigating case studies before launching an e-commerce
solution and perhaps implementing a prototype in a lab
environment
Documentation tools, including high-level and low-level
network diagrams. Documentation should be kept up to date to
reflect the most recent e-commerce submodule architecture
and it should include reasons for choosing a particular network
design and implementation steps. All of the devices,
configuration, addressing information, and virtualization
aspects should be documented.
Web tier
Application tier
Database tier
The area containing the Aggregation Layer switches and the Access
Layer devices is considered the Web tier, which communicates with
the application tier through the second layer of firewalls, and the
application tier connects to the database tier through another layer of
firewalls (the third firewall layer). Again, the firewall sandwiching
mechanism with its three layers of firewalls can be seen in Figure 8.1
above.
Figure 8.2 – E-Commerce Submodule Design Using Application Gateways
Once the traffic gets past the Core Layer and the Aggregation Layer
into the access zone (the e-commerce LAN), in some architectures all
of the traffic between the firewall layers will actually go through
servers or will use servers as application gateways. The application-
specific gateway methodology (see Figure 8.2 above) increases
security because possible attackers must penetrate the firewall, as well
as the Web server operating system, to get to the middle layer of
firewalls. This method also avoids the need to operate firewalls from multiple vendors.
Routed mode
Transparent mode
The SLB market is quickly evolving, and over the past few years,
Cisco has had a few product lines that have offered SLB and Content
Load Balancing (CLB) services, including:
Router mode
Bridge mode
One-armed or two-armed mode
The load balancer device routes between the public and the private
networks and the servers will use the load balancer’s inside address as
their default gateway. Since the replies that come back from the
servers (responses to clients) pass through the load balancer and it
changes the server IP address to the appropriate address, the end-users
do not know there is a load balancer device in the path. This process is
similar to NAT technology. SLB router mode is easy to deploy, works with many server subnets, and is the recommended mode for the majority of appliance-based SLB deployments.
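A minimal IOS SLB sketch of router mode might look like the following (the farm name, virtual IP, and real server addresses are hypothetical):

    ip slb serverfarm WEBFARM
     ! NAT the real server addresses so clients only ever see the virtual IP
     nat server
     real 10.1.1.10
      inservice
     real 10.1.1.11
      inservice
    ip slb vserver WEB-VIP
     ! Clients connect to 203.0.113.10; the router balances across the farm
     virtual 203.0.113.10 tcp www
     serverfarm WEBFARM
     inservice

The servers would point their default gateway at the load balancer's inside address so that return traffic is translated on the way back to the client.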
SLB bridge mode is also called the “inline” mode and it works by
having the load balancer device operate as a transparent firewall. In
this situation, an upstream router is needed between the clients and the
load balancer, as shown in Figure 8.5 below:
Figure 8.5 – Server Load Balancing Design – Bridge Mode
This design method is seen most often with integrated load balancers,
such as the Cisco Content Switching Module or the Application
Control Engine in a 6500 or 7600 chassis. However, if these load
balancers are deployed in a redundant configuration, the network
designer must be aware of the implications of STP and how it will
affect the devices and the backend servers. It is typically easier to
configure the load balancer device in SLB routed mode because
troubleshooting STP can be very complicated.
The edge devices translate addresses using NAT to a block that both
ISPs will advertise for the company’s site. The edge routers will most
likely be advertising the specific block to both ISPs using BGP.
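As a hedged sketch of this advertisement (the AS numbers, prefix, and neighbor addresses are invented for illustration), the edge router might announce the NAT block to both ISPs like this:

    router bgp 65001
     ! Advertise the company's public NAT block; a matching local route must exist
     network 203.0.113.0 mask 255.255.255.0
     neighbor 198.51.100.1 remote-as 64500
     neighbor 192.0.2.1 remote-as 64501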
This design applies to very large e-commerce sites that offer mission-critical services, such as banks, brokerage firms, and other large organizations. The distributed data centers design goes beyond providing two chassis for failover flexibility; it involves two sites that together offer high availability while relaxing the uptime requirements for each individual site.
Web VLAN
Aggregation VLAN
Access VLAN
The Core Layer usually contains Cisco Catalyst 6500 devices and
ensures redundant connections to the ISP. The Core Layer Cisco 6500
chassis usually contains firewall service modules that move the
firewall perimeter to the Core Layer, making the Aggregation Layer a
trusted zone.
The Aggregation Layer supports the SLB submodules, and the default
gateway for the e-commerce servers is the virtual IP address on those
submodules, meaning all of the e-commerce traffic will pass through
the SLB submodules at the Aggregation Layer. The Access Layer
switches are usually Layer 2 devices and they connect the Web
servers, the application servers, and the database servers.
This design will use the same VLAN separation presented before (i.e.,
separate Web, application, and database VLANs), and the server’s
default gateway will still be the primary IP address on the
Aggregation Layer devices. The major difference from previous
design models is that policy-based routing or client-side NAT must be
used to redirect the outbound server traffic.
BGP tuning
Enhanced Object Tracking (EOT)
Performance Routing (PfR)
DNS tuning and Global Server Load Balancing (GSLB)
All of the ingress and egress traffic from all of the points in the
network to the ISP should be analyzed and this should be correlated
with failover behavior. With a single ISP, the Multi-Exit
Discriminator (MED) BGP attribute or BGP communities can be used
to communicate preferences for traffic flow from the Internet to the
organization. It is important to ensure that the ISPs are always
updating the route filters, and traffic monitoring and periodic testing
should be performed to make sure that the prefixes have not been
accidentally filtered.
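For example, a hedged IOS-style sketch of MED tuning over two links to a single ISP (the neighbor addresses and metric values are illustrative only):

    route-map PRIMARY-MED permit 10
     ! A lower MED makes this the preferred entry point from the ISP's side
     set metric 50
    route-map BACKUP-MED permit 10
     set metric 200
    router bgp 65001
     neighbor 198.51.100.1 remote-as 64500
     neighbor 198.51.100.1 route-map PRIMARY-MED out
     neighbor 198.51.100.5 remote-as 64500
     neighbor 198.51.100.5 route-map BACKUP-MED out

Because MED is only a suggestion to the neighboring AS, the ISP's own policies must also honor it for the preference to take effect.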
Prefix performance
Link load distribution
Link cost
Traffic type
Learn
Measure
Apply policy
Enforce
Verify
Chapter 8 – Summary
Redundancy
Technology
People
Processes
Tools
Reliable
Communicative
Consistent
Skilled in creating documentation
Skilled in troubleshooting
Performance monitoring
Performance reporting
Reporting on trends (observing anomalies from baselines)
Observing traffic drops, packet loss, latency, and jitter
Investigating case studies before launching an e-commerce
solution, and perhaps implementing a prototype in a lab
environment
Documentation tools
Web tier
Application tier
Database tier
Router mode
Bridge mode
One-armed or two-armed modes
Base design
Dual layer firewall design
SLB one-armed mode with dual layer firewall design
SLB one-armed mode with security contexts
Network designers should be aware of different methods for tuning the e-commerce
submodule. The most common optimization mechanisms include:
BGP tuning
Enhanced Object Tracking
Performance Routing
DNS tuning and Global Server Load Balancer
a. Flexibility
b. Redundancy
c. Vendors
d. Tools
e. Processes
f. Technology
a. True
b. False
a. HSRP
b. EOT
c. NSF
d. BGP
a. SSO
b. HSRP
c. QoS
d. NSF
e. EOT
a. True
b. False
a. SSO
b. NSF
c. HSRP
d. PfR
e. EOT
d. Documentation tools
a. Core
b. Presentation
c. Aggregation
d. Access
e. Session
a. True
b. False
a. Web
b. Application
c. Session
d. E-commerce
e. Database
a. True
b. False
a. Fake firewalls
b. Security contexts
c. Transparent firewalls
d. Redundant
contexts
14. What are the most common firewall deployment
modes (choose two)?
a. One-armed mode
b. Inline mode
c. Routed mode
d. Two-armed mode
e. Transparent mode
a. ASA
b. Aggregation Layer
switch
c. Server gateway
a. True
b. False
b. Three-armed
mode
c. Two-armed mode
d. DMZ mode
e. One-armed mode
f. Bridge mode
a. Bridge mode
b. Routed mode
c. One-armed mode
d. Two-armed mode
c. IP routing metrics
d. IP routing state
e. Line protocol
a. Link load
distribution
b. Prefix age
c. Link cost
d. Device type
e. Traffic type
Chapter 8 – Answers
1 – b, d, e, f
2 – a
3 – d
4 – c
5 – e
6 – b
7 – d
8 – a, b, d
9 – a, c, d
10 – a
11 – a, b, e
12 – b
13 – b
14 – c, e
15 – d
16 – a
17 – a, c, e, f
18 – a
19 – c, d, e
20 – a, c, e
Firewall designs
NAC appliance designs
IPS/IDS designs
Remote access VPN designs for the teleworker
Confidentiality
Integrity
Availability
A DoS attack will result in a resource being overloaded (e.g., disk space, bandwidth, memory, buffer overflow, or queue overflow), causing that resource to become unavailable for use. The effects can vary from blocking access to a particular resource to crashing a network device or server. There are many types of DoS attacks, such as ICMP attacks and TCP flooding.
Spoofing Attacks
Telnet Attacks
Other related threats in this area are generated using old unsecured
protocols like rlogin, rcp, or rsh that allow access to different systems.
These unsecured protocols should be replaced by secure alternatives such as Secure Shell (SSH) or Secure File Transfer Protocol (SFTP).
Password-Cracking Programs
Viruses
The replication process can spread across hard disks and it can infect
the entire operating system. Once a virus is linked to an executable
file, it will infect other files every time the host file is executed. There
are three major types of viruses, defined by where they occur in the
system:
MBR and boot sector viruses affect the boot sector on the physical
disk, rendering the operating system unable to boot. File viruses
represent the most common type of viruses and they affect different
types of files.
Stealth viruses
Polymorphic viruses
Stealth viruses use different techniques to hide the fact that a change to the disk drive was made. Polymorphic viruses are difficult to identify because they can mutate and change their size, thus avoiding detection by special software. When using virus detection software, the recommendation is to update it as often as possible so it remains capable of scanning for new forms of viruses.
Reconnaissance attacks
DoS and DDoS attacks
Traffic attacks
ICMP scanning
SNMP scanning
TCP/UDP port scanning
Application scanning
The scanning procedure can use simple tools (e.g., Ping or Telnet) but
it can also involve using complex tools that can scan the network
perimeter for vulnerabilities. The purpose of reconnaissance attacks is
to find the network’s weaknesses and then apply the most efficient
type of attack.
Application Vulnerabilities
The applications and individual host machines are often the ultimate target of the attacker or malicious user, whose goal is to gain sufficient permissions to read sensitive data, write changes to the hard drive, or otherwise compromise data confidentiality and integrity.
Designing Firewalls
Virtual Firewalls
Security policies
Assigned interfaces
NAT rules
Access Control Lists
Administrators
IPsec VPN
Secure Sockets Layer (SSL) VPN
Dynamic routing protocols
Context 1 is the active firewall for the left device (FW 1).
Context 3 is the standby firewall for the left device (FW 1).
Context 2 is the standby firewall for the right device (FW 2).
Context 4 is the active firewall for the right firewall device (FW
2).
This scenario provides two virtual firewalls, with the physical devices
each being partitioned in one active and one standby context. Contexts
1 and 3 in Figure 9.4 are logically grouped across the two physical
devices, just like Contexts 2 and 4 are. In many situations, the security
contexts feature will serve only as a mechanism to create active/active
failover configurations, but on high-end devices, this functionality can
be used for both creating active/active topologies and assigning a set
of VLANs to each virtual firewall.
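A hedged ASA-style sketch of carving a physical firewall into contexts for active/active failover (the context names, interfaces, and file names are hypothetical):

    ! Multiple-context mode must be enabled first (requires a reload)
    mode multiple
    failover group 1
     primary
    failover group 2
     secondary
    context CTX1
     allocate-interface GigabitEthernet0/0
     allocate-interface GigabitEthernet0/1
     config-url disk0:/ctx1.cfg
     ! This context is active on the unit owning failover group 1
     join-failover-group 1
    context CTX2
     allocate-interface GigabitEthernet0/2
     allocate-interface GigabitEthernet0/3
     config-url disk0:/ctx2.cfg
     join-failover-group 2

Splitting the contexts across the two failover groups is what lets each physical device carry an active context while standing by for the other.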
Private VLANs
To build a good trust model between servers and the DMZ, consider
separating the servers so that if one of the servers is compromised
(e.g., by a worm or a Trojan attack), it will not affect other servers in
the same subnet. PVLANs function by creating secondary VLANs
within a primary VLAN. The secondary VLANs can be defined based
on the way the associated port is defined on the switch:
Isolated ports
Community ports
Promiscuous ports
Note: Promiscuous ports are those that allow the sending and
receiving of frames from any other port on the VLAN and they are
usually connected to a router or other default gateway.
As illustrated in Figure 9.5 above, the primary VLAN usually
represents the server farm and this can be divided into secondary
VLANs for private traffic flows. Secondary VLANs are logical
VLANs within a VLAN and they are similar to subinterfaces on a
physical interface. Community ports can communicate with ports
configured with the same community or with promiscuous ports but
they cannot communicate with ports from other communities or with
isolated ports. On the other hand, isolated ports can communicate only
with the upstream promiscuous ports to reach the default gateway.
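A hedged Catalyst-style sketch of the primary/secondary structure described above (the VLAN numbers and interfaces are illustrative only):

    vlan 100
     private-vlan primary
     private-vlan association 101,102
    vlan 101
     ! Community: members talk to each other and to promiscuous ports
     private-vlan community
    vlan 102
     ! Isolated: members talk only to promiscuous ports
     private-vlan isolated
    interface GigabitEthernet0/1
     switchport mode private-vlan host
     switchport private-vlan host-association 100 101
    interface GigabitEthernet0/24
     ! Uplink toward the router/default gateway
     switchport mode private-vlan promiscuous
     switchport private-vlan mapping 100 101,102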
Zone-Based Firewalls
Trusted to DMZ
DMZ to Trusted
Trusted to Untrusted
Untrusted to Trusted
DMZ to Untrusted
Untrusted to DMZ
After the zone pairs are defined, different policies can be applied to them, and once the modular policies have been created and the zone-pair relationships have been defined, any interface placed into a zone inherits the relevant policy automatically. With zone-based firewalls, security policy is no longer limited to a single ACL applied per interface, per direction, per protocol.
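The following IOS-style sketch shows the basic building blocks, assuming a simple Trusted-to-Untrusted pair (the zone names, class names, and interfaces are invented for illustration):

    class-map type inspect match-any TRUSTED-TRAFFIC
     match protocol http
     match protocol https
     match protocol dns
    policy-map type inspect TRUSTED-TO-UNTRUSTED
     class type inspect TRUSTED-TRAFFIC
      ! Stateful inspection allows return traffic automatically
      inspect
    zone security TRUSTED
    zone security UNTRUSTED
    interface GigabitEthernet0/0
     zone-member security TRUSTED
    interface GigabitEthernet0/1
     zone-member security UNTRUSTED
    zone-pair security TRUST-TO-UNTRUST source TRUSTED destination UNTRUSTED
     service-policy type inspect TRUSTED-TO-UNTRUSTED

Any new interface added to the TRUSTED zone would immediately be governed by the same inspection policy, which is the modularity benefit discussed above.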
Modularity
Flexibility
Granularity
NAC Services
The modern Cisco NAC solution includes the Identity Services Engine (ISE), which uses Cisco network devices to extend access enforcement throughout the network and provides the following benefits:
Improved visibility and control over all user activity and devices
Consistent security policy across the enterprise infrastructure
Increased efficiency by automating labor-intensive tasks
When placing the IPS sensor on the Enterprise Network, there are several options to choose from, as depicted in Figure 9.8 above.
Advanced VPN
The most common VPN solutions deployed in modern networks
include:
SSL VPN
IPsec VPN
DMVPN
GET VPN
Clientless access
Thin client (port forwarding) access
Full tunnel access
Clientless access mode involves the user accessing corporate resources through a Web browser that supports SSL certificates on different operating systems (e.g., Windows, Mac OS, or Linux). The user does not need to install any software client and has access to Web-enabled applications and file sharing services (using NFS or CIFS). The gateway performs address and protocol conversion, content parsing, and rewriting.
The thin client access method behaves differently than the clientless
access mode in that it uses a Java applet and performs TCP port
forwarding so other services can be used in addition to Web-enabled
applications. TCP port forwarding with static port mapping extends
application support beyond Web-enabled applications, so SSH, Telnet,
IMAP, POP3, and other protocols can be used.
The full tunnel access mode offers extensive application support and user access by downloading either the Cisco SSL VPN Client or the newer Cisco AnyConnect Client. The VPN clients can be loaded through Java or ActiveX, and they can operate in two modes.
Full tunnel access mode offers the most access of all access methods
because it supports all IP-based applications, not just TCP or HTTP as
clientless access mode does. This functions much like IPsec remote
access VPN.
The next issue network designers must analyze involves the places
where the VPN devices and the firewalls will be installed. Unlike
traditional designs, which include separate VPN and firewall devices,
in modern networks the VPN concentrator and firewall are usually
integrated into the same platform, such as an ASA device.
As depicted in Figure 9.9 above, the parallel design places the firewall
logically in parallel with the VPN appliance, behind the edge router.
The enterprise servers and services will be placed behind the
VPN/firewall layer. The exact design depends on the submodules in
use (e.g., e-commerce, WAN services, etc.); however, the firewall
policies need to limit traffic that comes into the VPN termination
device.
With the inline option, the SSL VPN gateway/concentrator and the
firewall are also placed behind the router and in front of the servers,
but the difference from the previous model is that the firewall and
VPN devices are placed in line, as depicted in Figure 9.10 above. This
is a viable option that is suitable for small- to medium-sized
businesses.
Cisco 7600/6500
Site-to-site IPsec VPN is an overlay solution that can be implemented
across multiple types of networks:
Some of the most popular designs for placing the VPN devices on the
network include:
Parallel design
DMZ design
Integrated design
The parallel design, depicted in Figure 9.12 above, implies placing the
firewall and the VPN concentrator between the edge router and the
Enterprise Campus module. The VPN device also connects to the IPS
sensor, which usually has redundant connections to the LAN switch.
The main advantages of this solution are that it is easy to implement and it provides a high availability scenario that can include active/active or active/standby failover mechanisms. One drawback is the lack of a centralized location to perform logging and content inspection, and the major disadvantage is that the IPsec decrypted traffic is not inspected by the firewall.
Firewall services
IPS services
Site-to-site and remote access VPN termination services
In many modern networks, the spoke sites are connected using DSL
and cable low-cost subscriptions, where the ISP dynamically assigns
them IP addresses. DMVPN can automatically create tunnels between
these sites, even though many devices are behind NAT or PAT.
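A hedged IOS-style sketch of a DMVPN hub tunnel (the addresses, NHRP network ID, and IPsec profile name are illustrative, and the profile itself is assumed to be defined elsewhere):

    interface tunnel0
     ip address 10.0.0.1 255.255.255.0
     ! NHRP lets spokes with dynamic ISP-assigned addresses register with the hub
     ip nhrp map multicast dynamic
     ip nhrp network-id 1
     tunnel source GigabitEthernet0/0
     ! mGRE: a single multipoint tunnel interface serves all spokes
     tunnel mode gre multipoint
     tunnel protection ipsec profile DMVPN-PROF

Spokes register their current public addresses via NHRP, which is what allows tunnels to be built automatically even when the spokes sit behind NAT or PAT.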
The key servers at the hub site automatically manage different types of keys (e.g., the Traffic Encryption Key (TEK) and the Key Encryption Key (KEK)), and secure communication channels are
established with the spoke sites without using tunnels. The remote
sites can register or refresh their keys with the closest server in the
group and the re-keying process is performed on a regular basis,
according to the IPsec policy and the security association validity
period.
Security Management
Reconnaissance
Unauthorized access
Denial of Service
The Cisco network designer may or may not have a role in creating
the corporate security policy. Every organization, regardless of size,
should have some form of written security policies and procedures,
along with a plan to enforce those policies and a disaster and recovery
plan.
1. Risk assessment
2. Determine and develop the policy
3. Implement the policy
4. Monitor and test security
5. Re-assess and re-evaluate
Security policy
Acceptable use policy
Personnel policies and procedures
Access control policy
Incident handling
Disaster recovery plan
The acceptable use policy and the personnel policies and procedures
cover the ways in which individual users and administrators use their
access privileges. The access control policy involves password and
documentation control policies, and incident handling describes the
way a possible threat is handled to mitigate a breach in the
organization’s security. The disaster recovery plan is another
document that should be included in the security policies and it should
detail the procedures that will be followed in case of a total disaster,
including applying backup scenarios.
Physical security
Authentication
Authorization
Confidentiality
Data integrity
Management and reporting
Guidelines
Processes
Standards
Acceptable use policies
Architectures and infrastructure elements used (e.g., IPsec or
802.1x)
Granular areas of the security policy, such as the Internet use
policy or the access control policy
The security policy also creates a security baseline that will allow
future gap analysis to be performed to detect new vulnerabilities and
countermeasures. The most important aspects covered by the written
security policies and procedures are as follows:
The first step is to identify and classify the organizational assets and
assign them a quantitative value based on the impact of their loss. The
next step is determining the threats to those assets, because threats
only matter if they can affect specific assets within the company. One
company may assign a higher priority to physical security than to
other security aspects (e.g., protecting against reconnaissance attacks).
Severity
Probability
Control
The three components are used to develop a risk index (RI), which uses the following formula:

RI = (Severity factor × Probability factor) / Control factor

where higher severity and probability values raise the index and a stronger level of control lowers it.
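To illustrate the arithmetic (the scale and values here are invented for illustration), assume each factor is rated from 1 to 5: a threat with severity 4 and probability 3 against an asset with a control level of 2 yields RI = (4 × 3) / 2 = 6, while improving the control level to 4 halves the index to 3.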
1. Secure
2. Monitor
3. Test
4. Improve
Trust and identity management is another key part of the Cisco Self-
Defending Network initiative. This is a critical aspect for developing
secure network systems. Trust and identity management states who
can access the network, what systems can access the network, when
and where the network can be accessed, and how the access can occur.
It also attempts to isolate infected machines and keep them off the
network by enforcing access control, thus forcing the infected
machines to update their signature databases and their applications.
Trust
Identity
Access control
Devices can be grouped into domains of trust that can have different
levels of segmentation. The identity aspect determines who or what
accesses the network, including users, devices, or other organizations.
The authentication of identity is based on three attributes that make
the connection to access control.
Secure Connectivity
Cisco ASA devices, including the 5500 family (ASA 5510, 5520,
5540, and so on)
Routers using IOS security feature sets that include basic
firewalls, zone-based firewalls, IPS functionality, IPsec VPNs,
DMVPNs, or SSL VPNs for Web-based clients.
Cisco Catalyst switches with firewalls, IDSs, or VPN modules and
other Layer 2 security features (e.g., 802.1x port-based
authentication). The Cisco Catalyst 6500 series switch is a
modular switch that supports a wide variety of service modules
that help enhance network security. Examples of these modules
include Cisco Firewall Services Module (FWSM), Cisco Intrusion
Detection System Services Module (IDSM-2), and the Cisco SSL
Services Module.
Chapter 9 – Summary
Confidentiality
Integrity
Availability
Security policies
Assigned interfaces
NAT rules
ACLs
Administrators
The modern Cisco NAC solution includes the Identity Services Engine (ISE), which uses Cisco network devices to extend access enforcement throughout the network.
Clientless access
Thin client (port forwarding) access
Full tunnel access
Parallel design
DMZ design
Integrated design
a. Low cost
b. Confidentiality
c. Availability
d. Free upgrades
e. Integrity
a. DoS attacks
b. OOB attacks
c. Spoofing
d. Viruses
e. Routing
f. Trojans
a. True
b. False
a. Spoofing
b. Logging
c. Query
d. Flooding
a. ICMP scanning
b. SNMP scanning
c. Syslog scanning
d. BGP scanning
e. TCP scanning
a. VPN configuration
b. Security policy
c. NAT rules
d. SSL VPN
e. Assigned interfaces
a. LLDP
c. Virtual link
a. Isolated
b. Promiscuous
c. Voice
d. Community
e. Independent
11. Which of the following private VLAN port types is usually connected to a router?
a. Isolated
b. Promiscuous
c. Voice
d. Community
e. Independent
a. SNMP
b. RIP
c. CBAC
d. CEF
b. False
a. SNMP
b. RIP
c. RTP
d. NAC
e. NTP
b. Layer 3 Out-of-Band
c. Layer 4 In-Band
a. ISP
b. ISE
c. IPS
d. ISA
a. NAC framework
b. NAC router
d. NAC appliance
a. 4200/4300
b. 6500
c. 7200
d. 3800
a. Parallel design
b. Inline design
c. Out-of-Band design
d. DMZ design
e. Campus design
21. Which of the following devices can integrate firewall, VPN, and IPS functionalities?
a. ISR router
b. Catalyst switch
c. ISE
d. ASA device
Chapter 9 – Answers
1 – b, c, e
2 – a, c, d, f
3 – a
4 – d
5 – a, b, e
6 – b, c, e
7 – a, d, f
8 – b, d, e
9 – a, b, d
10 – b
11 – c
12 – c
13 – b
14 – d
15 – c
16 – b
17 – a, d
18 – a
19 – a, b, e
20 – d