Cisco Certified DevNet Associate
Cisco Press
Published by:
Cisco Press
ISBN-13: 978-0-13-664296-1
ISBN-10: 0-13-664296-9
Trademark Acknowledgments
All terms mentioned in this book that are known to be
trademarks or service marks have been appropriately
capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to
the accuracy of this information. Use of a term in this book
should not be regarded as affecting the validity of any
trademark or service mark.
Special Sales
For information about buying this title in bulk quantities, or
for special sales opportunities (which may include electronic
versions; custom cover designs; and content particular to your
business, training goals, marketing focus, or branding
interests), please contact our corporate sales department at
corpsales@pearsoned.com or (800) 382-3419.
Composition: codeMantra
Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Europe Headquarters
Cisco Systems International BV Amsterdam,
The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax
numbers are listed on the Cisco Website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco
and/or its affiliates in the U.S. and other countries. To view a list of Cisco
trademarks, go to this URL: www.cisco.com/go/trademarks. Third party trademarks
mentioned are the property of their respective owners. The use of the word partner
does not imply a partnership relationship between Cisco and any other company.
(1110R)
Ashutosh Malegaonkar:
Last but not least, I sincerely thank Susie Wee for
believing in me and letting me be part of DevNet since the
very early days of DevNet.
This book would not have been written if it hadn’t been for the
team of amazing people at Cisco Press; you guys make us
sound coherent, fix our silly mistakes, and encourage us to get
the project done! James, Ellie, and Brett are the best in the
industry. Thanks as well to our tech editors, John McDonough
and Bryan Byrne, for making sure our code is tight and works.
Ashutosh Malegaonkar:
Thanks to the entire Cisco DevNet team for being the soul of
the program. Adrian—we did it! A special thanks to Susie Wee
for the support and encouragement from day one. This being
the first for me, thanks to Jason and Chris for the mentorship;
Ellie Bru for keeping up with my novice questions; and finally
John McDonough and Bryan Byrne for the excellent technical
reviews.
Braces within brackets ([{ }]) indicate a required choice within an optional
element.
Readers of this book can expect that the blueprint for the
DevNet Associate DEVASC 200-901 exam tightly aligns with
the topics contained in this book. This was by design.
Candidates can follow along with the examples in this book by
utilizing the tools and resources found on the DevNet website
and other free utilities such as Postman and Python.
This book is targeted at learners who are tackling these topics for the first time, as well as those who wish to enhance their network programmability and automation skill set.
Helping you discover which test topics you have not mastered
Supplying exercises and scenarios that enhance your ability to recall and
deduce the answers to test questions
Note that if you buy the Premium Edition eBook and Practice
Test version of this book from Cisco Press, your book will
automatically be registered on your account page. Simply go
to your account page, click the Registered Products tab, and
select Access Bonus Content to access the book’s companion
website.
Print book: Look in the cardboard sleeve in the back of the book for a piece
of paper with your book’s unique PTP code.
Premium Edition: If you purchase the Premium Edition eBook and Practice
Test directly from the Cisco Press website, the code will be populated on your
account page after purchase. Just log in at www.ciscopress.com, click Account
to see details of your account, and click the Digital Purchases tab.
Amazon Kindle: For those who purchase a Kindle edition from Amazon, the
access code will be supplied directly from Amazon.
Other Bookseller eBooks: Note that if you purchase an eBook version from
any other source, the practice test is not included because other vendors to date
have not chosen to vend the required unique access code.
Once you have the access code, to find instructions about both
the PTP web app and the desktop app, follow these steps:
If you want to use the web app only at this point, just navigate
to www.pearsontestprep.com, establish a free login if you do
not already have one, and register this book’s practice tests
using the registration code you just found. The process should
take only a couple of minutes.
Note
Amazon eBook (Kindle) customers: It is easy to miss
Amazon’s email that lists your PTP access code. Soon after
you purchase the Kindle eBook, Amazon should send an
email. However, the email uses very generic text and makes
no specific mention of PTP or practice exams. To find your
code, read every email from Amazon after you purchase the
book. Also do the usual checks for ensuring your email
arrives, like checking your spam folder.
Chapter 5, “Working with Data in Python”: This chapter covers the various ways you can input data into your Python program, parse data, and handle errors.
Chapter 11, “Cisco Security Platforms and APIs”: This chapter discusses Cisco’s security platforms and their associated APIs in detail, along with examples. The platforms covered are Cisco Firepower, Cisco Umbrella, Cisco Advanced Malware Protection (AMP), Cisco Identity Services Engine (ISE), and Cisco Threat Grid.
Chapter 18, “IP Services”: This chapter starts by covering several protocols
and technologies that are critical to networking: DHCP, DNS, NAT, SNMP,
and NTP. The chapter continues with an overview of Layer 2 versus Layer 3
network diagrams and ends with a look at how to troubleshoot application
connectivity issues.
1.8.a Clone 2
1.8.c Commit 2
1.8.d Push/pull 2
1.8.e Branch 2
1.8.g diff 2
4.3.c Containers 13
Cisco DevNet Certifications: This section covers various aspects of the Cisco
Certified DevNet Associate, Professional, and Specialist certifications and
how they fit into the overall Cisco career certification portfolio.
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
FOUNDATION TOPICS
Cloud services
Virtualization
Providing credibility
Building confidence
Increasing salary
[Figure: the Cisco career certification hierarchy, spanning the Entry, Associate, Professional, and Expert levels across tracks such as CCNA, CCNA Cyber Ops, CCNP, and CCIE Collaboration. Table: the legacy CCNP Routing and Switching exams 300-101 ROUTE, 300-115 SWITCH, and 300-135 TSHOOT.]
Virtualization
Infrastructure
Network assurance
Security
Note
The exams listed in Table 1-4 were available at the time of
publication. Please visit
http://www.cisco.com/go/certifications to keep up on all the
latest available certifications and associated tracks.
CCIE Collaboration
CCIE Security
Cloud developer
Automation engineer
Note
The DevNet Expert certification is a planned offering that
was not available as this book went to press.
Across the top of the main DevNet page, you can see the following menu options:
Discover
Technologies
Community
Support
Events
Discover
The Discover page shows the different offerings that DevNet
has available. This page includes the subsection Learning
Tracks; the learning tracks on this page guide you through various technologies and their associated API labs. Some
of the available labs are Programming the Cisco Digital
Network Architecture (DNA), ACI Programmability, Getting
Started with Cisco WebEx Teams APIs, and Introduction to
DevNet.
Technologies
The Technologies page allows you to pick relevant content
based on the technology you want to study and dive directly
into the associated labs and training for that technology. Figure
1-7 shows some of the networking content that is currently
available in DevNet.
Note
Available labs may differ from those shown in this chapter’s
figures. Please visit http://developer.cisco.com to see the
latest content available and to interact with the current
learning labs.
Community
Perhaps one of the most important sections of DevNet is the
Community page, where you have access to many different
people at various stages of learning. You can find DevNet
ambassadors and evangelists to help at various stages of your
learning journey. The Community page puts the latest events
and news at your fingertips. This is also the place to read
blogs, sign up for developer forums, and follow DevNet on all
major social media platforms. This is the safe zone for asking
any questions, regardless of how simple or complex they
might seem. Everyone has to start somewhere. The DevNet
Community page is the place to start for all things Cisco and
network programmability. Figure 1-8 shows some of the
options currently available on the Community page.
Support
On the DevNet Support page you can post questions and get
answers from some of the best in the industry. Technology-
focused professionals are available to answer questions from
both technical and theoretical perspectives. You can ask
questions about specific labs or overarching technologies, such
as Python or YANG models. You can also open a case with the
DevNet Support team, and your questions will be tracked and
answered in a minimal amount of time. This is a great place to
ask the Support team questions and to tap into the expertise of
the Support team engineers. Figure 1-9 shows the DevNet
Support page, where you can open a case. Being familiar with
the options available from a support perspective is key to
understanding the types of information the engineers can help
provide.
Events
The Events page, shown in Figure 1-10, provides a list of all
events that have happened in the past and will be happening in
the future. This is where you can find the upcoming DevNet
Express events as well as any conferences where DevNet will
be present or participating. Be sure to bookmark this page if
you plan on attending any live events. DevNet Express is a
one- to three-day event led by Cisco developers for both
customers and partners. Attending one of these events can help
you with peer learning and confidence as well as with honing
your development skills.
Walk
Run
Fly
When searching the use case library, you can search using the
Walk, Run, or Fly categories as well as by type of use case. In
addition, you can find use cases based on the automation
lifecycle stage or the place in the network, such as data center,
campus, or collaboration. Finally, you can simply choose the
product for which you want to find use cases, such as IOS XE,
Cisco DNA Center, or ACI (see Figure 1-12).
Linux BASH: This section covers key aspects of the Linux BASH shell and
how to use it.
Software Version Control: This section includes the use of version control
systems in software development.
Git: This section discusses the use of the Git version control system.
Conducting Code Review: This section discusses using peer review to check
the quality of software.
Linux BASH 5, 6
Git 8–10
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
1. What is Waterfall?
1. A description of how blame flows from management on failed software
projects
2. A type of SDLC
3. A serial approach to software development that relies on a fixed scope
4. All of the above
2. What is Agile?
1. A form of project management for Lean
2. An implementation of Lean for software development
3. A strategy for passing the CCNA DevNet exam
4. A key benefit of automation in infrastructure
FOUNDATION TOPICS
SOFTWARE DEVELOPMENT LIFECYCLE
Anyone can program. Once you learn a programming
language’s syntax, it’s just a matter of slapping it all together
to make your application do what you want it to do, right? The
reality is, software needs to be built using a structure to give it
sustainability, manageability, and coherency. You may have
heard the phrase “cowboy coding” to refer to an unstructured development process in which programmers simply start writing code with little or no planning.
Stage 1—Planning: Identify the current use case or problem the software is
intended to solve. Get input from stakeholders, end users, and experts to
determine what success looks like. This stage is also known as requirements
analysis.
Stage 3—Designing: In this phase, you turn the software specifications into a
design specification. This is a critical stage as stakeholders need to be in
agreement in order to build the software appropriately; if they aren’t, users
won’t be happy, and the project will not be successful.
Stage 5—Testing: Does the software work as expected? In this stage, the
programmers check for bugs and defects. The software is continually
examined and tested until it successfully meets the original software
specifications.
Stage 6—Deployment: During this stage, the software is put into production
for the end users to put it through its paces. Deployment is often initially done
in a limited way to do any final tweaking or detect any missed bugs. Once the
user has accepted the software and it is in full production, this stage morphs
into maintenance, where bug fixes and software tweaks or smaller changes are
made at the request of the business user.
Note
There are quite a few SDLC models that further refine the
generic process just described. They all use the same core
concepts but vary in terms of implementation and utility for
different projects and teams. The following are some of the
most popular SDLC models:
Waterfall
Lean
Agile
Iterative model
Spiral model
V model
Prototyping models
Luckily, you don’t need to know all of these for the 200-901
DevNet Associate DEVASC exam. The following sections
cover the ones you should know most about: Waterfall, Lean,
and Agile.
Waterfall
Lean
Elimination of waste: If something doesn’t add value to the final product, get
rid of it. There is no room for wasted work.
Just-in-time: Don’t build something until the customer is ready to buy it.
Excess inventory wastes resources.
Agile
These core tenets were the main spark of the Agile movement.
Mary Poppendieck and Tom Poppendieck wrote Lean
Software Development: An Agile Toolkit in 2003, based on the
principles of the Agile Manifesto and their many years of
experience developing software. This book is still considered
one of the best on the practical uses of Agile.
Note
Numerous web frameworks use MVC concepts across many
programming languages. Angular, Express, and Backbone
are all written in JavaScript. Django and Flask are two very
popular examples written in Python.
View: The view is what the end users see on the devices they are using to
interact with the program. It could be a web page or text from the command
line. The power of the view is that it can be tailored to any device and any
representation without changing any of the business logic of the model. The
view communicates with the controller by sending data or receiving output
from the model through the controller. The view’s primary function is to
render data.
Controller: The controller is the intermediary between what the user sees and
the backend logic that manipulates the data. The role of the controller is to
receive requests from the user via the view and pass those requests on to the
model and its underlying data store.
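To make the division of labor concrete, here is a minimal, illustrative MVC sketch in Python; the class names and the in-memory data store are invented for this example and are not from the original text:

# Model: owns the data and the business logic
class DeviceModel:
    def __init__(self):
        self._devices = []          # simple in-memory data store

    def add(self, name):
        self._devices.append(name)

    def all(self):
        return list(self._devices)

# View: renders data; knows nothing about how it is stored
class TextView:
    def render(self, devices):
        for device in devices:
            print(f'Device: {device}')

# Controller: receives requests from the user and mediates between model and view
class DeviceController:
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_device(self, name):
        self.model.add(name)

    def show_devices(self):
        self.view.render(self.model.all())

controller = DeviceController(DeviceModel(), TextView())
controller.add_device('router1')
controller.show_devices()           # prints: Device: router1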
Observer Pattern
Subject: The subject refers to the object state being observed—in other words,
the data that is to be synchronized. The subject has a registration process that
allows other components of an application or even remote systems to subscribe
to the process. Once registered, a subscriber is sent an update notification
whenever there is a change in the subject’s data so that the remote systems can
synchronize.
Observer: The observer is the component that registers with the subject to
allow the subject to be aware of the observer and how to communicate to it.
The only function of the observer is to synchronize its data with the subject
when called. The key thing to understand about the observer is that it does not
use a polling process, which can be very inefficient with a larger number of
observers registered to a subject. Updates are push only.
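The push-only behavior is easy to see in a few lines of code. The following is a minimal sketch (class and method names are illustrative, not from the original text): the subject keeps a registration list and pushes updates to every registered observer, and no observer ever polls.

class Subject:
    def __init__(self):
        self._observers = []        # registration list
        self._data = None

    def register(self, observer):
        self._observers.append(observer)

    def set_data(self, data):
        self._data = data
        # push only: notify every registered observer on each change
        for observer in self._observers:
            observer.update(self._data)

class Observer:
    def __init__(self, name):
        self.name = name
        self.data = None

    def update(self, data):
        # called by the subject; the observer only synchronizes its copy
        self.data = data
        print(f'{self.name} synchronized: {data}')

subject = Subject()
subject.register(Observer('cache-a'))
subject.register(Observer('cache-b'))
subject.set_data({'vlan': 100})     # both observers receive the push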
LINUX BASH
Knowing how to use Linux BASH is a necessary skill for
working with open-source technologies as well as many of the
tools you need to be proficient with to be successful in the
development world. Linux has taken over the development
world, and even Microsoft has jumped into the game by
providing the Windows Subsystem for Linux for Windows 10
Pro. For the DEVASC exam, you need to know how to use
BASH and be familiar with some of the key commands.
While there are many shells you can use, BASH, which stands for Bourne Again Shell, is the one this section (and the DEVASC exam) focuses on.
$ man man

man(1)                                                              man(1)

NAME

SYNOPSIS
       man [-acdfFhkKtwW] [--path] [-m system] [-p string] [-C config_file]
       [-M pathlist] [-P pager] [-B browser] [-H htmlpager] [-S section_list]
       [section] name ...

DESCRIPTION
       man formats and displays the on-line manual pages. If you specify
       section, man only looks in that section of the manual. name is
       normally the name of the manual page, which is typically the name of
       a command, function, or file. However, if name contains a slash (/)
       then man interprets it as a file specification, so that you can do
       man ./foo.5 or even man /cd/foo/bar.1.gz.
import json
import urllib.request
from pprint import pprint

def get_local_weather():
    weather_base_url = 'http://forecast.weather.gov/MapClick.php?FcstType=json&'
    places = {
        'Austin': ['30.3074624', '-98.0335911'],
        'Portland': ['45.542094', '-122.9346037'],
        'NYC': ['40.7053111', '-74.258188']
    }
    # loop reconstructed: the original listing built weather_url from the
    # base URL plus each set of coordinates before requesting the page
    for place, (lat, lon) in places.items():
        weather_url = f'{weather_base_url}lat={lat}&lon={lon}'
        page_response = urllib.request.urlopen(weather_url).read()
        pprint(json.loads(page_response))

<output cut for brevity>
Directory Navigation
cd
The cd command is used to change directories and move
around the file system. You can use it as follows:
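For example (directory names illustrative):

$ cd /home/user/projects     (change to an absolute path)
$ cd projects                (change to a directory relative to the current one)
$ cd ..                      (move up one directory)
$ cd ~                       (jump to your home directory)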
pwd
If you ever get lost while navigating around the file system,
you can use the pwd command to print out your current
working directory path. You can use it as follows:
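For example (path illustrative):

$ pwd
/home/user/projects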
ls
Once you have navigated to a directory, you probably want to know what is in it. The ls command lists the contents of the current directory. If you execute it without any parameters, it just displays whatever is in the directory; it doesn’t show any hidden files (such as configuration files). Anything that starts with a period (.) is hidden, and you need the -a flag to see it. You can use the ls command as follows:
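$ ls          (list the visible contents of the current directory)
$ ls -a       (include hidden files)
$ ls -l       (long listing with permissions, owners, sizes, and dates)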
mkdir
To create a directory, you use the mkdir command. If you are
in your home directory or in another directory where you have
the appropriate permissions, you can use this command
without sudo. You can use the mkdir command as follows:
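$ mkdir test            (create a directory named test in the current directory)
$ mkdir /tmp/test       (create a directory at a specific path)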
File Management
Working with files is easy with BASH. There are just a few
commands that you will use often, and they are described in
the following sections.
cp
The purpose of the cp command is to copy a file or folder
someplace. It does not delete the source file but instead makes
an identical duplicate. When editing configuration files or
making changes that you may want to roll back, you can use
the cp command to create a copy as a sort of backup. The
command requires two parameters: the source (the file you want to copy) and the destination (where you want the copy to go, including its name). When copying the contents of a folder, you need to
use the -r, or recursive, flag. You can use the cp command as
follows:
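$ cp file1.txt file2.txt            (copy file1.txt to a new file named file2.txt)
$ cp file1.txt backup/              (copy file1.txt into the backup directory)
$ cp -r projects projects_backup    (recursively copy a whole folder)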
mv
rm
touch
The touch command is used to create a file and/or change the
timestamps on a file’s access without opening it. This
command is often used when a developer wants to create a file
but doesn’t want to put any content in it. You can use the
touch command as follows:
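$ touch newfile.txt     (create an empty file, or update its timestamp if it already exists)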
cat
$ cat file1.txt | more     Displays the contents of file1.txt and pipes the output to more to add page breaks
Environment Variables
BASH environment variables contain information about the current shell session. One of the most important is PATH, which tells the shell where to look for executable commands:
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/opt/X11/bin

$ export PATH=$PATH:/Home/chrijack/bin
When you end your terminal session, the changes you make to environment variables are not saved. To retain the changes, you need to write the path statement to your .bashrc (or .zshrc if you use the Z shell) profile settings. Anything written there will be available anytime you launch a terminal. You can add the previous export command to the end of .bashrc with your favorite text editor, or use the following command:
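$ echo 'export PATH=$PATH:/Home/chrijack/bin' >> ~/.bashrc     (appends the same illustrative export line shown above)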
This addition becomes active only after you close your current
session or force it to reload the variable. The source command
can be used to reload the variables from the hidden
configuration file .bashrc:
$ . ~/.bashrc
There are many more tricks you can uncover with BASH. You
will get plenty of chances to use it as you study for the
DEVASC exam.
GIT
If you are working with version control software, chances are
it is Git. A staggering number of companies use Git, which is
free and open source. In 2005, Linus Torvalds (the father of
Linux) created Git as an alternative to the SCM system
BitKeeper, when the original owner of BitKeeper decided to
stop allowing free use of the system for Linux kernel
development. With no existing open-source options that would
meet his needs, Torvalds created a distributed version control
system and named it Git. Git was created to be fast and
scalable, with a distributed workflow that could support the
huge number of contributors to the Linux kernel. His creation
was turned over to Junio Hamano in 2005, and it has become
the most widely used source management system in the world.
Note
GitHub is not Git. GitHub is a cloud-based social
networking platform for programmers that allows anyone to
share and contribute to software projects (open source or
private). While GitHub uses Git as its version control
system underneath the graphical front end, it is not directly
tied to the Git open-source project (much as a Linux
distribution, such as Ubuntu or Fedora, uses the Linux
kernel but is independently developed from it).
Understanding Git
Local workspace: This is where you store source code files, binaries, images,
documentation, and whatever else you need.
Head, or local repository: This is where you store all committed items.
Untracked: When you first create a file in a directory that Git is managing, it
is given an untracked status. Git sees this file but does not perform any type of
version control operations on it. For all intents and purposes, the file is
invisible to the rest of the world. Some files, such as those containing settings
or passwords or temporary files, may be stored in the working directory, but
you may not want to include them in version control. If you want Git to start
tracking a file, you have to explicitly tell it to do so with the git add command;
once you do this, the status of the file changes to tracked.
Staged: Once a changed file is added to the index, it is staged, and Git can bundle up your changes and update the local repository; you trigger this with the git commit command. At that point, your file moves back to the tracked status, and it stays there until you make changes to the file in the future and kick off the whole process once again.
If at any point you want to see the status of a file from your
repository, you can use the extremely useful command git
status to learn the status of each file in your local directory.
You can pull files and populate your working directory for a
project that already exists by making a clone. Once you have
done this, your working directory will be an exact match of
what is stored in the repository. When you make changes to
any source code or files, you can add your changes to the
index, where they will sit in staging, waiting for you to finish
all your changes or additions. The next step is to perform a
commit and package up the changes for submission (also known as a push) to the remote repository.
Using Git
Git may not come natively with your operating system. If you
are running a Linux variation, you probably already have it.
For Mac and Windows you need to install it. You can go to the
main distribution website (https://git-scm.com) and download
builds for your operating system directly. You can install the
command-line version of Git and start using and practicing the commands covered in this section.
Cloning/Initiating Repositories
Git operates on a number of processes that enable it to do its
magic. The first of these processes involves defining a local
repository by using either git clone or git init. The git clone
command has the following syntax:
# git clone https://github.com/CiscoDevNet/pyats-coding-101.git
Cloning into 'pyats-coding-101'...
remote: Enumerating objects: 71, done.
remote: Total 71 (delta 0), reused 0 (delta 0), pack-reused 71
Unpacking objects: 100% (71/71), done.
# cd pyats-coding-101
# pyats-coding-101 git:(master) ls
COPYRIGHT            coding-102-parsers
LICENSE              coding-103-yaml
README.md            coding-201-advanced-parsers
coding-101-python
git init (directory name)
#newrepo git:(master) ls
newfile
Once the file is added, Git sees that there is something new,
but it doesn’t do anything with it at this point. If you type git
status, you can see that Git identified the new file, but you
have to issue another command to add it to index for Git to
perform version control on it. Here’s an example:
# git status
On branch master
No commits yet
Untracked files:
(use "git add <file>..." to include in what
will be committed)
Git is helpful and tells you that it sees the new file, but you
need to do something else to enable version control and let Git
know to start tracking it.
When you are finished making changes to files, you can add
them to the index. Git knows to then start tracking changes for
the files you identified. You can use the following commands
to add files to an index:
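# git add .             (adds all new or modified files in the current directory)
# git add -A            (adds all new, modified, and deleted files in the working tree)
# git add newfile       (adds a single file by name)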
The git add command adds all new or deleted files and
directories to the index. Why select an individual file instead
of everything with the . or -A option? It comes down to being
specific about what you are changing and adding to the index.
If you accidently make a change to another file and commit
everything, you might unintentionally make a change to your
code and then have to do a rollback. Being specific is always
safest. You can use the following commands to add the file
newfile to the Git index (in a process known as staging):
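# git add newfile
# git status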
On branch master
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
# git add .
# ls
newfile removeme.py
# git rm -f removeme.py
rm 'removeme.py'
# ls
newfile
Committing Files
When you commit a file, you move it from the index or
staging area to the local copy of the repository. Git doesn’t
send entire updates; it sends just changes. The commit
command is used to bundle up those changes to be
synchronized with the local repository. The command is
simple, but you can specify a lot of options and tweaks. In its
simplest form, you just need to type git commit. This
command has the following syntax:
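git commit [-a] [-m "your commit message"] [filename]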
The -a option tells Git to add any changes you make to your
files to the index. It’s a quick shortcut instead of using git add
-A, but it works only for files that have been added at some
point before in their history; new files need to be explicitly
added to Git tracking. For every commit, you will need to
enter some text about what changed. If you omit the -m
option, Git automatically launches a text editor (such as vi,
which is the default on Linux and Mac) to allow you to type in
the text for your commit message. This is an opportunity to
describe the changes you made so others know what you did.
It’s tempting to type in something silly like “update” or “new
change for a quick commit,” but don’t fall into that trap. Think
about the rest of your team. Here is an example of the commit
command in action:
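# git commit -a -m "Added newfile to the project"     (commit message illustrative)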
Note
As a good practice, use the first 50 characters of the commit
message as a title for the commit followed by a blank line
and a more detailed explanation of the commit. This title can
be used throughout Git to automate notifications such as
sending an email update on a new commit with the title as
the subject line and the detailed message as the body.
Up until this point in the chapter, you have seen how Git
operates on your local computer. Many people use Git in just
this way, as a local version control system to track documents
and files. Its real power, however, is in its distributed
architecture, which enables teams from around the globe to
come together and collaborate on projects.
When using the git init command, however, you need to make the connection to the remote repository yourself, by telling Git where it lives:

# git remote add origin https://github.com/chrijack/devnetccna.git
# git remote -v
origin  https://github.com/chrijack/devnetccna.git (fetch)
origin  https://github.com/chrijack/devnetccna.git (push)
In order for your code to be shared with the rest of your team
or with the rest of the world, you have to tell Git to sync your
local repository to the remote repository (on a shared server or
service like GitHub). The command git push, which has the
following syntax, is useful in this case:
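git push [remote] [branch]

For example:

# git push origin master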
To https://github.com/chrijack/devnetccna.git
The command git pull syncs any changes that are on the
remote repository and brings your local repository up to the
same level as the remote one. It has the following syntax:
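git pull [remote] [branch]

For example:

# git pull origin master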
#git log
commit 40aaf1af65ae7226311a01209b62ddf7f4ef88c2
(HEAD -> master, origin/master)
Author: Chris Jackson <chrijack@cisco.com>
Date: Sat Oct 19 00:00:34 2019 -0500
commit 1a9db03479a69209bf722b21d8ec50f94d727e7d
Author: Chris Jackson <chrijack@cisco.com>
Date: Fri Oct 18 23:59:55 2019 -0500
commit 8eb16e3b9122182592815fa1cc029493967c3bca
Author: Chris Jackson <chrijack@me.com>
Date: Fri Oct 18 20:03:32 2019 -0500
first commit
# git branch
* master
newfeature
Now you have a separate workspace where you can build your
feature. At this point, you will want to perform a git push to
sync your changes to the remote repository. When the work is
finished on the branch, you can merge it back into the main
code base and then delete the branch by using the command
git branch -d (branchname).
Merging Branches
In the newfeature branch, this text file has been modified with
some new feature code. Figure 2-16 shows a simple change
made to the text file.
#git add .
Now the branch is synced with the new changes, and you can
switch back to the master branch with the following command:
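# git checkout master
Switched to branch 'master'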
From the master branch, you can then issue the git merge
command and identify the branch to merge with (in this case,
the newfeature branch):
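# git merge newfeature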
Fast-forward
text1 | 1 +
1 file changed, 1 insertion(+)
Handling Conflicts
Auto-merging text1
Git shows you that “line 3” was added to text1 on the master
branch and “new feature code” was added to text1 on the
newfeature branch. Git is letting you delete one or keep both.
You can simply edit the file, remove the parts that Git added to
highlight the differences, and save the file. Then you can use
git add to index your changes and git commit to save to the
local repository, as in the following example:
#git add .
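# git commit -m "Resolved merge conflict in text1"     (commit message illustrative)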
The diff command takes two sets of inputs and outputs the
differences or changes between them. This is its syntax:
git diff: This command highlights the differences between your working directory and the index (that is, changes you have not yet staged).
git diff --cached: This command shows any changes between the index and
your last commit.
git diff HEAD: This command shows the differences between your most
recent commit and your current working directory. It is very useful for seeing
what will happen with your next commit.
index 0000000..b9997e5
--- /dev/null
+++ b/text2
@@ -0,0 +1 @@
+new bit of code
git diff identified the new file addition and shows the a/b
comparison. Since this is a new file, there is nothing to
compare it with, so you see --- /dev/null as the a comparison.
In the b comparison, you see +++ b/text2, which shows the
addition of the new file, followed by stats on what was
different. Since there was no file before, you see -0,0 and +1.
(The + and - simply denote which of the two versions you are
comparing. It is not actually a -0, which would be impossible.)
The last line shows the text that was added to the new file.
This is a very simple example with one line of code. In a big
file, you might see a significant amount of text.
+++ b/text1
@@ -1,3 +1,4 @@
line 1
line 2
+line 3
It can help you find more defects and inefficient code that unit tests and
functional tests might miss, making your software more reliable.
Review the code, not the person who wrote it. Avoid being robotic and harsh
so you don’t hurt people’s feelings and discourage them. The goal is better
code, not disgruntled employees.
Keep in mind that code review is a gift. No one is calling your baby ugly.
Check your ego at the door and listen; the feedback you receive will make you
a better coder in the long run.
Make sure the changes recommended are committed back into the code base.
You should also share findings back to the organization so that everyone can
learn from mistakes and improve their techniques.
Paragraph Waterfall 27
Paragraph Lean 28
Paragraph Agile 29
Introduction to Python
This chapter covers the following topics:
Getting Started with Python: This section covers what you need to know
when using Python on your local machine.
Data Types and Variables: This section describes the various types of data
you need to interact with when coding.
Input and Output: This section describes how to get input from a user and
print out results to the terminal.
Flow Control with Conditionals and Loops: This section discusses adding
logic to your code with conditionals and loops.
FOUNDATION TOPICS
Note
Why would a Mac have such an old version of Python?
Well, that’s a question for Apple to answer, but from a
community standpoint, the move to version 3 historically
was slow to happen because many of the Python extensions
(modules) where not updated to the newer version. If you
run across code for a 2.x version, you will find differences
in syntax and commands (also known as the Python
standard library) that will prevent that code from running
under 3.x. Python is not backward compatible without
modifications. In addition, many Python programs require
additional modules that are installed to add functionality to
Python that aren’t available in the standard library. If you
have a program that was written for a specific module
version, but you have the latest version of Python installed, the program may not run correctly.
# Create the virtual environment first
python3 -m venv myvenv

# MacOS or Linux
source myvenv/bin/activate

# Windows
C:\myvenv\Scripts\activate.bat

(myvenv)$
To install new modules for Python, you use pip, which pulls
modules down from the PyPI repository. The command to load
new modules is as follows:
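$ pip install <module name>

For example, pip install requests downloads and installs the requests module along with its dependencies.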
The pip command also offers a search function that allows you
to query the package index:
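$ pip search "pyats"     (searches PyPI package names and descriptions; note that PyPI has since disabled the server-side search API, so recent pip releases reject this command)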
ansible==2.6.3
black==19.3b0
flake8==3.7.7
genie==19.0.1
ipython==6.5.0
napalm==2.4.0
ncclient==0.6.3
netmiko==2.3.3
pyang==1.7.5
pyats==19.0
PyYAML==5.1
requests==2.21.0
urllib3==1.24.1
virlutils==0.8.4
xmltodict==0.12.0
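A listing like this is conventionally kept in a file named requirements.txt. Assuming that file name, you can generate one from your current environment and replay it on another machine:

$ pip freeze > requirements.txt          (record the installed modules and their versions)
$ pip install -r requirements.txt        (install that exact set elsewhere)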
UNDERSTANDING PYTHON SYNTAX
The word syntax describes structure in a language, and programming syntax is used in much the same way. Some programming languages are very
strict about how you code, which can make it challenging to
get something written. While Python is a looser language than
some, it does have rules that should be followed to keep your
code not only readable but functional. Keep in mind that
Python was built as a language to enhance code readability
and named after Monty Python (the British comedy troupe)
because the original architects of Python wanted to keep it fun
and uncluttered. Python is best understood through its core
philosophy (The Zen of Python):
$ python3
This code will generate a syntax error the minute you try to
run it. Python is expecting to see indentation on the line after
the :. If you insert four spaces before the print() statement, the
code works:
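The original listing is not reproduced in this excerpt; a minimal equivalent (variable and message illustrative) looks like this:

>>> name = 'DevNet'
>>> if name == 'DevNet':
...     print('Hello DevNet!')
...
Hello DevNet!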
Python allows you to use spaces or tabs for indentation, but you should not mix them: Python 2 tolerates a mix of tabs and spaces, while Python 3 returns a syntax error, and even where mixing works, you can end up with really weird issues that you need to troubleshoot. The
standard for Python from the PEP 8 style guide is to use four
spaces of indentation before each block of code. Why four
spaces? Won’t one space work? Yes, it will, but your code
blocks will be hard to align, and you will end up having to do
extra work.
''' This is
line 2
and line 3'''
Variables
Assigning a variable in Python is very straightforward. Python auto types a variable, and you can reassign that same variable to another value of a different type in the future. (Try doing that in C!) You just need to remember the rules for variable names: a name must start with a letter or an underscore, can contain only letters, numbers, and underscores, is case sensitive, and must not be a reserved Python word.
To assign a variable you just set the variable name equal to the
value you want, as shown in these examples:
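>>> a = 1            # integer
>>> b = 'DevNet'     # string
>>> c = 1.7          # float (values illustrative)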
Data Types
Everything in Python is an object, and depending on the type
of object, there are certain characteristics that you must be
aware of when trying to determine the correct action you can
perform on them. In Python, whenever you create an object,
you are assigning that object an ID that Python uses to recall
what is being stored. This mechanism is used to point to the
memory location of the object in question and allows you to
perform actions on it, such as printing its value. When the
object is created, it is assigned a type that does not change.
This type is tied to the object and determines whether it is a
string, an integer, or another class.
Within these types, you are allowed to either change the object
(mutable) or are not allowed to change the object (immutable)
after it has been created. This doesn’t mean that variables are
not able to be changed; it means that most of the basic data
Table 3-2 lists the most commonly used Python data types.
The rest of this section covers them in more detail.
>>> 5 * 6 - 1
29
>>> 5 * (6 - 1)
25
>>> 10 / 7
1.4285714285714286
If you just want to see whole numbers, you can use integer
division and lop off the remainder, as shown here:
>>> 10 // 7
1
>>> 10 % 7
3
You have the option to use other base systems instead of just
the default base 10. You have three choices in addition to base
10: binary (base 2), octal (base 8), and hex (base 16). You need
to use prefixes before integers in order for Python to
understand that you are using a different base:
0b or 0B for binary
0o or 0O for octal
0x or 0X for hex
>>> 0xbadbeef
195935983
You can also convert back and forth between bases by using the hex() and bin() functions on the value you want to convert, as in these examples:

>>> hex(195935983)
'0xbadbeef'
>>> bin(195935983)
'0b1011101011011011111011101111'
Booleans
A Boolean has only two possible values, True and False. You
use comparison operators to evaluate between two Boolean
objects in Python. This data type is the foundation for
constructing conditional steps and decisions within programs.
Table 3-4 shows the various Boolean comparison operators
and some examples of how to use them.
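Table 3-4 is not reproduced in this excerpt, but the comparison operators are easy to try in the interpreter (values illustrative):

>>> 5 > 2
True
>>> 5 == 6
False
>>> 5 != 6 and 2 < 3
True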
Strings
>>> '10' + 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "int") to str
This error tells you that you have to convert the string to an
integer or another data type to be able to use it as a number (in
a math formula, for example). The int() function can convert a
string value into an integer for you, as shown in this example:
>>> int('10') + 1
11
>>> a='DevNet'
>>> a[0]
'D'
You can also specify ranges to print. The colon operator gives
you control over whole sections of a string. The first number is
the beginning of the slice, and the second number determines
the end. The second number may be confusing at first because
it is intended to identify “up to but not including” the last
character. Consider this example:
>>> a[0:3]
'Dev'
>>> a[0:6]
'DevNet'
>>> a[:2]
'De'
If you omit the second value, Python prints to the end of the
string, as in this example:
>>> a[2:]
'vNet'
>>> a[-2:]
'et'
>>> a[:-2]
'DevN'
Lists
emptylist = []
emptylist2 = list()
>>> kids = ['Caleb', 'Sydney', 'Savannah']
>>> print(kids[1])
Sydney
Unlike strings, lists are mutable objects, which means you can
change parts of the list at will. With a string, you can’t change
parts of the string without creating a new string. This is not the
case with lists, where you have a number of ways to make
changes. If you have a misspelling, for example, you can
change just one element of the list, leaving the rest untouched,
as in this example:
>>> kids
['Caleb', 'Sidney', 'Savannah']
>>> kids[1]="Sydney"
>>> kids
['Caleb', 'Sydney', 'Savannah']
>>>
>>> a = [1, 2, 4]
>>> b = [4, 5, 6]
Remember all of the slicing you saw with strings? The same
principles apply here, but instead of having a single string with
each letter being in a bucket, the elements in the list are the
items in the bucket. Don’t forget the rule about the second
number after the colon, which means “up to but not
including.” Here is an example:
>>> c = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> c[1:4]
[2, 3, 4]
>>> c[:-4]
[1, 2, 3, 4, 5, 6]
>>> c[:]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list.extend(alist)     Adds the elements of a list to the end of the current list
Tuples
>>> person[0]
2012
>>> person[0] = 15
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> c
18
Dictionaries
Keys: A dictionary’s keys are limited to only using immutable values (int,
float, bool, str, tuple, and so on). No, you can’t use a list as a key, but you can
use a tuple, which means you could use a tuple as a key (immutable) but you
can’t use a list as a key (mutable).
To create a dictionary, you use braces, with each key and value separated by a colon and multiple items separated by commas. Here’s an example (the address value is illustrative):
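>>> cabinet = {
...     'scores': (98, 76, 95),
...     'company': 'Cisco',
...     'address': '123 Anywhere Dr'
... }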
>>> type(cabinet)
<class 'dict'>
>>> cabinet["scores"]
(98, 76, 95)
>>> cabinet["company"]
'Cisco'
>>> cabinet["address"]
'123 Anywhere Dr'
Sets
A set in Python consists of an unordered grouping of data and
is defined by using the curly braces of a dictionary, without the
key:value pairs. Sets are mutable, and you can add and remove
items from the set. You can create a special case of sets called
a frozen set that makes the set immutable. A frozen set is often
used as the source of keys in a dictionary (which have to be
immutable); it basically creates a template for the dictionary
structure. If you are familiar with how sets work in
mathematics, the various operations you can perform on
mutable sets in Python will make logical sense. To define a
set, do the following:
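>>> set1 = {1, 2, 4, 5, 6, 8, 10}    # contents chosen to match the results shown below
>>> set2 = {1, 5, 9}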
To check that these are indeed sets, use the type() function:

>>> type(set1)
<class 'set'>
>>> set1 & set2    # for example, the intersection of the two sets above
{1, 5}
>>> inpt = float(input('What is the temperature in F: '))
What is the temperature in F: 83.5
>>> inpt
83.5
>>> print('Hello World')
Hello World
>>> print('Hello\nWorld')
Hello
World
\\: Backslash
\b: Backspace
\t: Tab
>>> print('Numbers in set 1: ', set1, sep='')
Numbers in set 1: {1, 2, 4, 5, 6, 8, 10}
for: A for loop is a counting loop that can iterate through data a specific
number of times.
while: The while loop can iterate forever when certain conditions are met.
If Statements
An if statement starts with an if and then sets up a comparison
to determine the truth of the statement it is evaluating and
ending with a : to tell Python to expect the clause (the action if
the condition is true) block of code next. As mentioned earlier
in this chapter, whitespace indenting matters very much in
Python. The clause of an if statement must be indented (four
spaces is the standard) from the beginning of the if statement.
The following example looks for a condition where the variable n is equal to 20 and prints a message to the console indicating that the number is indeed 20:
>>> n = 20
>>> if n == 20:
...     print('The number is 20')
...
The number is 20
The Python interpreter uses three dots to let you continue the
clause for the if statement. Notice that there is space between
the start of the dots and the print() statement. Without these
four spaces, Python would spit back a syntax error like this:
>>> if n == 20:
... print('oops')
File "<stdin>", line 2
print('oops')
^
IndentationError: expected an indented block
>>> n = 3
>>> if n == 17:
...     print('Number is 17')
... elif n < 10:
...     print('Number is less than 10')
...
Number is less than 10
For Loops
The for statement allows you to create a loop that continues to
iterate through the code a specific number of times. It is also
referred to as a counting loop and can work through a
sequence of items, such as a list or other data objects. The for
loop is heavily used to parse through data and is likely to be
your go-to tool for working with data sets. A for loop starts
with the for statement, followed by a variable name (which is
a placeholder used to hold each sequence of data), the in
keyword, some data set to iterate through, and then finally a
closing colon, as shown in this example:
>>> dataset = (1, 2, 3, 4, 5)
>>> for variable in dataset:
...     print(variable)
...
1
2
3
4
5
The for loop continues through each item in the data set, and
in this example, it prints each item. You can also use the
range() function to iterate a specific number of times. The
range() function can take arguments that let you choose what
number it starts with or stops on and how it steps through each
one. Here is an example:
>>> for x in range(2):
...     print(x)
...
0
1
>>> for x in range(1, 11):
...     print(x)
...
1
2
3
4
5
6
7
8
9
10
While Loops
Whereas the for loop counts through data, the while loop is a
conditional loop, and the evaluation of the condition (as in if
statements) being true is what determines how many times the
loop executes. This difference is huge in that it means you can
specify a loop that could conceivably go on forever, as long as
the loop condition is still true. You can use else with a while
loop. An else statement after a while loop executes when the
condition for the while loop to continue is no longer met.
Example 3-3 shows a count and an else statement.
>>> count = 1
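Most of Example 3-3 has been lost in this excerpt; a minimal sketch of a while loop with an else clause, consistent with the count variable shown above:

count = 1
while count < 5:
    print('Count is', count)
    count += 1
else:
    print('The count finished at', count)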
while True:
    string = input('Enter some text to print.\nType "done" to quit> ')
    if string == 'done':
        break
    print(string)
print('Done!')
Paragraph Strings 70
ADDITIONAL RESOURCES
Python Syntax:
https://www.w3schools.com/python/python_syntax.asp
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
7. What is a method?
1. A variable applied to a class
2. Syntax notation
3. A function within a class or an object
4. Something that is not used in a class
FOUNDATION TOPICS
Must not be a reserved Python word, a built-in function (for example, print(),
input(), type()), or a name that has already been used as a function or variable
>>> def devnet():
...     '''prints simple function'''
...     print('Simple function')
...
>>> devnet()
Simple function
This function prints out the string “Simple function” any time
you call it with devnet(). Notice the indented portion that
begins on the next line after the colon. Python expects this
indented portion to contain all the code that makes up the
function. Keep in mind that whitespace matters in Python. The
three single quotation marks that appear on the first line of the
indented text of the function are called a docstring and can be
used to describe what the function does.
>>> help(devnet)
Help on function devnet in module __main__:

devnet()
    prints simple function
50
return result
-5
-5
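The function that produced these fragments is truncated above; a sketch of the kind of example they suggest (function and variable names hypothetical):

def subtract(i, j):
    result = i - j
    return result

print(subtract(10, 15))    # prints -5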
Hello Caleb !
Hello Sydney !
Hello Savannah !
kwarg3='Savannah')
Hello Caleb !
Hello Sydney !
Hello Savannah !
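The function definitions that produced this output are truncated above; a sketch that reproduces it (the function name greeting is hypothetical):

def greeting(*args, **kwargs):
    # positional arguments arrive as a tuple
    for name in args:
        print('Hello', name, '!')
    # keyword arguments arrive as a dict; only the values are printed here
    for name in kwargs.values():
        print('Hello', name, '!')

greeting('Caleb', 'Sydney', 'Savannah')
greeting(kwarg1='Caleb', kwarg2='Sydney', kwarg3='Savannah')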
You can also supply a default value argument in case you have
an empty value to send to a function. By defining a function
with an assigned key value, you can prevent an error. If the
value in the function definition is not supplied, Python uses the
default, and if it is supplied, Python uses what is supplied
when the function is called and then ignores the default value.
Consider this example:
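A minimal sketch (the function name and default value are illustrative):

def greeting(name='DevNet'):
    # name falls back to the default when no argument is supplied
    print('Hello', name, '!')

greeting()           # prints: Hello DevNet !
greeting('Caleb')    # prints: Hello Caleb !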
OBJECT-ORIENTED
PROGRAMMING AND PYTHON
Python was developed as a modern object-oriented
programming (OOP) language. Object-oriented programming
is a computer programming paradigm that makes it possible to
describe real-world things and their relationships to each other.
If you wanted to describe a router in the physical world, for
example, you would list all its properties, such as ports,
software versions, names, and IP addresses. In addition, you
might list different capabilities or functions of the router that
you would want to interact with. OOP was intended to model
these types of relationships programmatically, allowing you to
create an object that you can use anywhere in your code by
just assigning it to a variable in order to instantiate it.
PYTHON CLASSES
In Python, you use classes to describe objects. Think of a class
as a tool you use to create your own data structures that
contain information about something; you can then use
functions (methods) to perform operations on the data you
describe. A class models how something should be defined
and represents an idea or a blueprint for creating objects in
Python.
Creating a Class
pass
class Router:
'''Router Class'''
def __init__(self, model, swversion, ip_add):
'''initialize values'''
self.model = model
self.swversion = swversion
self.ip_add = ip_add
>>> rtr1 = Router('iosV', '15.6.7', '10.10.10.1')
>>> rtr1.model
'iosV'
>>> rtr1.desc = 'virtual router'
>>> rtr1.desc
'virtual router'
>>> rtr2 = Router('isr4221', '16.9.5', '10.10.10.5')
>>> rtr2.model
'isr4221'
Methods
Attributes describe an object, and methods allow you to
interact with an object. Methods are functions you define as
part of a class. In the previous section, you created an object
and applied some attributes to it. Example 4-1 shows how you
can work with an object by using methods. A method that
allows you to see the details hidden within an object without
typing a bunch of commands over and over would be a useful
method to add to a class. Building on the previous example,
Example 4-1 adds a new function called getdesc() to format
and print the key attributes of your router. Notice that you pass
self to this function only, as self can access the attributes
applied during initialization.
    def getdesc(self):
        '''return a formatted description of the router'''
        desc = (f'Router Model :{self.model}\n'
                f'Software Version :{self.swversion}\n'
                f'Router Management Address:{self.ip_add}')
        return desc
Rtr1
Rtr2
Inheritance
class Router:
    '''Router Class'''

    def __init__(self, model, swversion, ip_add):
        '''initialize values'''
        self.model = model
        self.swversion = swversion
        self.ip_add = ip_add

    def getdesc(self):
        '''return a formatted description of the router'''
        desc = (f'Router Model :{self.model}\n'
                f'Software Version :{self.swversion}\n'
                f'Router Management Address:{self.ip_add}')
        return desc

class Switch(Router):
    def getdesc(self):
        '''return a formatted description of the switch'''
        desc = (f'Switch Model :{self.model}\n'
                f'Software Version :{self.swversion}\n'
                f'Switch Management Address:{self.ip_add}')
        return desc
You can add another variable named sw1 and instantiate the
Switch class just as you did the Router class, by passing in
attributes. If you create another print statement using the
newly created sw1 object, you see the output shown in
Example 4-3.
Rtr1
Router Model :iosV
Software Version :15.6.7
Router Management Address:10.10.10.1
Rtr2
Router Model :isr4221
Software Version :16.9.5
Router Management Address:10.10.10.5
Sw1
Switch Model :Cat9300
Software Version :16.9.5
Switch Management Address:10.10.10.8
Code reusability: Modules allow for easy reusability of your code, which
saves you time and makes it possible to share useful code.
Collaboration: You often need to work with others as you build functional code, and modules make it possible to divide up the work and share what you create.
There are a few different ways you can use modules in Python.
The first and easiest way is to use one of the many modules
that are included in the Python standard library or install one
of thousands of third-party modules by using pip. Much of the
functionality you might need or think of has probably already
been written, and using modules that are already available can
save you a lot of time. Another way to use modules is to build
them in the Python language by simply writing some code in
your editor, giving the file a name, and appending a .py
extension. Using your own custom modules does add a bit of
processing overhead to your application, as Python is an
interpreted language and has to convert your text into
machine-readable instructions on the fly. Finally, you can
program a module in the C language, compile it, and then add
its capabilities to your Python program. Compared to writing
your own modules in Python, this method results in faster
runtime for your code, but it is a lot more work. Many of the
third-party modules and those included as part of the standard
library in Python are built this way.
Importing a Module
All modules are accessed the same way in Python: by using
the import command. Within a program—by convention at
the very beginning of the code—you type import followed by
the module name you want to use. The following example uses
the math module from the standard library:
>>> import math
>>> dir(math)
After you import a module, you can use the dir() function to
get a list of all the methods available as part of the module.
The ones in the beginning with the __ are internal to Python
and are not generally useful in your programs. All the others,
however, are functions that are now available for your
program to access. As shown in Example 4-4, you can use the
help() function to get more details and read the documentation
on the math module.
>>> help(math)
Help on module math:
NAME
math
DESCRIPTION
This module provides access to the
mathematical functions
defined by the C standard.
FUNCTIONS
acos(x, /)
Return the arc cosine (measured in
radians) of x.
acosh(x, /)
Return the inverse hyperbolic cosine of
x.
asin(x, /)
Return the arc sine (measured in
radians) of x.
-Snip for brevity-
sqrt(x, /)
        Return the square root of x.
If you want to get a square root of a number, you can use the
sqrt() method by calling math.sqrt and passing a value to it,
as shown here:
>>> math.sqrt(15)
3.872983346207417
You have to type a module’s name each time you want to use
one of its capabilities. This isn’t too painful if you’re using a
module with a short name, such as math, but if you use a
module with a longer name, such as the calendar module, you
might wish you could shorten the module name. Python lets
you do this by adding as and a short version of the module
name to the end of the import command. For example, you
can use this command to shorten the name of the calendar
module to cal.
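>>> import calendar as cal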
Now you can use cal as an alias for calendar in your code, as
shown in this example:
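>>> print(cal.month(2020, 2))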
   February 2020
Mo Tu We Th Fr Sa Su
                1  2
 3  4  5  6  7  8  9
10 11 12 13 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29
>>> from math import sqrt, floor    # floor is just a second illustrative import
>>> sqrt(15)
3.872983346207417
As you can see here, you can import more than one method by
separating the methods you want with commas.
Notice that you no longer have to use math.sqrt and can just
call sqrt() as a function, since you imported only the module
functions you needed. Less typing is always a nice side
benefit.
>>> import sys
>>> print(sys.path)
['', '/Users/chrijack/Documents', '/Library/Frameworks/Python.framework/Versions/3.8/lib/python38.zip', '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8', '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages']
If you remove the class from the code shown in Example 4-2
and store it in a separate file named device.py, you can import
the classes from your new module and end up with the
following program, which is a lot more readable while still
operating exactly the same:
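The program itself is not reproduced in this excerpt; a sketch of what it could look like, assuming the classes from Example 4-2 now live in device.py and the same attribute values as before:

from device import Router, Switch

rtr1 = Router('iosV', '15.6.7', '10.10.10.1')
rtr2 = Router('isr4221', '16.9.5', '10.10.10.5')
sw1 = Switch('Cat9300', '16.9.5', '10.10.10.8')

print('Rtr1\n', rtr1.getdesc(), '\n', sep='')
print('Rtr2\n', rtr2.getdesc(), '\n', sep='')
print('Sw1\n', sw1.getdesc(), sep='')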
When you execute this program, you get the output shown in
Example 4-5. If you compare these results with the results
shown in Example 4-3, you see that they are exactly the same.
Therefore, the device module is just Python code that is stored
in another file but used in your program.
Rtr1
Router Model :iosV
Software Version :15.6.7
Router Management Address:10.10.10.1
Rtr2
Router Model :isr4221
Software Version :16.9.5
Router Management Address:10.10.10.5
Sw1
Switch Model :Cat9300
Software Version :16.9.5
Switch Management Address:10.10.10.8
pprint: The pretty print module is a more intelligent print function that
makes it much easier to display text and data by, for example, aligning
data for better readability. Use the following command to import this
module:

import pprint
sys: This module allows you to interact with the Python interpreter and
manipulate and view values. Use the following command to import this
module:
import sys
os: This module gives you access to the underlying operating system
environment and file system. It allows you to open files and interact with
OS variables. Use the following command to import this module:
import os
datetime: This module allows you to create, format, and work with
calendar dates and times. It also enables timestamps and other useful
date and time constructs. Use the following command to import this
module:

import datetime
time: This module allows you to add time-based delays and clock
capabilities to your Python apps. Use the following command to import
this module:
import time
xmltodict: This module translates XML-formatted data into native
Python dictionaries (and back again). Use the following command to
import this module:

import xmltodict
csv: This module reads and writes comma-separated values (CSV) files,
a common export format for spreadsheets and databases. Use the
following command to import this module:

import csv
json: This module encodes and decodes JSON-formatted data. Use the
following command to import this module:

import json
PyYAML: This module converts YAML files to Python objects that can
be converted to Python dictionaries or lists. Use the following command
to install this module:

pip install PyYAML

Once it is installed, you import it under the name yaml:

import yaml
pyang: This isn’t a typical module you import into a Python program. It’s
a utility written in Python that you can use to verify your YANG models,
create YANG code, and transform YANG models into other data
structures, such as XSD (XML Schema Definition). Use the following
command to install this utility:

pip install pyang
requests: This is a full library for interacting with HTTP services;
it is used extensively to interact with REST APIs. Use the following
command to install this module:

pip install requests

Once it is installed, you import it as follows:

import requests
pysnmp: This module is a Python implementation of the SNMP protocol
that you can use to poll and configure SNMP-enabled devices. Use the
following command to import this module:

import pysnmp
Automation tools:
napalm: NAPALM (Network Automation and Programmability Abstraction
Layer with Multivendor support) provides a uniform set of functions
for interacting with network devices from multiple vendors. Use the
following command to import this module:

import napalm
Testing tools:
unittest: This standard library module lets you construct and run
automated tests of your Python code. Use the following command to
import this module:

import unittest
pyats: This module was a gift from Cisco to the development community.
Originally named Genie, it was an internal testing framework used by
Cisco developers to validate their code for Cisco products. pyats is an
incredible framework for constructing automated testing for infrastructure
as code. Use the following command to install this module:

pip install pyats
Parsing Data: This section discusses how to parse data into native Python
objects.
Unit Testing: This section discusses how to use the internal Python module
unittest to automate Python code testing.
FOUNDATION TOPICS
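To read a file from within your program, you first open it with the
built-in open() function; a minimal sketch (the filename is an
assumption):

readdata = open("textfile.txt", "r")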
With the previous code, you now have a file handling object
named readdata, and you can use its methods to interact with the
file. To print the contents of the file, you can use the
following:
print(readdata.read())
readdata.close()
Keeping track of the state of the file lock and whether you
opened and closed it can be a bit of a chore. Python provides
another way you can use to more easily work with files as well
as other Python objects. The with statement (also called a
context manager in Python) uses the open() function but
doesn’t require direct assignment to a variable. It also has
better exception handling and automatically closes the file for
you when you have finished reading in or writing to the file.
Here’s an example:
This is much simpler code, and you can use all of the same
methods to interact with the files as before. To write to a file,
you can use the same structure, but in this case, because you
want to append some data to the file, you need to change how
you open the file to allow for writing. In this example, you can
use "a+" to allow reading and appending to the end of the file.
Here is what the code would look like:
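# a sketch; the filename and the appended text are assumptions
with open("textfile.txt", "a+") as fh:
    fh.write("\nThis is a new line added to the end of the file")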
Notice the newline in front of the text you are appending to the
file. It appears here so that it isn’t just tacked on at the very
end of the text. Now you can read the file and see what you
added:
with open("textfile.txt", "r") as data:
    print(data.read())
"router1","192.168.10.1","Nashville"
"router2","192.168.20.1","Tampa"
"router3","192.168.30.1","San Jose"
>>> sampledata
[['router1', '192.168.10.1', 'Nashville'], ['router2', '192.168.20.1',
'Tampa'], ['router3', '192.168.30.1', 'San Jose']]
In this example, you now have a list of lists that includes each
row of data. If you wanted to manipulate this data, you could
because it’s now in a native format for Python. Using list
notation, you can extract individual pieces of information:
>>> sampledata[0]
['router1', '192.168.10.1', 'Nashville']
>>> sampledata[0][1]
'192.168.10.1'
import csv

with open("routerlist.csv") as data:    # filename is an assumption
    csv_list = csv.reader(data)
    for row in csv_list:
        device = row[0]
        location = row[2]
        ip = row[1]
        print(f"{device} is in {location.rstrip()} and has IP {ip}.")
import csv
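# a sketch of the rest of Example 5-1 (prompts, variable names, and
# the filename are assumptions): collect details for a new router from
# the user and append them to the CSV file
with open("routerlist.csv", "a", newline="") as data:
    writer = csv.writer(data, quoting=csv.QUOTE_ALL)
    device = input("Router name: ")
    ip = input("Management IP: ")
    location = input("Location: ")
    writer.writerow([device, ip, location])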
If you run the code shown in Example 5-1 and input details for
router 4, when you display the router list, you now have the new
router included as well.
{
"interface": {
"name": "GigabitEthernet1",
"description": "Router Uplink",
"enabled": true,
In Example 5-2, you can see the structure that JSON provides.
interface is the main data object, and you can see that its value
is multiple key/value pairs. This nesting capability allows you
to structure very sophisticated data models. Notice how similar
to a Python dictionary the data looks. You can easily convert
JSON to lists (for a JSON array) and dictionaries (for JSON
objects) with the built-in JSON module. There are four
functions that you work with to perform the conversion of
JSON data into Python objects and back.
load(): This allows you to import native JSON and convert it to a Python
dictionary from a file.
loads(): This will import JSON data from a string for parsing and
manipulating within your program.
dump(): This is used to write JSON data from Python objects to a file.
dumps(): This allows you to take JSON dictionary data and convert it into a
serialized string for parsing and manipulating within Python.
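As a minimal sketch, assuming the JSON from Example 5-2 has been saved
to a file (the filename is an assumption), you can read it into a
string and then convert it to a dictionary:

>>> import json
>>> with open("json_sample.json") as data:
...     json_data = data.read()
...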
>>> json_dict = json.loads(json_data)
>>> type(json_dict)
<class 'dict'>
>>> print(json_dict)
>>> json_dict["interface"]["description"] =
"Backup Link"
>>> print(json_dict)
In order to save the new json object back to a file, you have to
use the dump() function (without the s) to convert the Python
dictionary back into a JSON file object. To make it easier to
read, you can use the indent keyword:
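# a sketch, writing the modified dictionary back to the same assumed file
with open("json_sample.json", "w") as fh:
    json.dump(json_dict, fh, indent=4)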
Now if you load the file again and print, you can see the stored
changes, as shown in Example 5-3.
Example 5-3 Loading the JSON File and Printing the Output
to the Screen
<device>
<Hostname>Rtr01</Hostname>
<IPv4>192.168.1.5</IPv4>
<IPv6> </IPv6>
</device>
To work with this, you can use the native XML library, but it
has a bit of a learning curve and can be a little hard to use if
you just want to convert XML into something you can work with in
Python. The xmltodict module makes the conversion into a dictionary
straightforward:

import xmltodict

with open("xml_sample.xml") as data:    # filename is an assumption
    xml_dict = xmltodict.parse(data.read())
>>> print(xml_dict)
OrderedDict([('interface',
OrderedDict([('@xmlns', 'ietf-interfaces'),
('name',
'GigabitEthernet2'), ('description', 'Wide Area
Network'), ('enabled', 'true'),
('ipv4', OrderedDict([('address',
OrderedDict([('ip', '192.168.0.2'), ('netmask',
'255.255.255.0')]))]))]))])
Now that you have the XML in a Python dictionary, you can
modify an element; here the interface's IP address is changed, and the
change shows up when the dictionary is unparsed back into XML:

>>> xml_dict['interface']['ipv4']['address']['ip'] = '192.168.55.3'
>>> print(xmltodict.unparse(xml_dict,
pretty=True))
<?xml version="1.0" encoding="utf-8"?>
<interface xmlns="ietf-interfaces">
<name>GigabitEthernet2</name>
<description>Wide Area
Network</description>
<enabled>true</enabled>
<ipv4>
<address>
<ip>192.168.55.3</ip>
<netmask>255.255.255.0</netmask>
</address>
</ipv4>
</interface>
To write these changes back to your original file, you can use
the following code:
with open("xml_sample.xml", "w") as data:    # filename is an assumption
    data.write(xmltodict.unparse(xml_dict, pretty=True))
---
interface:
name: GigabitEthernet2
description: Wide Area Network
enabled: true
ipv4:
address:
- ip: 172.16.0.2
netmask: 255.255.255.0
Notice that a YAML object has minimal syntax, all related data sits at
the same indentation level, and data is represented as key/value
pairs. Lists are written with a leading dash, as in this example with
multiple addresses:
---
addresses:
- ip: 172.16.0.2
netmask: 255.255.255.0
- ip: 172.16.0.3
netmask: 255.255.255.0
- ip: 172.16.0.4
netmask: 255.255.255.0
import yaml

with open("yaml_sample.yaml") as data:    # filename is an assumption
    yaml_sample = data.read()

yaml_dict = yaml.load(yaml_sample, Loader=yaml.FullLoader)
>>> type(yaml_dict)
<class 'dict'>
>>> yaml_dict
>>> print(yaml.dump(yaml_dict,
default_flow_style=False))
interface:
description: Wide Area Network
enabled: true
ipv4:
address:
- ip: 192.168.0.2
netmask: 255.255.255.0
name: GigabitEthernet1
You have seen quite a bit of file access in this chapter. What
happens if you ask the user for the filename instead of hard-
coding it? If you did this, you would run the risk of a typo
halting your program. In order to add some error handling to
your code, you can use the try statement. Example 5-8 shows
an example of how this works.
x = 0
while True:
    try:
        filename = input("Which file would you like to open? :")
        with open(filename, "r") as fh:
            file_data = fh.read()
    except FileNotFoundError:
        print(f"Sorry, {filename} doesn't exist! Please try again.")
    else:
        print(file_data)
        x = 0
        break
    finally:
        x += 1
        if x == 3:
            print('Wrong filename 3 times.\nCheck name and Rerun.')
            break
Here is what the program output would look like with a valid
test.txt file in the script directory:
Here is what the output would look like with three wrong
choices:
TEST-DRIVEN DEVELOPMENT
Test-driven development (TDD) is an interesting concept that at first
glance can seem backward: you write a test for a capability before you
write the code itself. TDD follows five steps:
Step 1. Write a test: Write a test that tests for the new class
or function that you want to add to your code. Think
about the class name and structure you will need in
order to call the new capability that doesn’t exist yet
—and nothing more.
Step 2. Test fails: Of course, the test fails because you
haven’t written the part that works yet. The idea here
is to think about the class or function you want and
test for its intended output. This initial test failure
shows you exactly where you should focus your
code writing to get it to pass. This is like starting
with your end state in mind, which is the most
effective way to accomplish a goal.
Step 3. Write some code: Write only the code needed to
make the new function or class successfully pass.
This is about efficiency and focus.
Step 4. Test passes: The test now passes, and the code
works.
Step 5. Refactor: Clean up the code as necessary, removing
any test stubs or hard-coded variables used in
testing. Refine the code, if needed, for speed.
TDD may seem like a waste of time initially. Why write tests for
stuff you know isn't going to pass? Isn't all of this testing just
wasted effort? The benefit of this style of development is that
it starts with the end goal in mind, by defining success right
away. The test you create is laser focused on the application's
purpose and a clear outcome. Many programmers add too much to their
code in anticipation of needs that never materialize; TDD keeps the
code lean and focused.
UNIT TESTING
Testing your software is not optional. Every script and
application that you create has to go through testing of some
sort. Maybe it’s just testing your syntax in the interactive
interpreter or using an IDE and trying your code as you write
it. While this is software testing, it’s not structured and often is
not repeatable. Did you test all options? Did you validate your
expectations? What happens if you send unexpected input to a
function? These are some of the reasons using a structured and
automated testing methodology is crucial to creating resilient
software.
There are other types of testing that you may hear about, such
as integration testing and functional testing. The difference is one
of scope: a unit test exercises the smallest testable piece of code (a
single function or class) in isolation, while integration and
functional tests exercise progressively larger parts of an application
working together. The focus here is unit testing, using a simple
function that computes the area of a circle as the code under test
(saved in a file named areacircle.py):
from math import pi

def area_of_circle(r):
    return pi*(r**2)
import unittest
from areacircle import area_of_circle
from math import pi
Next, you need to create a class for your test. You can name it
whatever you want, but you need to inherit unittest.TestCase
from the unittest module. This is what enables the test function
methods to be assigned to your test class. Next, you can define
your first test function. In this case, you can test various inputs
to validate that the math in your function under test is working
as it should. You will notice a new method called
assertAlmostEqual(), which takes the function you are
testing, passes a value to it, and checks the returned value
against an expected value. You can add a number of tests to
this function. This is what the test now looks like with the
additional code:
class Test_Area_of_Circle_input(unittest.TestCase):
    def test_area(self):
        # Test radius >= 0
        self.assertAlmostEqual(area_of_circle(1), pi)
        self.assertAlmostEqual(area_of_circle(0), 0)
        self.assertAlmostEqual(area_of_circle(3.5), pi * 3.5**2)
You can go to the directory where these two scripts reside and
enter python -m unittest test_areacircle.py to run the test. If
you don’t want to type all that, you can add the following to
the bottom of the test_areacircle.py script to allow the unittest
module to be launched when you run the test script:
if __name__ == '__main__':
    unittest.main()
All this does is check to see whether the script is being run directly
(in which case Python sets the special __name__ variable to
'__main__') and, if so, call the unittest.main() function. After
executing the function, you should see the following results:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
The dot at the top shows that 1 test ran (even though you had
multiple checks in the same function) to determine whether the
function returns the expected results. Next, you can add a second test
function that checks how the code handles an invalid value, such as a
negative radius:
    def test_values(self):
        # Test that bad values are caught
        self.assertRaises(ValueError, area_of_circle, -1)
Example 5-9 shows the output of the test with this additional
check.
.F
======================================================================
FAIL: test_values (__main__.Test_Area_of_Circle_input)
----------------------------------------------------------------------
-Snip for brevity-
----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)
The first check is still good, so you see one dot at the top, but
next to it is a big F for fail. You get a message saying that the
test_values function is where it failed, and you see that your
original function did not catch this error. This means that the
code is giving bad results. A radius of -1 is not possible, but
the function gives you the following output:
>>> area_of_circle(-1)
3.141592653589793
To catch this condition, you can add a check to the function that
raises a ValueError for a negative radius:

def area_of_circle(r):
    if r < 0:
        raise ValueError('Negative radius value error')
    return pi*(r**2)
Now when you try the test from the interpreter, you see an
error raised:
>>> area_of_circle(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/chrijack/Documents/ccnadevnet/areacircle.py", line 5,
in area_of_circle
    raise ValueError('Negative radius value error')
ValueError: Negative radius value error
If you rerun the unit test, you see that it now passes the new
check because an error is raised:
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK
Application Programming
Interfaces (APIs)
This chapter covers the following topics:
Application Programming Interfaces (APIs): This section describes what
APIs are and what they are used for.
RESTful API Authentication: This section covers various aspects of the API
authentication methods and the importance of API security.
Simple Object Access Protocol (SOAP): This section examines SOAP and
common examples of when and where this protocol is used.
FOUNDATION TOPICS
APPLICATION PROGRAMMING
INTERFACES (APIS)
For communicating with and configuring networks, software
developers commonly use application programming interfaces
(APIs). APIs are mechanisms used to communicate with
applications and other software. They are also used to
communicate with various components of a network through software.
Northbound APIs
Northbound APIs are often used for communication from a
network controller to its management software. For example,
Cisco DNA Center has a software graphical user interface
(GUI) that is used to manage its own network controller.
Typically, when a network operator logs in to a controller to manage
the network, the actions taken in the management GUI are relayed to
the controller through a northbound API.
Note
RESTful APIs are covered in an upcoming section of this
chapter and in depth in Chapter 7, “RESTful API Requests
and Responses.”
Southbound APIs
If a network operator makes a change to a switch’s
configuration in the management software of the controller,
those changes will then be pushed down to the individual
devices using a southbound API. These devices can be routers,
switches, or even wireless access points. APIs interact with the
components of a network through the use of a programmatic
interface. Southbound APIs can modify more than just the data
plane on a device.
Note
Chapter 7 provides more detail on HTTP and CRUD
functions as well as response codes.
Note
Cisco DevNet is covered in Chapter 1, “Introduction to
Cisco DevNet Associate Certification.”
Basic Authentication
Basic authentication, illustrated in Figure 6-4, is one of the
simplest and most common authentication methods used in
APIs. The downfall of basic authentication is that the
credentials are passed unencrypted. This means that if the
transport is plain HTTP, it is possible to sniff the traffic and
capture the username and password with little to no effort. The
credentials are carried in the HTTP header as Base64-encoded
plaintext; Base64 is an encoding scheme, not encryption, and is
trivially reversed. For this reason, basic authentication is most
commonly used together with SSL or TLS to prevent such attacks.
API Keys
Some APIs use API keys for authentication. An API key is a
predetermined string that is passed from the client to the
server. It is intended to be a pre-shared secret and should not
be well known or easy to guess because it functions just like a
password. Anyone with this key can access the API in
question and can potentially cause a major outage and gain
access to critical or sensitive data. An API key can be passed
to the server in three different ways:
String
Request header
Cookie
Passed as a query string in the URL, an API key looks like this:

GET /something?api_key=abcdef12345
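Passed in a request header or in a cookie, the same key might look
like this (the header and cookie names vary by API; these are
illustrative):

GET /something HTTP/1.1
X-API-Key: abcdef12345

GET /something HTTP/1.1
Cookie: X-API-KEY=abcdef12345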
Custom Tokens
A custom token allows a user to enter his or her username and
password once and receive a unique auto-generated and
encrypted token. The user can then use this token to access
protected pages or resources instead of having to continuously
enter the login credentials. Tokens can be time bound and set
to expire after a specific amount of time has passed, thus
forcing users to reauthenticate by reentering their credentials.
A token is designed to show proof that a user has previously
authenticated. It simplifies the login process and reduces the
number of times a user has to provide login credentials. A
token is stored in the user’s browser and gets checked each
time the user tries to access information requiring
authentication. Once the user logs out of the web browser or
website, the token is destroyed so it cannot be compromised.
Figure 6-5 provides an overview of token-based authentication
between a client and a server.
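In practice, the client typically presents the token on every request
in an HTTP header; a common (illustrative) form is the following:

Authorization: Bearer <token>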
A SOAP message is an XML document that can contain the following four
elements:

Envelope
Header
Body
Fault (optional)
<?xml version="1.0"?>
<soap:Envelope
xmlns:soap="http://www.w3.org/2003/05/soap-
envelope" xmlns:m="http://
www.example.org">
<soap:Header>
SOAP Fault Code: Description

MustUnderstand: This is a child element of the SOAP header. If this
attribute is set, any information that was not understood triggers
this fault code.
<env:Envelope
xmlns:env="http://www.w3.org/2003/05/soap-envelope"
xmlns:m="http://www.example.org/timeouts"
xmlns:xml="http://www.w3.org/XML/1998/namespace">
<env:Body>
<env:Fault>
<env:Code>
<env:Value>env:Sender</env:Value>
<env:Subcode>
<env:Value>m:MessageTimeout</env:Value>
</env:Subcode>
</env:Code>
<env:Reason>
<env:Text xml:lang="en">Sender
Timeout</env:Text>
</env:Reason>
<env:Detail>
<m:MaxTime>P5M</m:MaxTime>
</env:Detail>
</env:Fault>
</env:Body>
</env:Envelope>
Note
The examples used in this chapter are all based on SOAP
version 1.2.
<?xml version="1.0"?>
<methodCall>
<methodName>examples.getStateName</methodName>
<params>
<param>
<value><i4>21</i4></value>
</param>
</params>
</methodCall>
You can see in Example 6-6 that the format of XML is very
similar to that of SOAP, making these messages simple for
humans to read and digest and also to build. Example 6-7
shows an example of an XML-RPC reply or response
message, in which the response to the request from
Example 6-6 is Illinois.
<?xml version="1.0"?>
<methodResponse>
<params>
<param>
<value><string>Illinois</string>
</value>
</param>
</params>
</methodResponse>
REST Tools: This section covers sequence diagrams and tools such as
Postman, curl, HTTPie, and the Python Requests library that are used to make
basic REST calls.
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter.
FOUNDATION TOPICS
API Types
APIs can be broadly classified into three categories, based on
the type of work that each one provides:
Private: A private API is for internal use only. This access type gives a
company the most control over its API.
Partner: A partner API is shared with specific business partners. This can
provide additional revenue streams without compromising quality.

Public: A public API is available to everyone. Making an API public
maximizes its reach, but it gives the provider the least control over
how the API is consumed.
HTTP Basics
A web browser is a classic example of an HTTP client.
Communication in HTTP centers around a concept called the
request/response cycle, in which the client sends the server a
request to do something. The server, in turn, sends the client a
response saying whether or not the server can do what the
client asked. Figure 7-5 provides a very simple illustration of
how a client requests data from a server and how the server
responds to the client.
Method
List of headers
Body
Server/host address
Resource
Parameters
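Consider, for example, a request URL like the following (the hostname
matches the myhouse.cisco.com examples used later in this chapter):

https://myhouse.cisco.com/api/rooms/livingroom/lights?state=ON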
As you can see, the server or host address is the unique server
name, /api/rooms/livingroom defines a resource to access, and
lights?state=ON is the parameter to send in order to take some
action.
Method
HTTP defines a set of request methods, outlined in Table 7-2.
A client can use one of these request methods to send a request
message to an HTTP server.
Method: Explanation

GET: A client can use a GET request to get a web resource from the
server.

HEAD: A client can use a HEAD request to get the header that a GET
request would have obtained. Because the header contains the
last-modified date of the data, it can be used to check against the
local cache copy.

POST: A client can use a POST request to post data or add new data to
the server.

PUT: A client can use a PUT request to ask a server to store or update
data.
Request URI: Specifies the path of the resource requested, which must begin
from the root / of the document base directory.
Request headers (optional): The client can use optional request headers (such
as Accept and Accept-Language) to negotiate with the server and ask the server
to deliver the preferred contents (such as in the language the client prefers).
Request URI: Specifies the path of the resource requested, which must begin
from the root / of the document base directory.
Request headers (optional): The client can use optional request headers, such
as Content-Type and Content-Length, to inform the server of the media type and
the length of the request body, respectively.
HTTP Headers
The HTTP headers and parameters carry a lot of the information
exchanged between client and server, including the following:
Request authorization
Response caching
Response cookies
Request Headers
The request headers appear as name:value pairs. Multiple
values, separated by commas, can be specified as follows:
request-header-name: request-header-value1,
request-header-value2, ...
Host: myhouse.cisco.com
Connection: Keep-Alive
Accept: image/gif, image/jpeg, */*
Response Headers
The response headers also appear as name:value pairs. As with
request headers, multiple values can be specified as follows:
response-header-name: response-header-value1,
response-header-value2, ...
Content-Type: text/html
Content-Length: 35
Connection: Keep-Alive
Accept-Charset: This request header tells the server which character sets are
acceptable by the client.
Cache-Control: This header is the cache policy defined by the server. For this
response, a cached response can be stored by the client and reused until the
time defined in the Cache-Control header.
Response Codes
The first line of a response message (that is, the status line)
contains the response status code, which the server generates
to indicate the outcome of the request. Each status code is a
three-digit number:
302 Found (or moved temporarily): This is the same as code 301, but
the new location is temporary in nature. The client should issue a new
request, but applications need not update the references.

408 Request timeout: The request sent to the server took longer than
the website's server was prepared to wait.

414 Request URI too large: The URI requested by the client is longer
than the server is willing to interpret.
XML
JSON
JSON, short for JavaScript Object Notation, is pronounced
like the name “Jason.” The JSON format is derived from
JavaScript object syntax, but it is entirely text based. It is a
key: value data format that is typically rendered in curly
braces {} and square brackets []. JSON is readable and
lightweight, and it is easy for humans to understand.
A key/value pair has a colon (:) that separates the key from the
value, and each such pair is separated by a comma in the
document or the response.
JSON keys are valid strings. The value of a key is one of the
following data types:
String
Number
Object
Array
Boolean
Null
{
"home": [
"this is my house",
"located in San Jose, CA"
],
"rooms": {
"living_room": "true",
"kitchen": "false",
"study_room": [
{
"size": "20x30"
},
{
"desk": true
},
{
"lights": "On"
}
]
}}
YAML
Dictionary mappings: These are similar to scalars but can contain nested
data, including other data types.
---
home:
- this is my house
- located in San Jose, CA
rooms:
living_room: 'true'
kitchen: 'false'
study_room:
- size: 20x30
- desk: true
- lights: 'On'
Webhooks
Webhooks are user-defined HTTP callbacks. A webhook is
triggered by an event, such as pushing code to a repository; when the
event occurs, the source application makes an HTTP request to the URL
configured for the webhook.
Sequence Diagrams
Now that you understand the fundamentals of REST APIs (request,
response, and webhooks), authentication, data exchange, and the
constraints that go with REST APIs, it's time to introduce sequence
diagrams. A sequence diagram models the interactions between the
different participants in an API transaction in the order in which
they occur.
REST CONSTRAINTS
REST defines six architectural constraints that make any web
service a truly RESTful API. These are constraints also known
as Fielding’s constraints (see
https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm).
They generalize the web’s architectural principles and
represent them as a framework of constraints or an
architectural style. These are the REST constraints:
Client/server
Stateless
Cache
Uniform interface
Layered system
Code on demand
Client/Server
The client and server exist independently. They must have no
dependency of any sort on each other. The only information
needed is for the client to know the resource URIs on the server.
Stateless
REST services have to be stateless. Each individual request
contains all the information the server needs to perform the
request and return a response, regardless of other requests
made by the same API user. The server should not need any
additional information from previous requests to fulfill the
current request. The URI identifies the resource, and the body
contains the state of the resource. A stateless service is easy to
scale horizontally, allowing additional servers to be added or
removed as necessary without worry about routing subsequent
requests to the same server. The servers can be further load
balanced as necessary.
Cache
With REST services, response data must be implicitly or
explicitly labeled as cacheable or non-cacheable. The service
indicates the duration for which the response is valid. Caching
helps improve performance on the client side and scalability
on the server side. If the client has access to a valid cached
response for a given request, it avoids repeating the same
request. Instead, it uses its cached copy. This helps alleviate
some of the server’s work and thus contributes to scalability
and performance.
Uniform Interface
The uniform interface is a contract for communication
between a client and a server. It is achieved through four
subconstraints:
Layered System
Code on Demand
Code on demand is an optional constraint that gives the client
flexibility by allowing it to download code. The client can
request code from the server, and then the response from the
server will contain some code, usually in the form of a script,
when the response is in HTML format. The client can then
execute that code.
URI path versioning: In this strategy, the version number of the API is
included in the URL path.
Custom headers: REST APIs are versioned by providing custom headers with
the version number included as an attribute. The main difference between this
approach and the two previous ones is that it doesn’t clutter the URI with
versioning information.
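For example, the same request under the two strategies might look like
this (the paths and header name are illustrative):

GET /api/v2/devices

GET /api/devices
Accept-Version: v2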
Pagination
Offset-based pagination
Note that the data returned by the service usually has links to
the next and the previous pages, as shown in Example 7-6.
GET /devices?offset=100&limit=10
{
"pagination": {
"offset": 100,
"limit": 10,
"total": 220,
},
"device": [
//...
],
"links": {
"next": "http://myhouse.cisco.com/devices?
offset=110&limit=10",
"prev": "http://myhouse.cisco.com/devices?
offset=90&limit=10"
}
}
Business impact: One approach to API rate limiting is to offer a free tier and a
premium tier, with different limits for each tier. Limits could be in terms of
sessions or in terms of number of APIs per day or per month. There are many
factors to consider when deciding what to charge for premium API access. API
providers need to consider the following when setting up API rate limits:
Do new calls and requests receive a particular error code and, if so, which
one?
Efficiency: Unregulated API requests usually and eventually lead to slow page
load times for websites. Not only does this leave customers with an
unfavorable opinion but it can lower your service rankings.
Cache your own data when you need to store specialized values or rapidly
review very large data sets.
REST TOOLS
Understanding and testing REST API architecture when
engaging in software development is crucial for any
development process. The following sections explore a few of
the most commonly used tools in REST API testing and how
to use some of their most important features. Based on this
information, you will get a better idea of how to determine
which one suits a particular development process the best.
Postman
One of the most intuitive and popular HTTP clients is a tool
called Postman (https://www.getpostman.com/downloads/). It
has a very simple user interface and is very easy to use, even if
you’re just starting out with RESTful APIs. It can handle the
following:
Writing tests (scripting requests with the use of dynamic variables, passing
data between requests, and so on)
It is possible to generate code for any REST API call that you
try in Postman. After a GET or POST call is made, you can
use the Generate Code option and choose the language you
prefer. Figure 7-15 shows an example of generating Python
code for a simple GET request.
curl
curl is an extensive command-line tool that can be downloaded
from https://curl.haxx.se. curl can be used on just about any
platform on any hardware that exists today. Regardless of what
you are running and where, the most basic curl commands just
work.
-d: This option allows you to pass data to the remote server. You can either
embed the data in the command or pass the data using a file.
-H: This option allows you to add an HTTP header to the request.
-c: This option stores data received by the server. You can reuse this data in
subsequent commands with the -b option.
-X: This option allows you to specify the HTTP method, which normally
defaults to GET.
{"args":{"test":"123"},"headers":{"x-forwarded-
proto":"https","host":"postman-
echo.com","accept":"*/*","user-
agent":"curl/7.54.0","x-forward-
ed-port":"443"},"url":"https://postman-
echo.com/get?test=123"}
{"args":{},"data":"hello DevNet","files":
{},"form":{},"headers":{"x-forwarded-
proto":"https","host":"postman-
echo.com","content-
length":"12","accept":"*/*","ca
che-control":"no-cache","content-
type":"text/plain","user-
agent":"curl/7.54.0","x-
forwarded-
port":"443"},"json":null,"url":"https://postman
-echo.com/post"}
HTTPie
HTTPie is a modern, user-friendly, and cross-platform
command-line HTTP client written in Python. It is designed to
make CLI interaction with web services easy and user friendly.
Its simple HTTP commands enable users to send HTTP
requests using intuitive syntax. HTTPie is used primarily for
testing, trouble-free debugging, and interacting with HTTP
servers, web services, and RESTful APIs. For further
information on HTTPie documentation, downloading, and
installation, see https://httpie.org/doc:
$ http https://postman-echo.com/get?test=123
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 179
Content-Type: application/json; charset=utf-8
Date: Tue, 27 Aug 2019 05:27:17 GMT
ETag: W/"ed-mB0Pm0M3ExozL3fgwq7UlH9aozQ"
Server: nginx
Vary: Accept-Encoding
set-cookie:
sails.sid=s%3AYCeNAWJG7Kap5wvKPg8HYlZI5SHZoqEf.
r7Gi96fe5g7%2FSp0jaJk%2Fa
VRpHZp3Oj5tDxiM8TPZ%2Bpc; Path=/; HttpOnly
{
"args": {
"test": "123"
},
"headers": {
"accept": "*/*",
"accept-encoding": "gzip, deflate",
"host": "postman-echo.com",
"user-agent": "HTTPie/1.0.2",
"x-forwarded-port": "443",
"x-forwarded-proto": "https"
},
"url": "https://postman-echo.com/get?
test=123"
}
You can use Requests with Python versions 2.7 and 3.x.
Requests is an external module, so it needs to be installed
before you can use it. Example 7-11 shows the command you
use to install the Requests package for Python:

pip install requests
import requests

url = "https://postman-echo.com/post"
payload = "hello DevNet"
headers = {'content-type': "text/plain"}

response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
import requests

url = "https://postman-echo.com/basic-auth"
headers = {'authorization': "Basic cG9zdG1hbjpwYXNzd29yZA=="}

response = requests.request("GET", url, headers=headers)
print(response.text)
API
REST
CRUD
YAML
JSON
webhook
Cisco Meraki: This section covers the Cisco Meraki platform and the REST
APIs it exposes.
Cisco DNA Center: This section covers Cisco DNA Center and the REST
APIs that it publicly exposes.
Cisco SD-WAN: This section covers Cisco SD-WAN and the REST APIs
exposed through Cisco vManage.
FOUNDATION TOPICS
WHAT IS AN SDK?
An SDK (software development kit) or devkit is a set of
software development tools that developers can use to create
software or applications for a certain platform, operating
system, computer system, or device. An SDK typically
contains a set of libraries, APIs, documentation, tools, sample
code, and processes that make it easier for developers to
integrate, develop, and extend the platform. An SDK is created
for a specific programming language, and it is very common to
have the same functionality exposed through SDKs in different
programming languages.
A good SDK:

Is easy to use
Is well documented

For the provider of the platform, offering an SDK also brings:

Quicker integration
Brand control
Increased security
Metrics
The starting point in exploring all the SDKs that Cisco has to
offer is https://developer.cisco.com. As you will see in the
following sections of this chapter and throughout this book,
CISCO MERAKI
Meraki became part of Cisco following its acquisition in 2012.
The Meraki portfolio is large, comprising wireless, switching,
security, and video surveillance products. The differentiating
factor for Meraki, compared to similar products from Cisco
and other vendors, is that management is cloud based. Explore
all the current Cisco Meraki products and offerings at
https://meraki.cisco.com.
Scanning API
Dashboard API
To get access to the Dashboard API, you first need to enable it.
Begin by logging in to the Cisco Meraki dashboard at
https://dashboard.meraki.com and enabling the API for your
organization.
Devices
Uplink
startingAfter: A value used to indicate that the returned data will start
immediately after this value
endingBefore: A value used to indicate that the returned data will end
immediately before this value
curl -I -X GET \
--url
'https://api.meraki.com/api/v0/organizations' \
-H 'X-Cisco-Meraki-API-Key:
15da0c6ffff295f16267f88f98694cf29a86ed87'
You can see in Example 8-1 that the response code for the
request is 302. This indicates a redirect to the URL value in
the Location header. Redirects like the one in Example 8-1 can
occur with any API call within the Dashboard API, including
POST, PUT, and DELETE. For GET calls, the redirect is
specified through a 302 status code, and for any non-GET
calls, the redirects are specified with 307 or 308 status codes.
When you specify the -I option for curl, only the headers of
the response are displayed to the user. At this point, you need
to run the curl command again but this time specify the
resource as https://n149.meraki.com/api/v0/organizations,
remove the -I flag, and add an Accept header to specify that
the response to the call should be in JSON format. The
command should look like this:
curl -X GET \
--url
'https://n149.meraki.com/api/v0/organizations' \
-H 'X-Cisco-Meraki-API-Key:
15da0c6ffff295f16267f88f98694cf29a86ed87'\
-H 'Accept: application/json'
[
  {
    "id" : "549236",
    "name" : "DevNet Sandbox"
  }
]
Now let’s look at how you can obtain the organization ID for
the Cisco DevNet Sandbox Meraki account by using Postman.
As mentioned in Chapter 7, Postman is a popular tool used to
explore APIs and create custom requests; it has extensive
built-in support for different authentication mechanisms,
headers, parameters, collections, environments, and so on. By
default, Postman has the Automatically Follow Redirects
option enabled in Settings, so you do not have to change the
https://api.meraki.com/api/v0/organizations resource as it is
already done in the background by Postman. If you disable this option,
you need to follow the redirect manually, as with curl.
Let’s explore the Meraki Dashboard API further and obtain the
networks associated with the DevNet Sandbox organization. If
you look up the API documentation at
https://developer.cisco.com/meraki/api/#/rest/api-
endpoints/networks/get-organization-networks, you see that in
order to obtain the networks associated with a specific
organization, you need to do a GET request to
/organizations/{organizationId}/networks. For the DevNet Sandbox
organization with ID 549236, the curl command looks like this:
curl -X GET \
  --url 'https://n149.meraki.com/api/v0/organizations/549236/networks' \
  -H 'X-Cisco-Meraki-API-Key: 15da0c6ffff295f16267f88f98694cf29a86ed87' \
  -H 'Accept: application/json'
The response from the API should contain a list of all the
networks that are part of the DevNet Sandbox organization
and should look similar to the output in Example 8-2.
[
{
"timeZone" : "America/Los_Angeles",
"tags" : " Sandbox ",
"organizationId" : "549236",
"name" : "DevNet Always On Read Only",
The output in Example 8-2 shows a list of all the networks that
are part of the DevNet Sandbox organization with an ID of
549236. For each network, the output contains the same
information found in the Meraki dashboard. You should make
a note of the first network ID returned by the API as you will
need it in the next step in your exploration of the Meraki
Dashboard API.
Now you can try to get the same information—a list of all the
networks that are part of the DevNet Sandbox organization—
by using Postman. As you’ve seen, Postman by default does
the redirection automatically, so you can specify the API
endpoint as
https://api.meraki.com/api/v0/organizations/549236/networks.
You need to make sure to specify the GET method, the X-
Cisco-Meraki-API-Key header for authentication, and the
Accept header, in which you specify that you would like the response
in JSON format.
Next, you can obtain a list of all devices that are part of the
network that has the name “DevNet Always On Read Only”
and ID L_646829496481099586. Much as in the previous
steps, you start by checking the API documentation to find the
API endpoint that will return this data to you. The API
resource that contains the information you are looking for is
/networks/{networkId}/devices, as you can see from the API
documentation at the following link:
https://developer.cisco.com/meraki/api/#/rest/api-
curl -X GET \
  --url 'https://n149.meraki.com/api/v0/networks/L_646829496481099586/devices' \
  -H 'X-Cisco-Meraki-API-Key: 15da0c6ffff295f16267f88f98694cf29a86ed87' \
  -H 'Accept: application/json'
And the response from the API should be similar to the one in
Example 8-3.
[
{
"wan2Ip" : null,
"networkId" : "L_646829496481099586",
"lanIp" : "10.10.10.106",
"serial" : "QYYY-WWWW-ZZZZ",
"tags" : " recently-added ",
"lat" : 37.7703718,
Following the process used so far, you can obtain the same
information but this time using Postman. Since the redirection
is automatically done for you, the API endpoint for Postman is
https://api.meraki.com/api/v0/networks/L_646829496481099586/devices.
You populate the two headers Accept and X-Cisco-Meraki-API-Key with
their respective values, select the GET method, and click Send.
#! /usr/bin/env python
from meraki_sdk.meraki_sdk_client import MerakiSdkClient

# Instantiate the API client with the Dashboard API key used earlier
# in this chapter (the MERAKI variable name is an assumption)
MERAKI = MerakiSdkClient('15da0c6ffff295f16267f88f98694cf29a86ed87')

PARAMS = {}
PARAMS["organization_id"] = "549236"  # Demo Organization "DevNet Sandbox"
Finally, you get the list of devices that are part of the network
and have the ID L_646829496481099586. Recall from earlier
that this ID is for the “DevNet Always on Read Only”
network. In this case, you use the
devices.get_network_devices() method of the Meraki API
client instance and store the result in the DEVICES variable.
You iterate over the DEVICES variable and, for each device
in the list, extract and print to the console the device model,
the serial number, the MAC address, and the firmware version.
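Those final steps might look like the following sketch (the method
name matches the prose above; the dictionary keys are assumptions
based on typical Dashboard API device attributes):

DEVICES = MERAKI.devices.get_network_devices('L_646829496481099586')
for DEVICE in DEVICES:
    print('Model: {}, Serial: {}, MAC: {}, Firmware: {}'.format(
        DEVICE['model'], DEVICE['serial'], DEVICE['mac'], DEVICE['firmware']))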
Intent API
Integration API
Multivendor SDK
Know Your Network category: This category contains API calls pertaining
to sites, networks, devices, and clients:
With the Site Hierarchy Intent API, you can get information about, create,
update, and delete sites as well as assign devices to a specific site. (Sites
within Cisco DNA Center are logical groupings of network devices based
on a geographic location or site.)
The Network Health Intent API retrieves data regarding network devices,
their health, and how they are connected.
The Client Health Intent API returns overall client health information for
both wired and wireless clients.
The Client Detail Intent API returns detailed information about a single
client.
The Site Profile Intent API gives you the option to provision NFV and
ENCS devices as well as retrieve the status of the provisioning activities.
The Plug and Play (PnP) API enables you to manage all PnP-related
workflows. With this API, you can create, update, and delete PnP
workflows and PnP server profiles, claim and unclaim devices, add and
remove virtual accounts, and retrieve information about all PnP-related
tasks.
Operational Tools category: This category includes APIs for the most
commonly used tools in the Cisco DNA Center toolbelt:
The Command Runner API enables the retrieval of all valid keywords
that Command Runner accepts and allows you to run read-only
commands on devices to get their real-time configuration.
The File API enables you to retrieve files such as digital certificates,
maps, and SWIM files from Cisco DNA Center.
The Task API provides information about the network actions that are
being run asynchronously. Each of these background actions can take
from seconds to minutes to complete, and each has a task associated with
it. You can query the Task API about the completion status of these tasks,
get the task tree, retrieve tasks by their IDs, and so on.
The Tag API gives you the option of creating, updating, and deleting tags
as well as assigning tags to specific devices. Tags are very useful in Cisco
DNA Center; they are used extensively to group devices by different
criteria. You can then apply policies and provision and filter these groups
of devices based on their tags.
The Cisco DNA Center platform APIs are rate limited to five
API requests per minute.
So far in this section, we’ve covered all the APIs and the
multivendor SDK offered by Cisco DNA Center. Next, we will
start exploring the Intent API, using Cisco DNA Center
version 1.3 for the rest of the chapter. As API resources and
endpoints exposed by the Cisco DNA Center platform might
change in future versions of the software, it is always best to
start exploring the API documentation for any Cisco product at
https://developer.cisco.com/docs/dna-center/api/1-3-0-x/.
For this section, you can use the always-on DevNet Sandbox
for Cisco DNA Center at https://sandboxdnac2.cisco.com. The
username for this sandbox is devnetuser, and the password is
Cisco123!. You need to get authorized to the API and get the
token that you will use for all subsequent API calls. The Cisco
DNA Center platform API authorization is based on basic
auth. Basic auth, as you learned in Chapter 7, is an
authorization type that requires a username and password to
access an API endpoint. In the case of Cisco DNA Center, the
username and password mentioned previously are base-64
encoded and then transmitted to the API service in the
Authorization header. There are many online services that can
do both encoding and decoding of Base64 data for you, or as a
fun challenge you can look up how to do it manually. The
username devnetuser and the password Cisco123! become
ZGV2bmV0dXNlcjpDaXNjbzEyMyE= when they are Base64 encoded. The only
missing component is the resource that you need to send the
authorization request to; as the following curl command shows, it is
/dna/system/api/v1/auth/token:
curl -X POST \
https://sandboxdnac2.cisco.com/dna/system/api/v1/
auth/token \
-H 'Authorization: Basic
ZGV2bmV0dXNlcjpDaXNjbzEyMyE='
The result should be JSON formatted with the key Token and a
value containing the actual authorization token. It should look
similar to the following:
{"Token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.ey
JzdWIiOiI1Y2U3M-
TJiMDhlZTY2MjAyZmEyZWI4ZjgiLCJhdXRoU291cmNlIjoiaW
50ZXJuYWwiL-
The body of the response for the Postman request should look
as follows:
"Token":
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI
1Y2U3MTJiMDhlZ-
TY2MjAyZmEyZWI4ZjgiLCJhdXRoU291cmNlIjoiaW50ZXJuYW
wiLCJ0ZW5hbnROY-
W1lIjoiVE5UMCIsInJvbGVzIjpbIjViNmNmZGZmNDMwOTkwMD
A4OWYwZmYzNyJdL-
CJ0ZW5hbnRJZCI6IjViNmNmZGZjNDMwOTkwMDA4OWYwZmYzMC
IsImV4cCI6MTU2N-
jU5NzE4OCwidXNlcm5hbWUiOiJkZXZuZXR1c2VyIn0.ubXSmZ
YrI-yoCWmzCSY486y-
HWhwdTlnrrWqYip5lv6Y"
As with the earlier curl example, this token will be used in all
subsequent API calls performed in the rest of this chapter. The
Let’s now get a list of all the network devices that are being
managed by the instance of Cisco DNA Center that is running
in the always-on DevNet Sandbox you’ve just authorized with.
If you verify the Cisco DNA Center API documentation on
https://developer.cisco.com/docs/dna-center/api/1-3-0-x/, you
can see that the API resource that will return a complete list of
all network devices managed by Cisco DNA Center is
/dna/intent/api/v1/network-device. Figure 8-7 shows the online
documentation for Cisco DNA Center version 1.3.
With all this information in mind, you can craft the curl
request to obtain a list of all the network devices managed by
the Cisco DevNet always-on DNA Center Sandbox. The
complete URL is
https://sandboxdnac2.cisco.com/dna/intent/api/v1/network-
device. You need to retrieve information through the API, so
we need to do a GET request; don’t forget the X-Auth-Token
curl -X GET \
https://sandboxdnac2.cisco.com/dna/intent/api/v1/
network-device \
-H 'X-Auth-Token:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.
eyJzdWIiOiI1Y2U3MTJiMDhlZTY2MjAyZmEyZWI4ZjgiLCJhd
XRoU291c-
mNlIjoiaW50ZXJuYWwiLCJ0ZW5hbnROYW1lIjoiVE5UMCIsIn
JvbGVzIjpbI-
jViNmNmZGZmNDMwOTkwMDA4OWYwZmYzNyJdLCJ0ZW5hbnRJZC
I6IjViNmNmZG-
ZjNDMwOTkwMDA4OWYwZmYzMCIsImV4cCI6MTU2NjYwODAxMSw
idXNlcm5hbWUiOi-
JkZXZuZXR1c2VyIn0.YXc_2o8FDzSQ1YBhUxUIoxwzYXXWYeN
JRkB0oKBlIHI'
{
"response" : [
{
"type" : "Cisco 3504 Wireless LAN
Controller",
For each device, you can see extensive information such as the
hostname, uptime, serial number, software version,
management interface IP address, reachability status, hardware
platform, and role in the network. You can see here the power
of the Cisco DNA Center platform APIs. With one API call,
you were able to get a complete status of all devices in the
network. Without a central controller like Cisco DNA Center,
it would have taken several hours to connect to each device
individually and run a series of commands to obtain the same
information.
Now you will see how to obtain the same information you just
got with curl but now using Postman. The same API endpoint
URL is used:
https://sandboxdnac2.cisco.com/dna/intent/api/v1/network-
device. In this case, it is a GET request, and the X-Auth-Token
header is specified under the Headers tab and populated with a
valid token. If you click Send and there aren’t any mistakes
with the request, the status code should be 200 OK, and the
body of the response should be very similar to that obtained
with the curl request. Figure 8-8 shows how the Postman
interface should look in this case.
Now you can try to obtain some data about the clients that are
connected to the network managed by Cisco DNA Center.
Much like network devices, network clients have associated
health scores, provided through the Assurance feature to get a
quick overview of client network health. This score is based
on several factors, including onboarding time, association
time, SNR (signal-to-noise ratio), and RSSI (received signal
strength indicator) values for wireless clients, authentication
time, connectivity and traffic patterns, and number of DNS
requests and responses. In the API documentation, you can see
that the resource providing the health status of all clients
connected to the network is /dna/intent/api/v1/client-health.
This API call requires a parameter to be specified when
performing the call. This parameter, called timestamp,
represents the UNIX epoch time in milliseconds. UNIX epoch
time is a system for describing a point in time since January 1,
1970, not counting leap seconds. It is extensively used in
UNIX and many other operating systems. The timestamp used in the
examples that follow is 1566506489000.
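You can generate such a millisecond timestamp for the current moment
with Python's time module; a quick sketch:

import time

timestamp = int(time.time() * 1000)   # UNIX epoch time in milliseconds
print(timestamp)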
With the information you now have, you can build the API
endpoint to process the API call:
https://sandboxdnac2.cisco.com/dna/intent/api/v1/client-
health?timestamp=1566506489000. The authorization token
also needs to be included in the call as a value in the X-Auth-
Token header. The curl command should look as follows:
curl -X GET \
https://sandboxdnac2.cisco.com/dna/intent/api/v1/
client-
health?timestamp=1566506489000 \
-H 'X-Auth-Token:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.
eyJzdWIiOiI1Y2U3MTJiMDhlZTY2MjAyZmEyZWI4ZjgiLCJhd
XRoU291c
mNlIjoiaW50ZXJuYWwiLCJ0ZW5hbnROYW1lIjoiVE5UMCIsIn
JvbGVzIjpbI
jViNmNmZGZmNDMwOTkwMDA4OWYwZmYzNyJdLCJ0ZW5hbnRJZC
I6IjViNmNmZG-
ZjNDMwOTkwMDA4OWYwZmYzMCIsImV4cCI6MTU2NjYxODkyOCw
idXNlcm5hbWUiO
From this response, you can see that there are a total of 82
clients in the network, and the average health score for all of
them is 27. To further investigate why the health scores for
some of the clients vary, you can look into the response to the
/dna/intent/api/v1/client-detail call. This API call takes as
input parameters the timestamp and the MAC address of the
client, and it returns extensive data about the status and health
of that specific client at that specific time.
Now you can try to perform the same API call but this time
with Postman. The API endpoint stays the same:
https://sandboxdnac2.cisco.com/dna/intent/api/v1/client-
health?timestamp=1566506489000. In this case, you are trying
to retrieve information from the API, so it will be a GET call,
and the X-Auth-Token header contains a valid token value.
Notice that the Params section of Postman gets automatically
populated with a timestamp key, with the value specified in the
URL: 1566506489000. Click Send, and if there aren’t any
errors with the API call, the body of the response should be
very similar to the one obtained previously with curl. The
Postman window for this example should look as shown in
Figure 8-9.
{
"response" : [ {
"siteId" : "global",
"scoreDetail" : [ {
"scoreCategory" : {
"scoreCategory" : "CLIENT_TYPE",
"value" : "ALL"
},
"scoreValue" : 27,
"clientCount" : 82,
"clientUniqueCount" : 82,
"starttime" : 1566506189000,
"endtime" : 1566506489000,
"scoreList" : [ ]
}, ... output omitted
}
#! /usr/bin/env python
from dnacentersdk import api
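# a sketch of connecting with the Cisco DNA Center SDK and listing the
# managed devices (the URL and credentials are the sandbox values used
# earlier in this chapter; treat the attribute names as illustrative)
DNAC = api.DNACenterAPI(base_url='https://sandboxdnac2.cisco.com',
                        username='devnetuser',
                        password='Cisco123!')
DEVICES = DNAC.devices.get_device_list()
for DEVICE in DEVICES.response:
    print(DEVICE.hostname, '->', DEVICE.managementIpAddress)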
CISCO SD-WAN
Cisco SD-WAN (Software-Defined Wide Area Network) is a
cloud-first architecture for deploying WAN connectivity.
Wide-area networks have been deployed for a long time, and
many lessons and best practices have been learned throughout
the years. Applying all these lessons to software-defined
networking (SDN) resulted in the creation of Cisco SD-WAN.
An important feature of SDN is the separation of the control
plane from the data plane.
Historically, the control plane and data plane were part of the
network device architecture, and they worked together to
determine the path that the data traffic should take through the
network and how to move this traffic as fast as possible from
its source to its destination. As mentioned previously,
software-defined networking (SDN) suggests a different
approach.
vSmart: Cisco vSmart is the brains of the centralized control plane for the
overlay SD-WAN network. It maintains a centralized routing table and
centralized routing policy that it propagates to all the network Edge devices
through permanent DTLS tunnels.
vEdge: Cisco vEdge routers, as the name implies, are Edge devices that are
located at the perimeter of the fabric, such as in remote offices, data centers,
branches, and campuses. They represent the data plane and bring the whole
fabric together and route traffic to and from their site across the overlay
network.
Let’s explore the Cisco vManage REST API next. The API
documentation can be found at https://sdwan-
docs.cisco.com/Product_Documentation/Command_Reference
/Command_Reference/vManage_REST_APIs. At this link,
you can find all the information needed on how to interact
with the REST API, all the resources available, and extensive
explanations.
curl -c - -X POST -k \
https://sandboxsdwan.cisco.com:8443/j_security_ch
eck \
-H 'Content-Type: application/x-www-form-
urlencoded' \
-d 'j_username=devnetuser&j_password=Cisco123!'
# https://curl.haxx.se/docs/http-cookies.html
The status code of the response should be 200 OK, the body
should be empty, and the JSESSIONID cookie should be
stored under the Cookies tab. The advantage with Postman is
that it automatically saves the JSESSIONID cookie and reuses
it in all API calls that follow this initial authorization request.
With curl, in contrast, you have to pass in the cookie value
manually. To see an example, you can try to get a list of all the
devices that are part of this Cisco SD-WAN fabric. According
to the documentation, the resource that will return this
information is /dataservice/device. It will have to be a GET request,
and the JSESSIONID cookie value must be passed along:
curl -X GET -k \
  https://sandboxsdwan.cisco.com:8443/dataservice/device \
  -H 'Cookie: JSESSIONID=v9QcTVL_ZBdIQZRsI2V95vBi7Bz47IMxRY3XAYA6.4854266f-a8ad-4068-9651-d4e834384f51'
Example 8-8 List of Devices That Are Part of the Cisco SD-
WAN Fabric
{
... omitted output
"data" : [
{
"state" : "green",
"local-system-ip" : "4.4.4.90",
"status" : "normal",
"latitude" : "37.666684",
"version" : "18.3.1.1",
"model_sku" : "None",
"connectedVManages" : [
"\"4.4.4.90\""
],
"statusOrder" : 4,
While exploring the Cisco SD-WAN REST API, let’s get a list
of all the device templates that are configured on the Cisco vManage
server. According to the documentation, the resource is
/dataservice/template/device:
curl -X GET -k \
  https://sandboxsdwan.cisco.com:8443/dataservice/template/device \
  -H 'Cookie: JSESSIONID=v9QcTVL_ZBdIQZRsI2V95vBi7Bz47IMxRY3XAYA6.4854266f-a8ad-4068-9651-d4e834384f51'
{
"data" : [
{
"templateDescription" : "VEDGE BASIC
TEMPLATE01",
"lastUpdatedOn" : 1538865915509,
"templateAttached" : 15,
"deviceType" : "vedge-cloud",
"templateId" : "72babaf2-68b6-4176-
92d5-fa8de58e19d8",
"configType" : "template",
"devicesAttached" : 0,
Next, let’s use Python to build a script that will go through the
same steps: Log in to vManage, get a list of all the devices in
the SD-WAN fabric, and get a list of all device templates
available. No SDK will be used in this case; this will help you
see the difference between this code and the Python code you
used earlier in this chapter. Since no SDK will be used, all the
API resources, payloads, and handling of data will have to be
managed individually.
#! /usr/bin/env python
import json
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Sandbox vManage details used earlier in this chapter; the login
# variable names are assumptions where the original listing is elided
VMANAGE_IP = 'sandboxsdwan.cisco.com'
USERNAME = 'devnetuser'
PASSWORD = 'Cisco123!'

BASE_URL_STR = 'https://{}:8443/'.format(VMANAGE_IP)

# Authenticate and let the session object keep the JSESSIONID cookie
SESS = requests.session()
LOGIN_URL = BASE_URL_STR + 'j_security_check'
LOGIN_DATA = {'j_username': USERNAME, 'j_password': PASSWORD}
SESS.post(url=LOGIN_URL, data=LOGIN_DATA, verify=False)

# Retrieve all the devices in the fabric
DEVICE_URL = BASE_URL_STR + 'dataservice/device'
DEVICE_RESPONSE = SESS.get(DEVICE_URL, verify=False)
DEVICE_ITEMS = json.loads(DEVICE_RESPONSE.content)['data']

print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:16s}{7:1}{8:7s}'\
    .format("Host-Name", "|", "Device Model", "|", "Device ID", \
    "|", "System IP", "|", "Site ID"))
print('-'*105)
# the dictionary keys below are assumptions based on typical vManage
# device data
for ITEM in DEVICE_ITEMS:
    print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:16s}{7:1}{8:7s}'\
        .format(ITEM['host-name'], "|", ITEM['device-model'], "|", \
        ITEM['uuid'], "|", ITEM['system-ip'], "|", str(ITEM['site-id'])))

# Retrieve all the device templates
TEMPLATE_URL = BASE_URL_STR + 'dataservice/template/device'
TEMPLATE_RESPONSE = SESS.get(TEMPLATE_URL, verify=False)
TEMPLATE_ITEMS = json.loads(TEMPLATE_RESPONSE.content)['data']

print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:16s}{7:1}{8:7s}'\
    .format("Template Name", "|", "Device Model", "|", "Template ID", \
    "|", "Attached devices", "|", "Template Version"))
print('-'*105)
for ITEM in TEMPLATE_ITEMS:
    print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:16s}{7:1}{8:7s}'\
        .format(ITEM['templateName'], "|", ITEM['deviceType'], "|", \
        ITEM['templateId'], "|", str(ITEM['devicesAttached']), "|", \
        str(ITEM['templateAttached'])))
The code specifies the API resource that will return a list of all
the devices in the SD-WAN fabric: dataservice/device. The
complete URL to retrieve the devices in the fabric is built on
the next line by combining the base URL with the new
resource. The DEVICE_URL variable will look like
https://sandboxsdwan.cisco.com:8443/dataservice/device.
Next, the same session that was established earlier is used to
perform a GET request to the device_url resource. The result
of this request is stored in the variable aptly named
DEVICE_RESPONSE, which contains the same JSON-
formatted data that was obtained in the previous curl and
Postman requests, with extensive information about all the
devices that are part of the SD-WAN fabric. From that JSON
data, only the list of devices that are values of the data key are
extracted and stored in the DEVICE_ITEMS variable.
Cisco UCS Manager: This section covers Cisco UCS Manager and the public
APIs that come with it.
Cisco UCS Director: This section goes over Cisco UCS Director and its APIs.
Cisco Intersight: This section introduces Cisco Intersight and its REST API
interface.
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
10. What does the Cisco Intersight REST API key contain?
1. keyId and keySecret
2. token
3. accessKey and secretKey
4. cookie
FOUNDATION TOPICS
CISCO ACI
Cisco Application Centric Infrastructure (ACI) is the SDN-
based solution from Cisco for data center deployment,
management, and monitoring. The solution is based on two
components: the Cisco Nexus family of switches and Cisco
Application Policy Infrastructure Controller (APIC).
APICs: These are the clustered fabric controllers that provide management,
application, and policy deployment for the fabric.
Tenants: Tenants represent containers for policies that are grouped for a
specific access domain. The following four kinds of tenants are currently
supported by the system:
User: User tenants are needed by the fabric administrator to cater to the
needs of the fabric users.
Common: The common tenant is provided by the system and can be configured
by the fabric administrator. It contains policies and resources that can be
shared by all tenants.
Infra: The infra tenant is provided by the system and can be configured
by the fabric administrator. It contains policies that manage the operation
of infrastructure resources.
Mgmt: The mgmt tenant is provided by the system and can be configured by
the fabric administrator. It contains policies for in-band and out-of-band
management access to the fabric nodes.
Access policies: These policies control the operation of leaf switch access
ports, which provide fabric connectivity to resources such as virtual machine
hypervisors, compute devices, storage devices, and so on. Several access
policies come built in with the ACI fabric by default. The fabric administrator
can tweak these policies or create new ones, as necessary.
Fabric policies: These policies control the operation of the switch fabric ports.
Configurations for time synchronization, routing protocols, and domain name
resolution are managed with these policies.
The hierarchical policy model fits very well with the REST
API interface. As the ACI fabric performs its functions, the
API reads and writes to objects in the MIT. The API resources
represented by URLs map directly into the distinguished
names that identify objects in the MIT.
Next, let’s explore the building blocks of the Cisco ACI fabric
policies.
Shared: A subnet can be shared and exposed in multiple VRF instances in the
same tenant or across tenants as part of a shared service.
Filters are the objects that define protocols and port numbers
used in contracts. Filter objects can contain multiple protocols
and ports, and contracts can consume multiple filters.
Since the REST API maps one to one to the MIT, defining the
URI to access a certain resource is important. First, you need
to define the protocol (http or https) and the hostname or IP
address of the APIC instance. Next, /api indicates that the API
is invoked. After that, the next part of the URI specifies
whether the operation will be for an MO or a class. The next
component defines either the distinguished name for
MO-based queries or the class name for class-based queries.
The final mandatory part of the request is the encoding format,
which can be either XML or JSON. (The APIC ignores
Content-Type and other headers, so the method just explained
is the only one accepted.) The complete Cisco ACI REST API
documentation with information on how to use the API, all of
the API endpoints, and operations available can be found at
https://developer.cisco.com/docs/aci/.
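To illustrate these rules with two hypothetical queries, an MO-based request
for a tenant object and a class-based request for all tenant objects would use
URIs similar to the following (the tenant name is an assumption for the
example):

https://apic/api/mo/uni/tn-ExampleTenant.json
https://apic/api/class/fvTenant.xml

Authentication itself is performed by posting the user credentials to the
aaaLogin endpoint, as the following curl command does: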
curl -k -X POST \
  https://sandboxapicdc.cisco.com/api/aaaLogin.json \
-d '{
"aaaUser" : {
"attributes" : {
"name" : "admin",
"pwd" : "ciscopsdt"
}
}
}'
{
  "totalCount" : "1",
  "imdata" : [
    {
      "aaaLogin" : {
        "attributes" : {
          "remoteUser" : "false",
          "firstLoginTime" : "1572128727",
          "version" : "4.1(1k)",
          "buildTime" : "Mon May 13 16:27:03 PDT 2019",
          "siteFingerprint" : "Z29SSG/BAVFY04Vv",
          "guiIdleTimeoutSeconds" :
... omitted output
Next, let’s get a list of all the ACI fabrics that are being
managed by this APIC instance. The URI for this GET
operation is
https://sandboxapicdc.cisco.com/api/node/class/fabricPod.json
, and the APIC-cookie header, are specified for authentication
purposes. The curl request should look similar to the one
curl -k -X GET \
  https://sandboxapicdc.cisco.com/api/node/class/fabricPod.json \
  -H 'Cookie: APIC-cookie=pRgAAAAAAAAAAAAAAAAAAGNPf39fZd71fV6DJWidJoqxJmHt1Fephmw6Q0I5byoafVMZ29a6pL+4u5krJ0G2Jdrvl0l2l9cMx/o0ciIbVRfFZruCEgqsPg8+dbjb8kWX02FJLcw9Qpsg98s5QfOaMDQWHSyqwOObKOGxxglLeQbkgxM8/fgOAFZxbKHMw0+09ihdiu7jTb7AAJVZEzYzXA=='
{
  "totalCount" : "1",
  "imdata" : [
    {
      "fabricPod" : {
        "attributes" : {
          "id" : "1",
          "monPolDn" : "uni/fabric/monfab-default",
curl -k -X GET \
  https://sandboxapicdc.cisco.com/api/node/class/topology/pod-1/topSystem.json \
  -H 'Cookie: APIC-cookie=pRgAAAAAAAAAAAAAAAAAAGNPf39fZd71fV6DJWidJoqxJmHt1Fephmw6Q0I5byoafVMZ29a6pL+4u5krJ0G2Jdrvl0l2l9cMx/o0ciIbVRfFZruCEgqsPg8+dbjb8kWX02FJLcw9Qpsg98s5QfOaMDQWHSyqwOObKOGxxglLeQbkgxM8/fgOAFZxbKHMw0+09ihdiu7jTb7AAJVZEzYzXA=='
{
  "imdata" : [
    {
      "topSystem" : {
        "attributes" : {
          "role" : "controller",
          "name" : "apic1",
          "fabricId" : "1",
          "inbMgmtAddr" : "192.168.11.1",
          "oobMgmtAddr" : "10.10.20.14",
          "systemUpTime" : "00:04:33:38.000",
          "siteId" : "0",
          "state" : "in-service",
          "fabricDomain" : "ACI Fabric1",
          "dn" : "topology/pod-1/node-1/sys",
          "podId" : "1"
        }
      }
    },
    {
      "topSystem" : {
        "attributes" : {
          "state" : "in-service",
From the response, we can see that this ACI fabric is made up
of four devices: an APIC, two leaf switches, and one spine
switch. Extensive information is returned about each device in
this response, but it was modified to extract and display just a
subset of that information. You are encouraged to perform the
same steps and explore the APIC REST API either using the
Cisco DevNet sandbox resources or your own instance of
APIC.
#! /usr/bin/env python
import sys
import acitoolkit.acitoolkit as aci

APIC_URL = 'https://sandboxapicdc.cisco.com'
USERNAME = 'admin'
PASSWORD = 'ciscopsdt'

# Login to APIC
SESSION = aci.Session(APIC_URL, USERNAME, PASSWORD)
RESP = SESSION.login()
if not RESP.ok:
    print('Could not login to APIC')
    sys.exit()

# Retrieve all the endpoints known to the fabric.
ENDPOINTS = aci.Endpoint.get(SESSION)

print('{0:19s}{1:14s}{2:10s}{3:8s}{4:17s}{5:10s}'.format(
    "MAC ADDRESS",
    "IP ADDRESS",
    "ENCAP",
    "TENANT",
    "APP PROFILE",
    "EPG"))
print('-'*80)

# Walk up the object tree from each endpoint to its EPG, application
# profile, and tenant (the standard acitoolkit pattern).
for EP in ENDPOINTS:
    EPG = EP.get_parent()
    APP_PROFILE = EPG.get_parent()
    TENANT = APP_PROFILE.get_parent()
    print('{0:19s}{1:14s}{2:10s}{3:8s}{4:17s}{5:10s}'.format(
        EP.mac, EP.ip, EP.encap, TENANT.name, APP_PROFILE.name, EPG.name))
UCS MANAGER
Cisco Unified Computing System (UCS) encompasses most of
the Cisco compute products. The first UCS products were
released in 2009, and they quickly established themselves as
leaders in the data center compute and server market. Cisco
UCS provides a unified server solution that brings together
compute, storage, and networking into one system. While
initially the UCS solution took advantage of network-attached
storage (NAS) or storage area networks (SANs) in order to
support requirements for large data stores, with the release of
Cisco HyperFlex and hyperconverged servers, large storage
data stores are now included with the UCS solution.
DN = {RN}/{RN}/{RN}/{RN}...
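For example, the DN sys/chassis-4/blade-8, which appears in the query output
later in this section, is built by chaining the RNs sys, chassis-4, and
blade-8.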
Classes: Classes define the properties and states of objects in the MIT.
Methods: Methods define the actions that the API performs on one or more
objects.
Types: Types are object properties that map values to the object state.
Query methods: These methods are used to obtain information on the current
configuration state of an object. Two of them, configResolveDn and
configFindDnsByClassId, are used later in this section.
Since the query methods available with the XML API can
return large sets of data, filters are supported to limit this
output to subsets of information. Four types of filters are
available:
Simple filters: These true/false filters limit the result set of objects with the
Boolean value of True or False.
Property filters: These filters use the values of an object's properties as the
inclusion criteria in a result set (for example, equal filter, not equal filter,
greater than filter).
Composite filters: These filters are composed of two or more component filters;
the AND and OR filters are examples of composite filters.
Modifier filter: This filter changes the results of a contained filter. Currently
only the NOT filter is supported. This filter negates the result of a contained
filter.
The whole MIT tree can be explored, and also queries for
specific DNs can be run from this interface. Additional
developer resources regarding Cisco UCS Manager can be
found on the Cisco DevNet website, at
https://developer.cisco.com/site/ucs-dev-center/.
Next, let’s explore the Cisco UCS Manager XML API. The
complete documentation of the Cisco UCS Manager
information model for different releases can be found at
https://developer.cisco.com/site/ucs-mim-ref-api-picker/. At
this site, you can find all the managed objects, all the methods, and all the
types in the information model. To log in, the aaaLogin method is used.
Assuming the UCS Manager instance at 10.10.20.110 that is used throughout this
section and its XML API endpoint /nuova, the curl command looks similar to the
following:

curl -X POST http://10.10.20.110/nuova \
  -H 'Content-Type: application/xml' \
  -d '<aaaLogin inName="ucspe" inPassword="ucspe"></aaaLogin>'
aaaLogin specifies the method used to log in, the "yes" value
confirms that this is a response, outCookie provides the
session cookie, outRefreshPeriod specifies the recommended
cookie refresh period (where the default is 600 seconds), and
the outPriv value specifies the privilege level associated with
the account.
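A successful login returns a response similar to the following sketch (the
attribute values are illustrative; the outCookie shown here is the one reused
in the requests that follow):

<aaaLogin cookie="" response="yes" outCookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be" outRefreshPeriod="600" outPriv="admin" />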
Next, let’s get a list of all the objects that are part of the
compute class and are being managed by this instance of Cisco
UCS Manager. In order to accomplish this, we can use the
configFindDnsByClassId method. This method finds
distinguished names and returns them sorted by class ID. The
curl command should look similar to the following one (again assuming the
/nuova XML API endpoint):

curl -X POST http://10.10.20.110/nuova \
  -H 'Content-Type: application/xml' \
  -d '<configFindDnsByClassId
       classId="computeItem"
       cookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be" />'
<configFindDnsByClassId
  cookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be"
  response="yes" classId="computeItem">
    <outDns>
        <dn value="sys/chassis-4/blade-8"/>
        <dn value="sys/chassis-5/blade-8"/>
        <dn value="sys/chassis-6/blade-8"/>
        <dn value="sys/chassis-6/blade-1"/>
        <dn value="sys/chassis-3/blade-1"/>
        ... omitted output
        <dn value="sys/rack-unit-9"/>
        <dn value="sys/rack-unit-8"/>
        <dn value="sys/rack-unit-7"/>
        <dn value="sys/rack-unit-6"/>
        <dn value="sys/rack-unit-5"/>
        <dn value="sys/rack-unit-4"/>
<configResolveDn dn="sys/chassis-4/blade-8"
  cookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be" response="yes">
    <outConfig>
        <computeBlade
          adminPower="policy" adminState="in-service" assetTag=""
          assignedToDn="" association="none" availability="available"
          availableMemory="49152" chassisId="4" checkPoint="discovered"
          connPath="A,B" connStatus="A,B" descr="" discovery="complete"
          discoveryStatus="" dn="sys/chassis-4/blade-8" fltAggr="0"
          fsmDescr="" fsmFlags="" fsmPrev="DiscoverSuccess"
          fsmProgr="100" fsmRmtInvErrCode="none" fsmRmtInvErrDescr=""
          fsmRmtInvRslt="" fsmStageDescr="" fsmStamp="2019-11-06T04:02:03.896"
          fsmStatus="nop" fsmTry="0" intId="64508"
          kmipFault="no" kmipFaultDescription="" lc="undiscovered"
          lcTs="1970-01-01T00:00:00.000" localId="" lowVoltageMemory="not-applicable"
While interacting with the Cisco UCS Manager XML API this
way is possible, you can see that it becomes cumbersome very
quickly. The preferred way of working with the XML API is
through the Cisco UCS Python SDK.
Next, let’s explore the Cisco UCS Python SDK and see how to
connect to a Cisco UCS Manager instance, retrieve a list of all
the compute blades in the system, and extract specific
information from the returned data. The sample Python code is
built in Python 3.7.4 using version 0.9.8 of the ucsmsdk
module.
HANDLE.logout(): This method is used to log out from the Cisco UCS
Manager.
#! /usr/bin/env python
from ucsmsdk.ucshandle import UcsHandle

HANDLE = UcsHandle("10.10.20.110", "ucspe", "ucspe")

# Log in to UCS Manager and retrieve all objects of class ComputeBlade.
HANDLE.login()
BLADES = HANDLE.query_classid("ComputeBlade")

print('{0:23s}{1:8s}{2:12s}{3:14s}{4:6s}'.format(
    "DN",
    "SERIAL",
    "ADMIN STATE",
    "MODEL",
    "TOTAL MEMORY"))
print('-'*70)
# Property names mirror the XML attributes seen earlier (dn, serial,
# adminState, model, totalMemory).
for BLADE in BLADES:
    print('{0:23s}{1:8s}{2:12s}{3:14s}{4:6s}'.format(
        BLADE.dn, BLADE.serial, BLADE.admin_state, BLADE.model,
        str(BLADE.total_memory)))

HANDLE.logout()
Create, clone, and deploy service profiles and templates for all Cisco UCS
servers and compute applications.
Manage, monitor, and report on data center components such as Cisco UCS
domains or Cisco Nexus devices.
Add the ability to control new types of devices with Cisco UCS Director.
https://Cisco_UCS_Director/app/api/rest?formatType=json&opName=operationName&opData=operationData
where
opName: This is the API operation name that is associated with the request
(for example, userAPIGetMyLoginProfile), as explored later in this chapter.
opData: This contains the parameters or the arguments associated with the
operation. Cisco UCS Director uses JSON encoding for the parameters. If an
operation doesn’t require any parameters, the empty set {} should be used.
When building the URL, escape characters should be encoded as appropriate.
curl -k -L -g -X GET \
  'http://10.10.10.66/app/api/rest?formatType=json&opName=userAPIGetMyLoginProfile&opData={}' \
  -H 'X-Cloupia-Request-Key: <admin user access key>'

For this request, the -g parameter disables the curl check for
nested braces {}, the -k or --insecure parameter allows curl to
proceed and operate even if the server uses self-signed SSL
certificates, and the -L parameter allows curl to follow the
redirects sent by the server. The URL for the request follows
the requirements discussed previously, using the /app/api/rest
endpoint to access the REST API and then passing the
formatType, opName, and opData as parameters. The HTTP
header for authentication is named X-Cloupia-Request-Key
and contains the value of the access key for the admin user for
the Cisco UCS Director instance that runs on the server with
IP address 10.10.10.66. The response from this instance of
Cisco UCS Director looks as shown in Example 9-13.
{
"opName" : "userAPIGetMyLoginProfile",
"serviceName" : "InfraMgr",
"serviceResult" : {
"email" : null,
"groupName" : null,
"role" : "Admin",
"userId" : "admin",
"groupId" : 0,
"firstName" : null,
"lastName" : null
},
"serviceError" : null}
curl -k -L -X GET \
-g 'http://10.10.10.66/app/api/rest?
Notice that the name of the workflow is passed in the API call
in the param0 parameter and also that VMware OVF
Deployment is encoded, using single quotation marks and
spaces between the words. Example 9-15 shows a snippet of
the response.
{
"serviceResult" : {
"details" : [
{
"inputFieldValidator" :
"VdcValidator",
"label" : "vDC",
"type" : "vDC",
CISCO INTERSIGHT
The Cisco Intersight platform provides intelligent cloud-
powered infrastructure management for Cisco UCS and Cisco
HyperFlex platforms. Cisco UCS and Cisco HyperFlex use
model-based management to provision servers and the
associated storage and networking automatically. Cisco
Intersight works with Cisco UCS Manager and Cisco
Integrated Management Controller (IMC) to bring the model-based management
of Cisco UCS servers to the cloud. Among other things:
It makes it possible to scale across data center and remote locations without
additional complexity.
ModTime: The time when the managed object was last modified. ModTime is
automatically updated whenever at least one property of the managed object
is modified.
https://intersight.com/path[?query]
query: An optional query after the question mark and typically used to limit
the output of the response to only specific parameters
https://intersight.com/api/v1/asset/DeviceRegistrations/48601f85ae74b80001aee589
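The query portion can limit or filter the output. For example, a hypothetical
request that returns only the first two results of a collection would append
the $top parameter:

https://intersight.com/api/v1/compute/PhysicalSummaries?$top=2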
API keys
Session cookies
#! /usr/bin/env python
from intersight.intersight_api_client import IntersightApiClient
from intersight.apis import equipment_device_summary_api

# Create the API client; the key file path and keyId below are placeholders
# generated in the Intersight web interface under Settings > API keys.
API_INSTANCE = IntersightApiClient(
    host="https://intersight.com/api/v1",
    private_key="./SecretKey.txt",
    api_key_id="<your keyId here>")

# Retrieve a summary of all devices known to this Intersight account.
D_HANDLE = equipment_device_summary_api.EquipmentDeviceSummaryApi(API_INSTANCE)
DEVICES = D_HANDLE.equipment_device_summaries_get().results

print('{0:35s}{1:40s}{2:13s}{3:14s}'.format("DN", "MODEL", "SERIAL", "OBJECT TYPE"))
print('-'*105)
for DEVICE in DEVICES:
    print('{0:35s}{1:40s}{2:13s}{3:14s}'.format(
        DEVICE.dn, DEVICE.model, DEVICE.serial, DEVICE.object_type))
The first two lines of Example 9-16 use the import keyword
to bring in and make available for later consumption the
IntersightApiClient Python class that will be used to create a
connection to the Cisco Intersight platform and the
equipment_device_summary_api file, which contains the API methods used to
retrieve device summaries. The IntersightApiClient instance is created with
the following parameters:
host: This parameter specifies the Cisco Intersight REST API base URI.
private_key: This parameter specifies the path to the file that contains the
keySecret of the Intersight account that will be used to sign in.
api_key_id: This parameter contains the keyId of the same Intersight account.
As mentioned previously, both the keyId and keySecret are generated in the
Intersight web interface, under Settings > API keys.
Webex Teams API: This section introduces Webex Teams and the rich API
set for managing and creating applications, integrations, and bots.
Cisco Finesse: This section provides an overview of Cisco Finesse and API
categories, and it provides sample code and introduces gadgets.
Cisco Finesse
Webex Devices 8, 9
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
FOUNDATION TOPICS
Unified Communications
People work together in different ways. And they use a lot of
collaboration tools: IP telephony for voice calling, web and
video conferencing, voicemail, mobility, desktop sharing,
instant messaging and presence, and more.
A rich user experience that includes the Cisco Webex Calling app, for mobile
and desktop users, integrated with the Cisco Webex Teams collaboration app
Support for an integrated user experience with Cisco Webex Meetings and
Webex Devices, including Cisco IP Phones 6800, 7800, and 8800 Series desk
phones and analog ATAs
A smooth migration to the cloud at your pace, through support of cloud and
mixed cloud and on-premises deployments
A single customizable interface that gives customer care providers quick and
Open Web 2.0 APIs that simplify the development and integration of value-
added applications and minimize the need for detailed desktop development
expertise.
Cisco Webex
Cisco Webex is a conferencing solution that allows people to
collaborate more effectively with each other anytime,
anywhere, and from any device. Webex online meetings are
truly engaging with high-definition video. Webex makes
online meetings easy and productive with features such as
document, application, and desktop sharing; integrated
audio/video; active speaker detection; recording; and machine
learning features.
Webex Share: The new Webex Share device allows easy, one-click wireless
screen sharing from the Webex Teams software client to any external display
with an HDMI port.
Cisco Headset 500 Series: These headsets deliver surprisingly vibrant sound
for open workspaces. Now users can stay focused in noisy environments with
rich sound, exceptional comfort, and proven reliability. The headsets offer a
lightweight form factor designed for workers who spend a lot of time
collaborating in contact centers and open workspaces. With the USB headset
adapter, the 500 Series delivers an enhanced experience, including automatic
software upgrades, in-call presence indicator, and audio customizations that
allow you to adjust how you hear the far end and how they hear you.
Webex Teams
Endpoints
Getting started with the Webex APIs is easy. These APIs allow
developers to build integrations and bots for Webex Teams.
APIs also allow administrators to perform administrative
tasks.
Administer the Webex Teams platform for an organization, add user accounts,
and so on
Get Webex Teams space history or be notified in real time when new messages
are posted by others
API Authentication
There are four ways to access the Webex Teams APIs:
Personal access tokens
Integrations
Bots
Guest issuers
https://0.0.0.0:8080/?code=NzAwMGUyZDUtYjcxMS00YWM4LTg3ZDYtNzdhMDhhNWRjZGY5NGFmMjA3ZjEtYzRk_PF84_1eb65fdf-9643-417f-9974-ad72cae0e10f&state=set_state_here
Access Scopes
The following sections examine some of the APIs you can use
to create rooms, add people, and send messages in a room.
Organizations API
Note
The host name https://api.ciscospark.com has now been
changed to https://webexapis.com. The old
https://api.ciscospark.com will continue to work.
Teams API
A team is a group of people with a set of rooms that is visible to all members of that team.
For example, say that you want to use the Teams API to create
a new team named DevNet Associate Certification Room. To
do so, you use the POST method and the API
https://webexapis.com/v1/teams.
You can use a Python request to make the REST call. Example
10-1 shows a Python script that sends a POST request to create
a new team. It initializes variables such as the base URL, the
payload, and the headers, and it calls the request.
import json
import requests

URL = "https://webexapis.com/v1/teams"
PAYLOAD = {
    "name": "DevNet Associate Certification Team"
}
HEADERS = {
    "Authorization": "Bearer MDA0Y2VlMzktNDc2Ni00NzI5LWFiNmYtZmNmYzM3OTkyNjMxNmI0NDVmNDktNGE1_PF84_consumer",
    "Content-Type": "application/json"
}
RESPONSE = requests.request("POST", URL, data=json.dumps(PAYLOAD), headers=HEADERS)
print(RESPONSE.text)
Rooms API
Rooms are virtual meeting places where people post messages
and collaborate to get work done. The Rooms API is used to
manage rooms—to create, delete, and rename them. Table 10-
5 lists the operations that can be performed with the Rooms
API.
You can use the Rooms API to create a room. When you do,
an authenticated user is automatically added as a member of
the room. To create a room, you can use the POST method and
the
API https://webexapis.com/v1/rooms.
import json
import requests
import pprint

URL = "https://webexapis.com/v1/rooms"
PAYLOAD = {
    "title": "DevAsc Team Room"
}
HEADERS = {
    "Authorization": "Bearer MDA0Y2VlMzktNDc2Ni00NzI5LWFiNmYtZmNmYzM3OTkyNjMxNmI0NDVmNDktNGE1_PF84_consumer",
    "Content-Type": "application/json"
}
RESPONSE = requests.request("POST", URL, data=json.dumps(PAYLOAD), headers=HEADERS)
pprint.pprint(json.loads(RESPONSE.text))
$ python3 CreateRoom.py
{'created': '2020-02-15T23:13:35.578Z',
 'creatorId': 'Y2lzY29zcGFyazovL3VzL1BFT1BMRS8wYWZmMmFhNC1mNzIyLTQ3MWUtYTIzMi0xOTEyNDgwYmDEADB',
 'id': 'Y2lzY29zcGFyazovL3VzL1JPT00vY2FhMzJiYTAtNTA0OC0xMWVhLWJiZWItYmY1MWQyNGRmMTU0',
 'isLocked': False,
 'lastActivity': '2020-02-15T23:13:35.578Z',
 'ownerId': 'consumer',
You can use the Rooms API to get a list of all the rooms that
have been created. To do so, you can use the GET method and
the API https://webexapis.com/v1/rooms.
$ curl -X GET \
  https://webexapis.com/v1/rooms \
  -H 'Authorization: Bearer DeadBeefMTAtN2UzZi00YjRiLWIzMGEtMThjMzliNWQwZGEyZTljNWQxZTktNTRl_PF84_1eb65fdf-9643-417f-9974-ad72cae0e10f'
Memberships API
A membership represents a person’s relationship to a room.
You can use the Memberships API to list members of any
room that you’re in or create memberships to invite someone
to a room. Memberships can also be updated to make someone
a moderator or deleted to remove someone from the room.
Table 10-6 lists the operations that can be performed with
respect to the Memberships API, such as listing memberships and creating a
membership. The following Python script creates a new membership:
import json
import requests
import pprint

URL = "https://webexapis.com/v1/memberships"
PAYLOAD = {
    "roomId" : "Y2lzY29zcGFyazovL3VzL1JPT00vY2FhMzJiYTAtNTA0OC0xMWVhLWJiZWItYmY1MWQyNGRDEADB",
    "personEmail": "newUser@devasc.com",
    "personDisplayName": "Cisco DevNet",
    "isModerator": "false"
}
HEADERS = {
    "Authorization": "Bearer MDA0Y2VlMzktNDc2Ni00NzI5LWFiNmYtZmNmYzM3OTkyNjMxNmI0NDVmNDktNGE1_PF84_consumer",
    "Content-Type": "application/json"
}
RESPONSE = requests.request("POST", URL, data=json.dumps(PAYLOAD), headers=HEADERS)
pprint.pprint(json.loads(RESPONSE.text))
Messages API
Messages are communications that occur in a room. In Webex
Teams, each message is displayed on its own line, along with a
timestamp and sender information. You can use the Messages
API to list, create, and delete messages. To send a message to a room, you can
use the POST method and the API https://webexapis.com/v1/messages.
import json
import requests
import pprint

URL = "https://webexapis.com/v1/messages"
PAYLOAD = {
    "roomId" : "Y2lzY29zcGFyazovL3VzL1JPT00vY2FhMzJiYTAtNTA0OC0xMWVhLWJiZWItYmY1MWQyNGRmMTU0",
    "text" : "This is a test message"
}
HEADERS = {
    "Authorization": "Bearer NDkzODZkZDUtZDExNC00ODM5LTk0YmYtZmY4NDI0ZTE5ZDA1MGI5YTY3OWUtZGYy_PF84_consumer",
    "Content-Type": "application/json",
}
RESPONSE = requests.request("POST", URL, data=json.dumps(PAYLOAD), headers=HEADERS)
pprint.pprint(json.loads(RESPONSE.text))
Bots
A bot (short for chatbot) is a piece of code or an application
that simulates a human conversation. Users communicate with
a bot via the chat interface or by voice, just as they would talk
to a real person. Bots help users automate tasks, bring external
content into the discussion, and gain efficiencies. Webex
Teams has a rich set of APIs that make it very easy and simple
for any developer to add a bot to any Teams room. In Webex,
bots are similar to regular Webex Teams users: they can participate in
one-to-one and group spaces, and users can message them directly or add them
to a group space. Several frameworks are available to help build bots; for
example:
Flint: Flint is an open-source bot framework with support for regex pattern
matching for messages and more.
Guest Issuer
import base64
import time
import math
import jwt
import requests

# Placeholder guest issuer ID and secret from the Webex developer portal.
GUEST_ISSUER_ID = 'GUEST_ISSUER_ID'
SECRET = base64.b64decode('GUEST_ISSUE_SECRET')
# Build and sign the guest JWT (claims per the guest issuer documentation).
PAYLOAD = {"sub": "guest-user-1", "name": "Guest User",
           "iss": GUEST_ISSUER_ID, "exp": str(math.floor(time.time()) + 3600)}
TOKEN = jwt.encode(PAYLOAD, SECRET, algorithm='HS256')
print(TOKEN.decode('utf-8'))
# Exchange the signed JWT for a Webex access token.
HEADERS = {
    'Authorization': 'Bearer ' + TOKEN.decode('utf-8')
}
RESPONSE = requests.post('https://webexapis.com/v1/jwt/login', headers=HEADERS)
print(RESPONSE.text)
Java (spark-java-sdk): A Java library for consuming the RESTful APIs (by
Cisco Webex)
Python (webexteamssdk): An SDK that works with the REST APIs in native
Python (by cmlccie)
SDK for Android: Integrates messaging and calling into Android apps (by
Cisco Webex)
SDK for iOS: Integrates messaging and calling into iOS apps (by Cisco
Webex)
SDK for Windows: Integrates messaging and calling into Windows apps (by
Cisco Webex)
Widgets: Provides components that mimic the web user experience (by Cisco
Webex)
CISCO FINESSE
NOT_READY: The agent is signed in but not ready to take calls. The agent could
be on a break, the shift might be over, or the agent might be in between calls.
RESERVED: This is a transient state, as the agent gets chosen but has not
answered the call.
User
Dialog
Queue
Team
ClientLog
Single Sign-On
TeamMessage
http://<FQDN>:<port>/finesse/api/<object>/<objectID>
GET: Retrieves a single object or list of objects (for example, a single user or
list of users).
PUT: Replaces a value in an object (for example, to change the state of a user
from NOT_READY to READY).
API Authentication
All Finesse APIs use HTTP BASIC authentication, which
requires the credentials to be sent in the authorization header.
"Basic ZGV2YXNjOnN0cm9uZ3Bhc3N3b3Jk"
"Bearer <authtoken>"
<User>
    <state>LOGIN</state>
    <extension>5250001</extension>
</User>

To sign out, the same payload is sent with the LOGOUT state:

<User>
    <state>LOGOUT</state>
</User>
A full list of all User state change APIs with details can be
found at
https://developer.cisco.com/docs/finesse/#!userchange-agent-
state/userchange-agent-state.
URL = "http://hq-
uccx.abc.inc:8082/finesse/api/User/Agent001"
PAYLOAD = (
"<User>" +
" <state>LOGIN</state>" +
" <extension>6001</extension>" +
"</User>"
)
HEADERS = {
'authorization': "Basic
QWdlbnQwMDE6Y2lzY29wc2R0",
'content-type': "application/xml",
}
RESPONSE = requests.request("PUT", URL,
data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
print(RESPONSE.status_code)
As another example, the User State Change API lets the users
change their state. State changes could be any one as shown in
Table 10-9. Say that you use the following information with
this API:
URL = "http://hq-
uccx.abc.inc:8082/finesse/api/User/Agent001"
PAYLOAD = (
"<User>" +
" <state>READY</state>" +
"</User>"
)
HEADERS = {
'authorization': "Basic
QWdlbnQwMDE6Y2lzY29wc2R0",
'content-type': "application/xml",
}
RESPONSE = requests.request("PUT", URL,
data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
print(RESPONSE.status_code)
import requests
url = "https://hq-
uccx.abc.inc:8445/finesse/api/Team/2"
headers = {
'authorization': "Basic
QWdlbnQwMDE6Y2lzY29wc2R0",
'cache-control': "no-cache",
}
response = requests.request("GET", url,
headers=headers)
print(response.text)
Dialog APIs
<Dialog>
    <requestedAction>MAKE_CALL</requestedAction>
    <fromAddress>6001</fromAddress>
    <toAddress>6002</toAddress>
</Dialog>

<Dialog>
    <requestedAction>START_RECORDING</requestedAction>
</Dialog>
URL = "http://hq-
uccx.abc.inc:8082/finesse/api/User/Agent001/Dia
logs"
PAYLOAD = (
"<Dialog>" +
"
<requestedAction>MAKE_CALL</requestedAction>" +
" <fromAddress>6001</fromAddress>" +
" <toAddress>6002</toAddress>" +
"</Dialog>"
)
HEADERS = {
'authorization': "Basic
QWdlbnQwMDE6Y2lzY29wc2R0",
'content-type': "application/xml",
'cache-control': "no-cache",
Finesse Gadgets
As indicated earlier in this chapter, the Finesse desktop
application is an OpenSocial gadget container. This means that
an agent or anyone else can customize what is on the desktop.
Gadgets are built using HTML, CSS, and JavaScript. A gadget
is a simple but powerful tool that allows an agent to quickly
send one or more web pages (carefully curated by the client
teams themselves) to their caller over Instant Messenger or via
email. This feature automates the four or five manual steps
usually associated with this task.
Web applications
Other gadgets
XML API: If more advanced integration is needed than is possible with the
URL API, Cisco strongly recommends using the Webex Meetings XML API.
The XML API is a comprehensive set of services that supports most aspects of
Webex Meetings services, including detailed user management,
comprehensive scheduling features, and attendee management and reporting.
The Webex XML API uses a service-oriented architecture (SOA) to provide
comprehensive services to external applications wishing to interact with one or
more Webex services.
Teleconference Service Provider (TSP) API: The TSP API provides integration
between Webex Meetings and partner teleconferencing audio networks.
Authentication
There are three methods by which a user can interact with the
Webex Meetings APIs:
User accounts: A system account can perform functions only for its own
accounts. This method requires the username and password to be passed via
XML.
<body>
    <bodyContent xsi:type="java:com.Webex.service.binding.meeting.CreateMeeting">
        <metaData>
            <confName>Branding Meeting</confName>
        </metaData>
        <schedule>
            <startDate/>
        </schedule>
    </bodyContent>
curl -X POST \
  https://api.Webex.com/WBXService/XMLService \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/xml' \
  -d '<?xml version="1.0" encoding="UTF-8"?>
<serv:message xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <header>
        <securityContext>
            <WebexID>devasc</WebexID>
            <password>Kc5Ac4Ml</password>
            <siteName>apidemoeu</siteName>
        </securityContext>
    </header>
    <body>
        <bodyContent xsi:type="java:com.Webex.service.binding.meeting.CreateMeeting">
            <metaData>
                <confName>Branding Meeting</confName>
            </metaData>
            <schedule>
                <startDate/>
            </schedule>
        </bodyContent>
<bodyContent xsi:type="java:com.Webex.service.binding.meeting.LstsummaryMeeting">
    <order>
        <orderBy>STARTTIME</orderBy>
    </order>
</bodyContent>
</body>
curl -X POST \
  https://api.Webex.com/WBXService/XMLService \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/xml' \
  -d '<serv:message xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <header>
        <securityContext>
            <WebexID>devasc</WebexID>
            <password>Kc5Ac4Ml</password>
            <siteName>apidemoeu</siteName>
        </securityContext>
    </header>
    <body>
        <bodyContent xsi:type="java:com.Webex.service.binding.meeting.LstsummaryMeeting">
            <order>
                <orderBy>STARTTIME</orderBy>
            </order>
        </bodyContent>
    </body>
</serv:message>'
xsi:type="java:com.Web
ex.service.binding.meeting.
SetMeeting">
<meetingkey>62557960
4</meetingkey>
<participants>
<attendees>
<attendee>
<person>
<email>student@d
evasc.com</email>
</person>
</attendees>
</bodyContent>
curl -X POST \
  https://api.Webex.com/WBXService/XMLService \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/xml' \
  -d '<serv:message xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <header>
        <securityContext>
            <WebexID>devasc</WebexID>
            <password>Kc5Ac4Ml</password>
            <siteName>apidemoeu</siteName>
        </securityContext>
    </header>
    <body>
        <bodyContent xsi:type="java:com.Webex.service.binding.meeting.SetMeeting">
            <meetingkey>625579604</meetingkey>
            <participants>
                <attendees>
                    <attendee>
Deleting a Meeting
The DelMeeting API allows hosts to delete a meeting that is
not currently in progress. The API continues to use the POST
method, but the XML data contains the operation of deleting
the meeting. Table 10-17 shows the XML data that needs to be
sent in order to delete a meeting.
xsi:type="java:com.Webex.servi
ce.binding.meeting.DelMeeting">
<meetingKey>625579604</mee
tingKey>
</bodyContent>
</body>
curl -X POST \
  https://api.Webex.com/WBXService/XMLService \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/xml' \
  -d '<serv:message xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <header>
        <securityContext>
            <WebexID>devasc</WebexID>
            <password>Kc5Ac4Ml</password>
            <siteName>apidemoeu</siteName>
        </securityContext>
    </header>
    <body>
        <bodyContent xsi:type="java:com.Webex.service.binding.meeting.DelMeeting">
            <meetingKey>625579604</meetingKey>
        </bodyContent>
    </body>
</serv:message>'
WEBEX DEVICES
Webex Devices enables users to communicate and work with
each other in real time. The collaboration devices can be
placed in meeting rooms or at desks. In addition to high-quality
audio and video support, these devices are fully
programmable. Through embedded APIs (often referenced as
xAPI), you can extend and leverage Webex Room and Desk
device capabilities in several ways:
Deploy code onto the devices without needing to deploy external control
systems
Room Devices: Room 55, Room 70, Room 70 Dual, Board 55/55S, Board
70/70S, Board 85S, Cisco TelePresence MX200 G2, MX300 G2, MX700,
MX800, MX800 Dual, SX10, SX20, and SX80
Webex Desk Device: Cisco Webex DX80 and DX70 Collaboration Endpoint
Software version 9
xAPI
Commands
Configurations
Status
Events
xAPI Authentication
Access to xAPI requires the user to authenticate using HTTP
basic access authentication as a user with the ADMIN role.
Unauthenticated requests prompt a 401 HTTP response
containing a basic access authentication challenge.
Creating a Session
You create a session by sending an xAPI session request to the
endpoint. Example 10-18 shows a simple Python POST
request, to which the server responds with the session cookie.
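The sess.py script is not reproduced in full here; a minimal sketch, assuming
the xAPI session endpoint /xmlapi/session/begin and placeholder admin
credentials, could look like this:

import requests

URL = "http://10.10.20.159/xmlapi/session/begin"
# Authenticate with HTTP basic auth; the Set-Cookie response header
# carries the SessionId used in subsequent requests.
RESPONSE = requests.post(URL, auth=("admin", "cisco123"))
print(RESPONSE.headers.get("Set-Cookie"))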
$ python3 sess.py
SessionId=031033bebe67130d4d94747b6b9d4e4f6bd29
65162ed542f22d32d75a9f238f9; Path=/;
HttpOnly
import requests

URL = "http://10.10.20.159/status.xml"
HEADERS = {
    'Cookie': "SessionId=c6ca2fc23d3f211e0517d4c603fbe4205c77d13dd6913c7bc12eef4085b7637b"
}
# Retrieve the full status document of the device.
RESPONSE = requests.request("GET", URL, headers=HEADERS)
print(RESPONSE.text)
import requests

URL = "http://10.10.20.159/put.xml"
# The camera ID, pan, and tilt values below are illustrative placeholders.
PAYLOAD = (
    '<Command>' +
    '  <Camera>' +
    '    <PositionSet command="True">' +
    '      <CameraId>1</CameraId>' +
    '      <Pan>150</Pan>' +
    '      <Tilt>150</Tilt>' +
    '    </PositionSet>' +
    '  </Camera>' +
    '</Command>'
)
HEADERS = {
    'Content-Type': "application/xml",
    'Cookie': "SessionId=c6ca2fc23d3f211e0517d4c603fbe4205c77d13dd6913c7bc12eef4085b7637b"
}
RESPONSE = requests.request("POST", URL, data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
import requests

URL = "http://10.10.20.159/put.xml"
PAYLOAD = (
    '<Command>' +
    '  <HttpFeedback>' +
    '    <Register command="True">' +
    '      <FeedbackSlot>1</FeedbackSlot>' +
    '      <ServerUrl>http://127.0.0.1/devasc-webhook</ServerUrl>' +
    '      <Format>JSON</Format>' +
    '      <Expression item="1">/Configuration</Expression>' +
    '      <Expression item="2">/Event/CallDisconnect</Expression>' +
    '      <Expression item="3">/Status/Call</Expression>' +
    '    </Register>' +
    '  </HttpFeedback>' +
    '</Command>'
)
HEADERS = {
    'Content-Type': "application/xml",
    'Cookie': "SessionId=c6ca2fc23d3f211e0517d4c603fbe4205c77d13dd6913c7bc12eef4085b7637b"
}
RESPONSE = requests.request("POST", URL, data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
import requests

URL = "http://10.10.20.159/put.xml"
PAYLOAD = (
    '<Configuration>' +
    '  <RoomAnalytics>' +
    '    <PeoplePresenceDetector>On</PeoplePresenceDetector>' +
    '  </RoomAnalytics>' +
    '</Configuration>'
)
HEADERS = {
    'Content-Type': "application/xml",
    'Cookie': "SessionId=c6ca2fc23d3f211e0517d4c603fbe4205c77d13dd6913c7bc12eef4085b7637b"
}
RESPONSE = requests.request("POST", URL, data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
Provisioning interfaces
Serviceability interfaces
Administrative XML
Unified CM groups
Device pools
Device profiles
Dial plans
Directory numbers
Locations
MGCP devices
Phones
Process nodes
Regions
Route filters
Route groups
Route lists
Route partitions
Service parameters
Translation patterns
Users
Voicemail ports
When you download the AXL Toolkit and unzip the file, the
schema folder contains AXL API schema files (.xsd) for the
supported AXL versions.
These XML schema files contain full details about the AXL
API format, including the request names, fields/elements, data
types used, and field validation rules. One advantage of .xsd
schema files is that they can be used to
automatically/programmatically validate a particular XML
document against the schema to ensure that it is well formatted
and valid according to the schema specs.
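A minimal sketch of such a validation with the lxml library (the file names
used here are placeholders) follows:

from lxml import etree

# Load the AXL schema and validate a request document against it.
SCHEMA = etree.XMLSchema(etree.parse('schema/12.0/AXLSoap.xsd'))
DOC = etree.parse('getPhone_request.xml')
print(SCHEMA.validate(DOC))   # True if the document conforms to the schema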
#! /usr/bin/env python
from requests import Session
from requests.auth import HTTPBasicAuth
from urllib3.exceptions import InsecureRequestWarning
from zeep import Client
from zeep.cache import SqliteCache
from zeep.transports import Transport
import urllib3

urllib3.disable_warnings(InsecureRequestWarning)

USERNAME = 'administrator'
PASSWORD = 'ciscopsdt'
IP_ADDRESS = "10.10.20.1"
WSDL = 'schema//12.0//AXLAPI.wsdl'
BINDING_NAME = "{http://www.cisco.com/AXLAPIService/}AXLAPIBinding"
ADDRESS = "https://{}:8443/axl/".format(IP_ADDRESS)

def get_phone_by_name(axl, name):
    """ Return the phone configuration for the given device name """
    return axl.getPhone(name=name)

def update_phone_by_name(axl, name, description):
    """ Update the description of the phone with the given device name """
    return axl.updatePhone(name=name, description=description)

def main():
    """ Main """
    session = Session()
    session.verify = False
    session.auth = HTTPBasicAuth(USERNAME, PASSWORD)
    transport = Transport(cache=SqliteCache(), session=session, timeout=60)
    client = Client(wsdl=WSDL, transport=transport)
    axl = client.create_service(BINDING_NAME, ADDRESS)
    update_phone_by_name(axl, "SEP001122334455", "DevAsc: adding new Desc")
    print(get_phone_by_name(axl, "SEP001122334455"))

if __name__ == '__main__':
    main()
from ciscoaxl import axl

CUCM = '10.10.20.1'
CUCM_USER = "administrator"
CUCM_PASSWORD = "ciscopsdt"
CUCM_VERSION = '12.0'

ucm = axl(username=CUCM_USER, password=CUCM_PASSWORD,
          cucm=CUCM, cucm_version=CUCM_VERSION)
print(ucm)

for phone in ucm.get_phones():
    print(phone.name)

for user in ucm.get_users():
    print(user.firstName)
Unified Communications
Finesse
JSON Web Token (JWT)
voice over IP (VoIP)
Extensible Messaging and Presence Protocol (XMPP)
Bidirectional-streams Over Synchronous HTTP (BOSH)
computer telephony integration (CTI)
Cisco Umbrella: This section introduces the Cisco Umbrella product and the
relevant APIs.
Cisco Threat Grid: This section provides an overview of Cisco Threat Grid
and Threat Grid APIs.
Mitigation
Cisco Umbrella
Cisco Umbrella 1, 2
Cisco Firepower 3, 4
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
FOUNDATION TOPICS
Phishing: This type of exploit involves using emails or web pages to procure
sensitive information, such as usernames and passwords.
Brute-force attack: Brute-force methods, which involve using trial and error
to decode data, can be used to crack passwords and encryption keys. Other
targets include API keys, SSH logins, and Wi-Fi passwords.
CISCO UMBRELLA
Understanding Umbrella
Cisco Umbrella processes DNS requests received from users
or devices on the networks. It not only works on HTTP or
HTTPS but supports other protocols as well.
Investigate API: The RESTful API allows the querying of the Umbrella DNS
database and shows security events and correlations related to the domain
queried.
Authentication
All Cisco Umbrella APIs use HTTP-basic authentication. The
key and secret values need to be Base64 encoded and sent as
part of a standard HTTP basic Authorization header. API
requests are sent over HTTPS. APIs require the credentials to
be sent in the Authorization header. The credentials are the
username and password, separated by a colon (:), within a
Base64-encoded string. For example, the Authorization header
would contain the following string:
"Basic ZGV2YXNjOnN0cm9uZ3Bhc3N3b3Jk"
The Management API can use the ISP and MSSP endpoints
passing the Base64-encoded authorization in the header (see
Example 11-2).
ISP and MSSP endpoints: The API returns the service provider IDs and
customer IDs. These endpoints are for Internet service providers (ISPs),
managed service providers (MSPs) using the Master Service license, managed
security service providers (MSSPs), and partner console users. To perform
these queries, you must have your service provider ID (SPId) from your
console’s URL.
Networks: The API returns network records and deletes networks. Note that
parent organizations do not have networks. To create and manage networks on
behalf of child organizations, use the organizations/customerID/networks
endpoints.
Internal networks: The API creates, updates, and deletes internal networks
and returns internal network records.
Internal domains: The API creates, updates, and deletes internal domains and
returns internal domain records.
Virtual appliances: The API returns virtual appliance (VA) records and
updates or deletes VAs. Note that you cannot create a virtual appliance through
the API. A VA must be created within your hypervisor and must be registered
as an identity within Umbrella before the API can manage it.
Umbrella sites: The API creates, updates, and deletes sites and returns site
records.
Users: The API creates and deletes users and returns user records.
Destination lists: The API creates, reads, updates, and deletes destination
lists.
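As a sketch of calling one of these Management API endpoints (the service
provider ID and the key/secret pair below are placeholders), a request for an
MSSP's customers, as referenced in Example 11-2, could look as follows:

import base64
import requests

# Base64-encode the "key:secret" pair for the basic Authorization header.
KEY_SECRET = base64.b64encode('managementKey:managementSecret'.encode()).decode()
URL = 'https://management.api.umbrella.com/v1/serviceproviders/1234567/customers'
HEADERS = {'Authorization': 'Basic ' + KEY_SECRET}
RESPONSE = requests.get(URL, headers=HEADERS)
print(RESPONSE.text)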
import json
import requests
url = "https://s-
platform.api.opendns.com/1.0/events"
querystring = {"customerKey":"XXXXXXX-YYYY-
ZZZZ-YYYY-XXXXXXXXXXXX"}
payload = [
{
"alertTime": "2020-01-01T09:33:21.0Z",
"deviceId": "deadbeaf-e692-4724-ba36-
c28132c761de",
"deviceVersion": "13.7a",
"dstDomain": "looksfake.com",
"dstUrl":
"http://looksfake.com/badurl",
"eventTime": "2020-01-01T09:33:21.0Z",
"protocolVersion": "1.0a",
"providerName": "Security Platform"
}
]
headers = {
'Content-Type': "text/plain",
'Accept': "*/*",
'Cache-Control': "no-cache",
'Host': "s-platform.api.opendns.com",
'Accept-Encoding': "gzip, deflate",
'Connection': "keep-alive",
'cache-control': "no-cache"
}
response = requests.request(
    "POST",
    url,
    data=json.dumps(payload),
    headers=headers,
    params=querystring
)
print(response.text)
Once domains are placed in the list, a customer can get the list
of the domains by using the GET method for https://s-
platform.api.opendns.com/1.0/domains endpoint. Example 11-
4 shows an example of a simple Python requests method.
import requests

url = "https://s-platform.api.opendns.com/1.0/domains"
querystring = {"customerKey":"XXXXXXX-YYYY-ZZZZ-YYYY-XXXXXXXXXXXX"}

# Retrieve the list of blocked domains with the GET method.
response = requests.request("GET", url, params=querystring)
print(response.text)
{
  "meta":{
    "page":1,
    "limit":200,
    "prev":false,
    "next":"https://s-platform.api.opendns.com/1.0/domains?customerKey=XXXXXXX-YYYY-ZZZZ-YYYY-XXXXXXXXXXXX&page=2&limit=200"
  },
  "data":[
    {
      "id":1,
      "name":"baddomain1.com"
    },
    {
      "id":2,
      "name":"looksfake.com"
    },
    {
      "id":3,
      "name":"malware.dom"
    }
  ]
}
To remove a domain from the list, the same endpoint is called with the
DELETE method:

import requests

url = "https://s-platform.api.opendns.com/1.0/domains/looksfake.com"
querystring = {"customerKey":"XXXXXXX-YYYY-ZZZZ-YYYY-XXXXXXXXXXXX"}

response = requests.request("DELETE", url, params=querystring)
print(response.text)
The following are some of the tasks that can be done via the
Umbrella Investigate REST API:
Find a historical record for this domain or IP address in the DNS database to
see what has changed.
Query large numbers of domains quickly to find out whether they’re scored as
malicious and require further investigation.
Scoring: Several scores help rate the potential risk of the domain/IP address.
For example:
WHOIS record data: This category includes the email addresses used to
register the domain, the associated name servers, and historical information.
Cooccurrences: This category depicts other domains that were queried right
before or after a given domain and are likely related. The Investigate API is
often used to uncover other domains that may be related to the same attack but
are hosted on completely separate networks.
import requests

url = "https://investigate.api.umbrella.com/domains/categorization/cisco.com"
querystring = {"showLabels":""}
headers = {
'authorization': "Bearer deadbeef-24d7-
40e1-a5ce-3b064606166f",
'cache-control': "no-cache",
}
response = requests.request("GET", url,
headers=headers, params=querystring)
print(response.text)
{
"cisco.com": {
"status": 1,
"security_categories": [],
"content_categories": [
"Software/Technology",
"Business Services"
]
}
}
Table 11-4 lists other Investigate API URLs for the cisco.com
domain.
CISCO FIREPOWER
SSL and SSH inspection: NGFWs can inspect SSL- and SSH-encrypted
traffic. An NGFW decrypts traffic, makes sure the applications are allowed,
checks other policies, and then re-encrypts the traffic. This provides additional
protection against malicious apps and activities that try to hide by using
encryption to avoid the firewall.
ISE integration: NGFWs have the support of Cisco ISE. This integration
allows authorized users and devices to use specific applications.
Authentication
The Firepower APIs are already part of the FMC software by
default, and the only thing that is required is to enable them
via the UI. The Firepower APIs use token-based authentication
for API users. Consider the simple example shown in Example
11-9. It uses the Python requests command to make the REST
call, the POST method, and the API
https://fmcrestapisandbox.cisco.com/api/fmc_platform/v1/auth/generatetoken.
import requests

url = "https://fmcrestapisandbox.cisco.com/api/fmc_platform/v1/auth/generatetoken"

headers = {
    'Content-Type': "application/xml",
    'Authorization': "Basic YXNodXRvc2g6V0JVdkE5TXk=",
}

response = requests.request("POST", url, headers=headers)
print(response.headers)
System Information
Example 11-10 Python Code to Get the Server Version via the
Firepower Management Center API
Now that you know the basics of accessing the API, you can
explore all the APIs that Firepower Management Center has to
offer.
Object Types
Table 11-5 lists the objects you can create in the Firepower
system and indicates which object types can be grouped.
Network: Yes
Port: Yes
Security zone: No
Application filter: No
URL: Yes
Geolocation: No
Variable set: No
Sinkhole: No
File list: No
Community list: No
Creating a Network
import json
import requests

# server, uuid, and headers (including the X-auth-access-token) are globals
# set up by the generateSessionToken() function shown next.
network_lab = {
    "name": "labnetwork-1",
    "value": "10.10.10.0/24",
    "overridable": False,
    "description": "Lab Network Object",
    "type": "Network"
}

netpath = "/api/fmc_config/v1/domain/" + uuid + "/object/networks"
url = server + netpath

print("-------------------")
print(headers)

try:
    response = requests.post(url, data=json.dumps(network_lab),
                             headers=headers, verify=False)
    status_code = response.status_code
    resp = response.text
    json_response = json.loads(resp)
    print("status code is: " + str(status_code))
    if status_code == 201 or status_code == 202:
        print("Successfully network created")
    else:
        response.raise_for_status()
except requests.exceptions.HTTPError as err:
    print("Error in connection --> " + str(err))
def generateSessionToken():
    """ Generate a new session token using the username and password """
    global uuid
    global headers

    tokenurl = "/api/fmc_platform/v1/auth/generatetoken"
    url = server + tokenurl
    response = requests.request(
        "POST",
        url,
        headers=headers,
        auth=requests.auth.HTTPBasicAuth(username, password),
        verify=False
    )
    print(response.headers)

    # The access token and the domain details come back as response headers
    # (per the FMC REST API documentation), not in the response body.
    headers['X-auth-access-token'] = response.headers.get('X-auth-access-token')
    domains = json.loads(response.headers.get('DOMAINS'))
    uuid = domains[0]['uuid']

    print(domains)
    print(uuid)
    print(headers)
Ingest events: The API stores events in third-party tools, archives extended
event histories, and correlates against other logs.
Search: The API can find where a file has been, determine if a file has been
executed, and capture command-line arguments.
Basic management: The API allows you to create groups, move desktops or
computers, and manage file lists.
Using the API client ID and key, you can now make the API
calls as follows:
https://<clientID>:<clientKEY>@<api_endpoint>
Also, you can use basic HTTP authentication encoding for the
client ID and key and the Authorization header. For the client
ID and key generated, the credential is Base64 encoded as
"ZGVhZGJlZWYxMjM0NDhjY2MwMGQ6WFhYWFhYWFgtWVlZWS1aWlpaLTAwMDAtZTM4NGVmMmR4eHh4",
and the header looks as follows:

Authorization: Basic ZGVhZGJlZWYxMjM0NDhjY2MwMGQ6WFhYWFhYWFgtWVlZWS1aWlpaLTAwMDAtZTM4NGVmMmR4eHh4
This API requires the basic authentication headers and uses the
GET method. Example 11-13 shows a Python requests
command that uses this API.
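A minimal sketch of that request, reusing the placeholder client ID and API
key decoded above (the query parameters match the self link visible in the
response), might look like this:

import requests

CLIENT_ID = 'deadbeef123448ccc00d'
API_KEY = 'XXXXXXXX-YYYY-ZZZZ-0000-e384ef2dxxxx'
URL = 'https://api.amp.cisco.com/v1/vulnerabilities'
# Basic auth uses the client ID as username and the API key as password.
RESPONSE = requests.get(URL, params={'offset': 0, 'limit': 1},
                        auth=(CLIENT_ID, API_KEY))
print(RESPONSE.text)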
{
"version": "v1.2.0",
"metadata": {
"links": {
"self":
"https://api.amp.cisco.com/v1/vulnerabilities?
offset=0&limit=1"
},
"results": {
"total": 1,
"current_item_count": 1,
"index": 0,
"items_per_page": 1
}
},
"data": [
{
"application": "Adobe Flash Player",
"version": "11.5.502.146",
"file": {
"filename": "FlashPlayerApp.exe",
"identity": {
"sha256":
"c1219f0799e60ff48a9705b63c14168684aed911610fec
68548ea08f
605cc42b"
Table 11-6 shows all other APIs that AMP for Endpoints has
to offer.
It is a simple policy management engine that is centralized and can grant user
access.
Compliance based: Policies can ensure that endpoints have all software
patches before they are granted full access.
Endpoints
Identity groups
Portals
Profiler policies
Network devices
Security groups
External RESTful Services Admin: For full access to all ERS methods
(GET, POST, DELETE, PUT).
"Basic ZGV2YXNjOnN0cm9uZ3Bhc3N3b3Jk"
import base64
encoded =
base64.b64encode('devasc:strongpassword'.encode
('UTF-8')).decode('ASCII')
print(encoded)
Method: POST
URL: https://ise.devnetsandbox.com:9060/ers/config/endpoint
Data - {
    "ERSEndPoint" : {
        "name" : "DevNet_Endpoint",
        "description" : "DevNet Endpoint-1",
        "mac" : "FF:EE:DD:03:04:05",
        "staticGroupAssignment" : true
    }
}
This API uses the group ID from the header and requires basic
authentication headers. Example 11-17 shows a Python requests script that
creates the endpoint; a successful request returns a Location header that
points to the newly created object:
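A minimal sketch of that request, reusing the Base64 credentials generated
earlier in this section, could look like this:

import json
import requests

URL = "https://ise.devnetsandbox.com:9060/ers/config/endpoint"
HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Authorization': 'Basic ZGV2YXNjOnN0cm9uZ3Bhc3N3b3Jk'
}
PAYLOAD = {
    "ERSEndPoint": {
        "name": "DevNet_Endpoint",
        "description": "DevNet Endpoint-1",
        "mac": "FF:EE:DD:03:04:05",
        "staticGroupAssignment": True
    }
}
RESPONSE = requests.post(URL, data=json.dumps(PAYLOAD),
                         headers=HEADERS, verify=False)
print(RESPONSE.headers.get('Location'))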
Location: https://ise.devnetsandbox.com:9060/ers/config/endpoint/deadbeef-1111-2222-3333-444444444444
https://panacea.threatgrid.com/api/<ver>/<api-endpoint>?q=<query>&api_key=apikey
This API key is used in every API call that is made to Threat
Grid.
Who Am I
To see if the API key is working, you can use the GET method
and the API
https://panacea.threatgrid.com/api/v3/session/whoami. You
need to pass the API key as a query parameter, as shown in
Example 11-18.
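A minimal requests sketch for this call, using the api_key value that appears
in the response below, would be:

import requests

URL = "https://panacea.threatgrid.com/api/v3/session/whoami"
QUERYSTRING = {"api_key": "deadbeefelcpgib9ec0909"}
RESPONSE = requests.get(URL, params=QUERYSTRING)
print(RESPONSE.text)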
{
"api_version": 3,
"id": 1234567,
"data": {
"role": "user",
"properties": {},
"integration_id": "z1ci",
"email": "devasc@student.com",
"organization_id": 666777,
"name": "devasc",
"login": "devasc",
"title": "DevNet Associate",
"api_key": " deadbeefelcpgib9ec0909",
"device": false
}
}
{
"api_version": 2,
"id": 4482656,
"data": {
"index": 0,
"total": 1,
"took": 3956,
"timed_out": false,
Feeds
Say that you want to retrieve all the curated feeds via API. The
curated feed types are shown in Table 11-8.
import requests

# Assumed curated feed endpoint; substitute the feed type and your API key.
url = "https://panacea.threatgrid.com/api/v3/feeds/autorun-registry.json"
querystring = {"api_key": "deadbeefelcpgib9ec0909"}
headers = {
    'cache-control': "no-cache",
    'Content-type': 'application/json',
    'Accept': 'application/json'
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
threat
vulnerability
Model-Driven Programmability
This chapter covers the following topics:
NETCONF: This section introduces NETCONF—what it is, why it has been
developed, and how to use it.
YANG: This section covers YANG, YANG data models, and how these data
models apply to networking.
NETCONF 1–3
YANG 4–6
Model-Driven Telemetry 10
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
FOUNDATION TOPICS
Traditionally, network devices were managed almost
exclusively through command-line interfaces (CLIs). For
network monitoring, Simple Network Management Protocol
(SNMP) is still widely used. While CLIs are extremely
powerful, they are also highly proprietary—different from
vendor to vendor—and human intervention is required to
understand and interpret their output. Also, they do not scale
for large network environments. SNMP also has limitations, as
discussed later in this chapter. A common standard way of
managing and monitoring network devices was needed. Data
modeling is replacing manual configuration as it provides a
standards-based, programmatic method of writing
configuration data and gathering statistics and operational data
from devices. YANG data models have been developed
specifically to address the need for standardization and
commonality in network management. Model-driven
programmability enables you to automate the configuration
and control of network devices.
NETCONF
In an effort to better address the concerns of network
operators, the Internet Engineering Task Force (IETF) and the
Internet Architecture Board (IAB) set up a workshop on
network management in 2002. Several network operators were
invited, and workshop participants had a frank discussion
about the status of the network management industry as a
whole. The conclusions and results of that workshop were
captured in RFC 3535. Up to that point the industry had been
extensively using Simple Network Management Protocol
(SNMP) for network management. SNMP is discussed in more
detail in Chapter 18, “IP Services,” but for the purposes of this
chapter, you just need to know that SNMP is a network
management protocol that was initially developed in the late 1980s.
<commit>: Copy the candidate data store to the running data store
merge: When this attribute is specified, the configuration data is merged with
the configuration at the corresponding level in the configuration data store.
This is the default behavior.
delete: When this attribute is specified, the configuration data is deleted from
the configuration data store.
running: This data store holds the complete configuration currently active on
the network device. Only one running data store can exist on a device, and it is
always present. NETCONF protocol operations refer to this data store with the
<running> XML element.
candidate: This data store acts as a workplace for creating and manipulating
configuration data. A <commit> operation causes the configuration data
contained in it to be applied to the running data store.
startup: This data store contains the configuration data that is loaded when the
device boots up and comes online. An explicit <copy-config> operation from
the <running> data store into the <startup> data store is needed to update the
startup configuration with the contents of the running configuration.
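For instance, a minimal <copy-config> RPC that saves the contents of the
running data store to the startup data store looks like this:

<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
    <copy-config>
        <target>
            <startup/>
        </target>
        <source>
            <running/>
        </source>
    </copy-config>
</rpc>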
YANG
As mentioned in RFC 6020, YANG is "a data modeling language used to model
configuration and state data manipulated by the Network Configuration Protocol
(NETCONF), NETCONF remote procedure calls, and NETCONF notifications." YANG
modules are composed of different kinds of statements:
Module-header statements describe the module and give information about it.
Definition statements are the body of the module, where the data model is
defined.
empty: No value
int8/16/32/64: Integer
typedef percent {
    type uint16 {
        range "0 .. 100";
    }
    description "Percentage";
}
This new type of data can then be used when building YANG data models. The
ietf-yang-types and ietf-inet-types modules also define a number of commonly
used data types, including the following:
ipv4-address
ipv6-address
ip-prefix
domain-name
uri
mac-address
port-number
ip-version
phys-address
timestamp
date-and-time
flow-label
counter32/64
gauge32/64
import "ietf-yang-types" {
prefix yang;
}
type yang:ipv4-address;
Leaf nodes
Leaf-list nodes
Container nodes
List nodes
leaf intf-name {
    type string;
    description "The name of the interface";
}

<intf-name>GigabitEthernet0/0</intf-name>
leaf-list trunk-interfaces {
type string;
description "List of trunk interfaces";
}
<trunk-interfaces>TenGigabitEthernet0/1</trunk-interfaces>
<trunk-interfaces>TenGigabitEthernet0/2</trunk-interfaces>
<trunk-interfaces>TenGigabitEthernet0/3</trunk-interfaces>
<trunk-interfaces>TenGigabitEthernet0/4</trunk-interfaces>
container statistics {
description "A collection of interface
statistics.";
leaf in-octets {
type yang:counter64;
description "The total number of
octets received on the interface.";
}
leaf in-errors {
type yang:counter32;
description "Number of inbound
packets that contained errors.";
}
leaf out-octets {
type yang:counter64;
description "The total number of
octets sent out on the interface.";
}
leaf out-errors {
type yang:counter32;
description "Number of outbound
packets that contained errors.";
}
}
<statistics>
<in-errors>2578</in-errors>
<out-octets>678845633</out-octets>
<out-errors>0</out-errors>
</statistics>
The last type of node that is used for data modeling in YANG
is the list. A list defines a sequence of list entries. Each entry is
a record instance and is uniquely identified by the values of its
key leaves. A list can be thought of as a table organized around
the key leaf with all the leaves as rows in that table. For
example, the user list in Example 12-2 can be defined as
having three leaves called name, uid, and full-name, with the
name leaf being defined as the key leaf.
list user {
key name;
leaf name {
type string;
}
leaf uid {
type uint32;
}
leaf full-name {
type string;
}
}
<user>
<name>john</name>
<uid>1000</uid>
<full-name>John Doe</full-name>
</user>
<user>
<name>jeanne</name>
<uid>1001</uid>
<full-name>Jeanne Doe</full-name>
</user>
// Contents of "bogus-interfaces.yang"
module bogus-interfaces {
    namespace "http://bogus.example.com/interfaces";
    prefix "bogus";

    import "ietf-yang-types" {
        prefix yang;
    }

    revision 2020-01-06 {
        description "Initial revision.";
    }

    container interfaces {
        leaf intf-name {
            type string;
            description "The name of the interface";
        }
        leaf-list trunk-interfaces {
            type string;
            description "List of trunk interfaces";
        }
        container statistics {
            description "A collection of interface statistics.";
            leaf in-octets {
                type yang:counter64;
                description "Total number of octets received on the interface.";
            }
            leaf in-errors {
                type yang:counter32;
                description "Number of inbound packets that contained errors.";
            }
            leaf out-octets {
                type yang:counter64;
                description "Total number of octets sent out on the interface.";
            }
            leaf out-errors {
                type yang:counter32;
                description "Number of outbound packets that contained errors.";
            }
        }
        list user {
            key name;
            leaf name {
                type string;
            }
            leaf uid {
                type uint32;
            }
            leaf full-name {
                type string;
            }
        }
    }
}
rpc activate-software-image {
input {
leaf image {
type binary;
}
}
output {
leaf status {
type string;
}
}
}
notification config-change {
description "The configuration change.";
leaf operator-name {
type string;
}
leaf-list change {
type instance-identifier;
}
}
The notification in Example 12-7 has two nodes: one for the
name of the operator that performed the change and one for
the list of changes performed during the configuration session.
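A client can receive such notifications over a NETCONF session. The following minimal sketch uses the Python ncclient library; the host and credentials are placeholders, and the device must support NETCONF notifications:

#!/usr/bin/env python
# Sketch: subscribe to and print NETCONF notifications such as config-change.
from ncclient import manager

with manager.connect(host="10.10.10.10", port=830, username="admin",
                     password="password", hostkey_verify=False) as device:
    device.create_subscription()  # subscribe to the default notification stream
    notification = device.take_notification(timeout=60)  # wait up to 60 seconds
    if notification is not None:
        print(notification.notification_xml)  # raw XML of the received event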
module ietf-interfaces {
yang-version 1.1;
namespace "urn:ietf:params:xml:ns:yang:ietf-
interfaces";
prefix if;
import ietf-yang-types {
prefix yang;
}
organization
"IETF NETMOD (Network Modeling) Working
Group";
contact
"WG Web:
<https://datatracker.ietf.org/wg/netmod/>
WG List: <mailto:netmod@ietf.org>
revision 2018-02-20 {
description
"Updated to support NMDA.";
... omitted output
You can run this YANG model through pyang and specify the tree format by using the command pyang -f tree ietf-interfaces.yang. This results in the output shown in Example 12-9.
module: ietf-interfaces
  +--rw interfaces
  |  +--rw interface* [name]
  |     +--rw name           string
  |     +--rw description?   string
  |     +--rw type           identityref
Base NETCONF capabilities are identified by URIs that begin with the prefix urn:ietf:params:netconf:, as in the following hello message:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>]]>]]>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>]]>]]>
NXOS_HOST = "10.10.10.10"
NETCONF_PORT = "830"
USERNAME = "admin"
PASSWORD = "password"
if __name__ == '__main__':
get_capabilities()
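A minimal get_capabilities() sketch, using the ncclient library and the connection constants defined above, would print the capability list that follows:

from ncclient import manager

def get_capabilities():
    # Open a NETCONF session and print every capability the server advertises.
    with manager.connect(host=NXOS_HOST, port=NETCONF_PORT, username=USERNAME,
                         password=PASSWORD, hostkey_verify=False) as device:
        for capability in device.server_capabilities:
            print(capability)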
urn:ietf:params:netconf:base:1.0
urn:ietf:params:netconf:base:1.1
urn:ietf:params:netconf:capability:writable-running:1.0
urn:ietf:params:netconf:capability:rollback-on-error:1.0
urn:ietf:params:netconf:capability:candidate:1.0
urn:ietf:params:netconf:capability:validate:1.1
urn:ietf:params:netconf:capability:confirmed-commit:1.1
http://cisco.com/ns/yang/cisco-nx-os-device?revision=2019-02-17&module=Cisco-NX-OS-device&deviations=Cisco-NX-OS-device-deviations
#!/usr/bin/env python
""" Add a loopback interface to a device with NETCONF """
from ncclient import manager
NXOS_HOST = "10.10.10.10"
NETCONF_PORT = "830"
USERNAME = "admin"
PASSWORD = "password"
LOOPBACK_ID = "01"
LOOPBACK_IP = "1.1.1.1/32"
add_loop_interface = """<config>
    <System xmlns="http://cisco.com/ns/yang/cisco-nx-os-device">
        <intf-items>
            <!-- remainder of the loopback payload omitted in this excerpt -->
        </intf-items>
    </System>
</config>"""
def add_loopback():
    # Sketch of the elided function body: push the payload to the device.
    with manager.connect(
        host=NXOS_HOST,
        port=NETCONF_PORT,
        username=USERNAME,
        password=PASSWORD,
        hostkey_verify=False
    ) as device:
        reply = device.edit_config(target="running", config=add_loop_interface)
        print(reply)

if __name__ == '__main__':
    add_loopback()
<rpc-reply message-id="urn:uuid:6de4444b-9193-4b74-837b-e3994d75a319"
    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
    <ok/>
</rpc-reply>
RESTCONF
According to RFC 8040, RESTCONF is “an HTTP-based
protocol that provides a programmatic interface for accessing
data defined in YANG, using the datastore concepts defined in
the Network Configuration Protocol (NETCONF).” Basically,
RESTCONF provides a REST-like interface to the
NETCONF/YANG interface model.
The HTTP GET method is sent in the RESTCONF request by the client to
retrieve data and metadata for a specific resource. It translates into the
NETCONF <get> and <get-config> operations. The GET method is supported
for all resource types except operation resources.
The HTTP POST method is used for NETCONF RPCs and to create a data
resource. It represents the same semantics as the NETCONF <edit-config>
operation with operation="create".
The PUT method is used to create or replace the contents of the target
resource. It is the equivalent of the NETCONF <edit-config> operation with
operation=“create/replace”.
The HTTP DELETE method is used to delete the target resource and is the
equivalent of the NETCONF <edit-config> with operation=“delete”.
https://<ADDRESS>/<ROOT>/data/<[YANG_MODULE:]CONTAINER>/<LEAF>[?<OPTIONS>]
where
ADDRESS is the IP address or the hostname and port number where the
RESTCONF agent is available.
ROOT is the main entry point for RESTCONF requests. Before connecting to
a RESTCONF server, the root must be determined. Per the RESTCONF
standard, devices implementing the RESTCONF protocol should expose a
resource called /.well-known/host-meta to enable discovery of ROOT
programmatically.
data is the RESTCONF API resource type for data. The operations resource
type is also available for access to RPC operations.
[?<OPTIONS>] are the options that some network devices may support, sent as
query parameters that impact the returned results. These options are optional
and can be omitted. The following are some examples of possible options:

content = [all, config, nonconfig]: This query option controls the type of
data returned. If nothing is specified, the default value, all, is used.

fields = expr: This option limits which leafs are returned in the response.
To explore further down the API tree and into the YANG model and retrieve a complete list of all the interfaces, their status, and traffic statistics through the RESTCONF interface, you can perform a GET request on the https://{{host}}:{{port}}/restconf/data/ietf-interfaces:interfaces-state/ endpoint. The response received back from the API should look similar to the one in Figure 12-8.
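A sketch of this request with Python's requests library follows; the host, port, and credentials are placeholders, and the Accept header asks the server for JSON-encoded YANG data:

import requests

url = "https://10.10.10.10:443/restconf/data/ietf-interfaces:interfaces-state"
response = requests.get(
    url,
    auth=("admin", "password"),                        # placeholder credentials
    headers={"Accept": "application/yang-data+json"},  # RESTCONF media type
    verify=False,                                      # lab device, self-signed cert
)
print(response.status_code)
print(response.json())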
MODEL-DRIVEN TELEMETRY
Timely collection of network statistics is critical to ensuring
that a network performs as expected and foreseeing and
preventing any problems that could arise. Technologies such
as SNMP, syslog, and the CLI have historically been used to
gather this state information from the network. Using a pull
model to gather the data, in which the request for network data
originates from the client, does not scale and restricts
automation efforts. With such a model, the network device
sends data only when the client manually requests it. A push
model continuously streams data from the network device to
the client. Telemetry enables the push model, providing near
instantaneous access to operational data. Clients can subscribe
to specific data they need by using standard-based YANG data
models delivered over NETCONF.
Periodic notifications: These notifications are sent with a fixed rate defined in
the telemetry subscription. This data is ideal for device counters or measures
such as CPU utilization or interface statistics because of their dynamic,
always-changing nature.
On-change notifications: These notifications are sent only when the data
changes. These types of notifications might be sent, for example, for faults,
new neighbors being detected, and thresholds being crossed.
Deploying Applications
This chapter covers the following topics:
Application Deployment Models: This section discusses the public, private,
hybrid, and edge cloud application deployment models.
Docker: This section discusses Docker containers and the basic operation of
Docker.
DevOps: Questions 5, 6
Docker: Questions 7, 8
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you with
a false sense of security.
FOUNDATION TOPICS
APPLICATION DEPLOYMENT
MODELS
The concept of cloud has wormed its way into almost every
facet of modern life. It has become central to how we interact
with our friends, buy things, watch TV and movies, and run
our businesses. So what is it really? Well, that is a very
subjective question that has to take into account your own
personal perspective. If you asked five people what cloud is,
you would probably get eight different answers. In its infancy,
nearly half of all business meetings about cloud were spent just
agreeing on what the term actually meant.
NIST DEFINITION
Essential Characteristics
Broad network access: Services are available over the network and accessed
via standard protocols and communications technologies on any type of client
device (mobile phone, tablet, desktop, and so on).
Service Models
The cloud service models mainly differ in terms of how much
control/administration the cloud customer has to perform.
Figure 13-2 shows the following service models:
Private Cloud
A private cloud is provisioned for a single organization that
may have multiple service consumers within its business units
or groups (see Figure 13-3). The organization may own the
private cloud or lease it from another entity. It also does not
have to reside on the organization’s premises and may be in
another facility owned and operated by a third party. The
following are the key characteristics of a private cloud:
Applications are deployed within a private cloud, and the organization has
complete control and responsibility for their maintenance and upkeep.
Public Cloud
A public cloud is provisioned for open utilization by the public
at large (see Figure 13-4). Anyone with a credit card can gain
access to a public cloud offering. A public cloud exists solely
on the premises of the cloud provider. The key characteristics
of a public cloud are as follows:
Hybrid Cloud
A hybrid cloud is composed of one or more cloud deployment
models (private and public, for example) and is used to extend
capabilities or reach and/or to augment capacity during peak
demand periods (see Figure 13-5). The key characteristics of a
hybrid cloud are as follows:
Community Cloud
A community cloud, as shown in Figure 13-6, is unique in that
it is provisioned for the sole utilization of a specific
community of customers, such as a school district or multiple
government agencies. Basically any group of entities that have
a common policy, security, compliance, or mission can join
together and implement a community cloud. The key
characteristics are as follows:
APPLICATION DEPLOYMENT
METHODS
What is IT’s value to the business if you boil it down to the
simplest aspect? IT is charged with supporting and
maintaining the applications that the business relies on. No one
builds a network first and then looks for applications to stick on it.
BARE-METAL APPLICATION
DEPLOYMENT
The traditional application stack, or bare-metal, deployment is
fairly well known. It’s how the vast majority of applications
have been deployed over the past 40 years. As Figure 13-8
shows, one server is devoted to the task of running a single
application. This one-to-one relationship means the application
has access to all of the resources the server has available to it.
If you need more memory or CPU, however, you have to
physically add new memory or a new processor, or you can
transfer the application to a different server.
VIRTUALIZED APPLICATIONS
Virtualization was created to address the problems of
traditional bare-metal server deployments where the server
capacity was poorly utilized. The idea with virtualization is to
build one large server and run more than one application on it.
Sounds simple, right? Many of the techniques mentioned
earlier, from the days of the mainframe, were leveraged to
create an environment where the underlying server hardware
could be virtualized. The hypervisor was created to handle all
of the time slicing and hardware simulation. This made it
possible to run various applications and operating systems at
the same time and reap the benefit of better utilization of the
underlying hardware.
CONTAINERIZED APPLICATIONS
The evolution of applications to leverage microservices
architectures was a very important change that provided
developers with new ways to build massively scaled
applications with tremendous improvements in terms of
availability and fault tolerance. Microservices made available
small custom-built components, but they didn’t address the
need for an efficient infrastructure that could support them and
their dynamic nature. Containers became a popular way to get
application components deployed in a consistent manner.
Imagine being able to have all of the components and
dependencies of your application loaded in an interchangeable
format that you can simply hand off to operations to deploy.
That is what containers offer. Google
created Kubernetes and made it open source to provide an
orchestration and automation framework that can be used to
operationalize this new application deployment model.
SERVERLESS
Serverless is one of the silliest names in the industry. Of
course, there is a server involved! However, this type of
application deployment mechanism is intended to make it even
easier to get an application up and running, and the term
serverless conveys this idea. You simply copy and paste your
code into a serverless instance, and the infrastructure runs your
code without any further configuration. This type of
deployment is also referred to as function as a service, and it
works best for applications that run periodically or that are
part of batch processes. When writing code, you create a
function that you call repeatedly from different places in the code
to make it more efficient. Serverless deployment uses the same
write once/use many times concept: you write some code and
then call it remotely through your application.
Easier to use and write code for: Small teams of developers can use
serverless deployment without needing to involve infrastructure teams or
acquire advanced skills.
Latency: Spin-up time from idle for the function can cause significant delays
that must be accounted for in application design.
Resource constraints: Some workloads need more resources than the service
may typically be allocated for, causing heavy charges, which may make it
cheaper to dedicate infrastructure.
Security and privacy: Most providers are built on proprietary offerings and
use shared resources. Misconfiguration can result in compromises and data
loss, just as with any other cloud service. There are on-premises options for
serverless that can give businesses more control over their data and security
posture.
Vendor lock-in: This is a big one. Each provider has its own tools and
frameworks, which makes it very difficult to switch providers if costs go up or
if services are not working as planned. Migration between providers is not
trivial.
DEVOPS
Agile has dramatically changed the software development
landscape by introducing a more efficient and faster way of
delivering software and value to the business. Much of that
improvement, however, has been focused on the development
process itself rather than on operations.
WHAT IS DEVOPS?
In 2009 two employees from Flickr (an image sharing site),
John Allspaw and Paul Hammond, presented a talk titled “10+
Deploys per Day: Dev and Ops Cooperation at Flickr” to a
bunch of developers at the O’Reilly Velocity conference. In
this talk, Allspaw and Hammond said that the only way to
build, test, and deploy software is for development and
operations to be integrated together. The sheer audacity of
being able to deploy new software so quickly was the carrot
that fueled the launch of the DevOps concept. Over the years
since then, DevOps has moved from being revered by a small
group of zealots and counterculture types to being a very real
and quantifiable way to operate the machinery of software
creation and release. The vast majority of companies that do
software development are looking for ways to capture the
efficiencies of this model to gain a competitive edge and be
better able to adapt to change from customer and industry
perspectives.
Lean: Reducing wasted efforts and streamlining the process are the goals of
Lean. It’s a management philosophy of continuous improvement and learning.
Measurement: Unless you measure your results, you can never improve.
Success with DevOps requires the measurement of performance, process, and
people metrics as often as is feasible.
The more you sample feedback, the faster you can detect
issues and recover from them. In the Toyota Production
System, the manufacturing line has something called an
Andon Cord, which can be pulled by anyone at any time to
halt the production line. This form of feedback is immediate
and highly visible. It also kicks off the process of swarming a
problem: everyone focuses on the issue until it is fixed.

See problems as they occur and swarm them until they are fixed
DEVOPS IMPLEMENTATION
Step 1. The developer pulls the latest code from the version
control system with Git. This ensures that the
developer has the most recent changes and is not
working with an old version.
Step 2. The developer makes changes to the code, adding
new features or fixing bugs. The developer also
writes test cases that will be used to automatically
test the new code to software functional
requirements. The developer eventually stages the
changes in Git for submission.
Step 3. The developer uses Git to push the code and tests to
the version control system (for example, GitHub),
synchronizing the local version with the remote code
repository stored in the version control system.
Step 4. The continuous integration server, such as Jenkins,
has a job that monitors GitHub for new code
submissions. When Jenkins sees a new commit, it
kicks off the automated build and test process.
DOCKER
Some of the best innovations come from reusing older
technologies and bringing them forward in a new way. That’s
exactly what Solomon Hykes and Sebastien Pahl did back in
2010 when they started a small platform as a service (PaaS)
company called dotCloud. They were looking for a way to
make it easier to create applications and deploy them in an
automated fashion into their service. They started an internal
project to explore the use of some interesting UNIX
technologies that were initially developed in the 1970s to
enable process isolation at the kernel level. What would
eventually become Docker started as a project to use these
capabilities to grow the PaaS business. In 2013, the world was
introduced to Docker, which represented a new paradigm in
application packaging and deployment.
UNDERSTANDING DOCKER
Docker containers use two capabilities in the Linux kernel:
namespaces, which provide isolation for running processes,
and cgroups, which make it possible to place resource limits
on what a process can access. These features allow you to run
a Linux system within another Linux system but without
needing to use virtualization technologies to make it work.
From the host operating system’s perspective, you are just
running another application, but the application thinks it is the
only application that is running. Instead of needing to
virtualize hardware, you just share the kernel; you don’t need
to load a full operating system, drivers, and memory
management processes each time you want to run an
application.
Namespaces
Namespaces are essential for providing isolation for
containers. Six namespaces are used for this purpose:
uts (hostname): This namespace controls host and domain names, allowing
unique values per process.
user (UIDs): This namespace is used to map unique user rights to processes.
Cgroups, or control groups, are used to manage the resource
consumption of each container process. You can set how much
CPU and RAM are allocated as well as network and storage
I/O. Each parameter can be managed to tweak what the
container sees and uses. These limits are enforced by cgroups
through the native Linux scheduler and serve to restrict how
much of the host's hardware resources each container can consume.
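As an illustration, the Docker SDK for Python exposes these cgroup limits as parameters when starting a container; the image name and limit values here are arbitrary examples:

#!/usr/bin/env python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# cgroups enforce these limits: half a CPU and 256 MB of RAM for this container.
container = client.containers.run(
    "nginx",
    detach=True,
    mem_limit="256m",        # cap RAM via the memory cgroup
    nano_cpus=500_000_000,   # 0.5 CPU (nano_cpus is in units of 1e-9 CPUs)
)
print(container.short_id)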
DOCKER ARCHITECTURE
Just like your engine in a car, the Docker Engine is the central
component of the Docker architecture. Docker is a
client/server application that installs on top of a Linux, Mac,
or Windows operating system and provides all the tools to
manage a container environment. It consists of the Docker
daemon and the Docker client. Docker allows you to package
up an application with all of its dependent parts (binaries,
libraries, and code) into a standardized package for software
development.
USING DOCKER
The best way to get familiar with Docker is to use it.
Download the appropriate version of Docker for your
computer and launch its containers. It is available for both
macOS, Windows, and Linux allowing you to install the
Docker engine and command-line tools directly on your local
machine.
Note
As of version 1.13, Docker has changed the command line
to include a more logical grouping for people just getting
started. If you are familiar with the old-style command line,
fear not, those commands still work. The additions of the
management command syntax just add a hierarchy that will
differentiate what you are working on instead of everything
being lumped together. This book uses the new Docker
command structure. If you were to get a question on the
exam that uses the old command line, you could simply omit
the first command after docker. For example, the command
docker container ps would be shortened to docker ps.
$ docker container --help

Usage:  docker container COMMAND

Manage containers

Commands:
attach Attach local standard input,
output, and error streams to a running
container
commit Create a new image from a
container's changes
cp Copy files/folders between a
container and the local filesystem
create Create a new container
diff Inspect changes to files or
directories on a container's filesystem
exec Run a command in a running
container
export Export a container's filesystem
as a tar archive
inspect Display detailed information on
one or more containers
kill Kill one or more running
containers
$ docker container ls
$ docker container ls -a
Now you can see the container ID, the image name, the
command that was issued, and its current status, which in this
case is exited. You also see any ports or names that were
assigned to the container.
Note
The older syntax for Docker used the ps command instead
of ls (for example, docker ps -a). You can still type docker
container ps -a and get exactly the same output as the
previous example, even though the ps command doesn’t
appear in the menu. This is for legacy compatibility.
root@a583eac3cadb:/# ps
PID TTY TIME CMD
11 pts/0 00:00:00 ps
To disconnect from the container, you can type exit, but that
would stop the container. If you want to leave it running, you
can hold down Ctrl and press P and then while still holding
Ctrl press Q. This sequence drops you back to the host OS and
leaves the container running. From the host OS, you can type
docker container ls to see that your container is actively
running:
$ docker container ls
root@a583eac3cadb:/# uname
Linux
dfe3a47945d2aa1cdc170ebf0220fe8e4784c9287eb84ab0bab7048307b602b9
$ docker container ls
You can also use docker container port test-nginx to list just
the port mapping.
We can start with some HTML files that nginx can display. I
have a simple HTML file in a directory called html under my
Documents directory:

$ docker container run --name test-nginx -p 8080:80 -d -v ~/Documents/html:/usr/share/nginx/html nginx
d0d5c5ac86a2994ea1037bd9005cc8d6bb3970bf998e5867fe392c2f35d8bc1a

$ docker container stop test-nginx
test-nginx
stop and kill both halt the container but don’t remove it from
memory. If you type docker container ls -a, you will still see
it listed. When you want to remove the container from
memory, you can use the rm command. You can also use the
very handy command prune to remove all halted containers
$ docker container rm test-nginx
test-nginx

$ docker container prune
Deleted Containers:
a583eac3cadbafca855bec9b57901e1325659f76b37705922db67ebf22fdd925
Dockerfiles
A Dockerfile is a script that is used to create a container image
to your specifications. It is an enormous time saver and
provides a ready-made system for replicating and automating
container deployments. Once you have built a Dockerfile, you
can create an infinite number of the same images without
having to manually edit or install software. A Dockerfile is
simply a text file with a structured set of commands that
Docker executes while building the image. Some of the most
common commands are as follows:
FROM: Defines the base image to build on; this must be the first instruction in a
Dockerfile.

MAINTAINER: Lets you select a name and email address for the image
creator.

RUN: Executes a command and caches the result as a new layer in the image.

CMD: Executes a single command within a container. Only one can exist in a
Dockerfile.
WORKDIR: Sets the path where the command defined with CMD is to be
executed.
ADD: Copies the files from the local host or remotely via a URL into the
container’s file system.
USER: Sets the UID (or username) of the user that is to run the container.
FROM ubuntu:16.04

Next, you need to specify the software you want to load and
what components should be added. Just as when loading
software on a Linux server, you first need to run apt-get
update. For this container, you install nginx and any
dependencies:

RUN apt-get update && apt-get install -y nginx

EXPOSE 80 443
VOLUME /usr/share/nginx/html
FROM ubuntu:latest
MAINTAINER Cisco Champion (user@domain.com)
RUN apt-get update && apt-get install -y nginx
EXPOSE 80 443
VOLUME /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
Docker Images
When working with Docker images, you primarily use the
following commands:
push: Pushes a local image to a remote registry for storage and sharing.
The build is successful. You can see that with the -t flag, you
can set a name that can be used to call the image more easily.
The . at the end of the command tells Docker to look for the
Dockerfile in the local directory. You could also use the -f flag
with a path to the location of the Dockerfile, if it is not in your
current directory. Docker also accepts a URL to the
Dockerfile, such as at GitHub. As shown in this example, the
name Dockerfile must be capitalized and one word.
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED
SIZE
myimage latest 04ae8c714993 48 minutes
ago 154MB
When the image is built and ready to go, you can run it by
using the following command:

$ docker container run -d myimage
bf0889f6b27b034427211f105e86cc1bfeae8c3b5ab279ccaf08c114e6794d94
$ docker ps
DOCKER HUB
Docker Hub is a free service that Docker offers to the
community to house and share prebuilt container images. It is
the default image registry for the Docker client.
Docker Hub has a slick web interface that makes it easy to set
up a repository and secure it.
up a repository and secure it. It also integrates with GitHub
and Bitbucket to enable automated container creation from
source code stored in your version control system. It’s a great
way to start building a CI/CD pipeline as it is free to get
started. If you set up a free account, you can take advantage of
Docker Hub’s capabilities immediately. Figure 13-28 shows
the Docker Hub signup page.
Once you have an account, you can search through the many
containers available and can filter on operating system,
category, and whether or not the image is an official Docker
certified image, verified by the publisher, or an official image
published by Docker itself. Figure 13-29 shows the image
search function.
Once your repository is set up, you need to tag your image to
be pushed to Docker Hub. Use docker image ls to list your
images and take note of the image ID:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED
SIZE
Next, you need to issue the docker image tag command with
the image ID of your image and assign it to
username/newrepo:firsttry. newrepo simply identifies the repo
you want to place the image in, and the tag after the : allows
you to differentiate this particular image from any other in the
repository. After tagging the image, you push it to Docker Hub
with docker image push username/newrepo:firsttry.
You can now check back with Docker Hub and see that the
image is now hosted in your private repository (see Figure 13-
32).
After this, any time you want to retrieve the image, you can
type docker image pull username/newrepo:firsttry, and
Docker will load it in your local image storage.
There is quite a bit more that you can do with Docker, but this
section should get you started and focused on what you need
to know for the 200-901 DevNet Associate DEVASC exam.
Make sure to review the Docker documentation and try
these examples on your own computer. Nothing beats practical
experience.
ADDITIONAL RESOURCES
Periodic Table of DevOps Tools: https://xebialabs.com/periodic-table-of-devops-tools/
Application Security
This chapter covers the following topics:
Identifying Potential Risks: This section introduces some of the concepts
involved in application security and shows how to identify potential risks in
applications.
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
FOUNDATION TOPICS
Detect: It is important to install tools that detect any data breaches or attacks
in a time-sensitive manner or while attacks are happening.
Randomize address spaces for data.

Use the built-in protection options in newer software, OSs, and languages.

Avoid sensitive data on public Wi-Fi or computers.

Detect and mark emails and sites as spam.
1. Injection
2. Broken authentication
3. Sensitive data exposure
4. XML external entities
5. Broken access control
6. Security misconfiguration
7. Cross-site scripting
8. Insecure deserialization
9. Using components with known vulnerabilities
10. Insufficient logging and monitoring
ID
Description
Impact (low/moderate/important/critical)
Date published
ID: CVE-2020-5313
Impact: Moderate
It provides a list of live hosts and open ports and identifies the OS of every
connected device. This makes Nmap an excellent system-monitoring and pen-testing tool.

The best way to get familiar with Nmap is to use it. Nmap is
available for macOS and Linux. Example 14-1 shows some of
the command-line options available.
$ nmap --help
Nmap 7.80 ( https://nmap.org )
Usage: nmap [Scan Type(s)] [Options] {target
specification}
TARGET SPECIFICATION:
Can pass hostnames, IP addresses, networks,
etc.
Ex: scanme.nmap.org, microsoft.com/24,
192.168.0.1; 10.0.0-255.1-254
-iL <inputfilename>: Input from list of
hosts/networks
-iR <num hosts>: Choose random targets
--exclude <host1[,host2][,host3],...>:
Exclude hosts/networks
--excludefile <exclude_file>: Exclude list
from file
HOST DISCOVERY:
-sL: List Scan - simply list targets to scan
-sn: Ping Scan - disable port scan
-Pn: Treat all hosts as online -- skip host
discovery
-PS/PA/PU/PY[portlist]: TCP SYN/ACK, UDP or
SCTP discovery to given ports
-PE/PP/PM: ICMP echo, timestamp, and netmask
PROTECTING APPLICATIONS
An important step in protecting applications is to recognize the
risks. Before we talk about the potential risks, it is essential to
understand some key terms and their relationships:
Hacker or attacker: These terms are applied to the people who seek to
exploit weaknesses in software and computer systems for their own gain.
Malicious code: Malicious code is unwanted files or programs that can cause
harm to a computer or compromise data stored on a computer. Malicious code
includes viruses, worms, and Trojan horses.
Tier 1 (Presentation): This tier presents content to the end user through a web
user interface or a mobile app or via APIs. To present the content, it is
essential for this tier to interact with the other tiers. From a security standpoint,
it is very important that access be authorized, timed, and encrypted and that the
attack surface be minimized.
Tier 3 (Data): This is the lowest tier of this architecture, and it is mainly
concerned with the storage and retrieval of application data. The application
data is typically stored in a database server, a file server, or any other device or
media that supports data access logic and provides the necessary steps to
ensure that only the data is exposed, without providing any access to the data
storage and retrieval mechanisms.
Use strong passwords: Using a strong password ensures that only authorized
users can access resources. A strong password is hard to guess, which
decreases security risk.
Encrypt data: Ensure that data cannot be accessed even if storage can be
reached.
Encryption Fundamentals
Cryptography is the science of transmitting information
securely against potential third-party adversaries. The main
objectives of cryptography are the following:
Integrity: Ensuring that the integrity of a message is not changed while the
message is in transit
Availability: Ensuring that systems are available to fulfill requests all the time
Digital Signatures
You can use a private key for encryption and your public key
for decryption. Rather than encrypting the data itself, you can
create a one-way hash of the data and then use the private key
to encrypt the hash. The encrypted hash, along with other
information, such as the hashing algorithm, is known as a
digital signature. Figure 14-4 illustrates the use of a digital
signature to validate the integrity of signed data. The data and
the digital signature are sent across the network. On the
receiving end, two hashes are calculated: one from the received
data and one by decrypting the digital signature with the
sender's public key. If the two hashes match, the data has not
been altered in transit.
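A short sketch of this sign-and-verify flow with the Python cryptography package follows; the key size and message are arbitrary examples:

#!/usr/bin/env python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"message to protect"

# Sign: hash the data and encrypt the hash with the private key.
signature = private_key.sign(data, padding.PKCS1v15(), hashes.SHA256())

# Verify: recompute the hash and check it against the decrypted signature.
# verify() raises InvalidSignature if the data or signature was altered.
private_key.public_key().verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
print("signature is valid")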
Data Security

Database encryption

Enclaves, which are guarded and secure memory segments
As you can see in the figure, the SDLC includes these steps:
Training: Training helps get everyone on the project teams into a security
frame of mind. Teach and train developers on the team to analyze the business
application attack surface as well as the associated potential threats. Not just
developers but all team members should understand the exposure points of
their applications (user inputs, front-facing code, exposed function calls, and
so on) and take steps to design more secure systems wherever possible.
Threat modeling: For every component and module in a system, ask the
“what-how-what” questions: What can go wrong? How can someone try to
hack into it? What can we do to prevent this from happening? Various
frameworks for threat modeling are available, including the following:
Secure coding: Build and use code libraries that are already secured or
approved by an official committee. The industry has several guidelines for
secure coding, and we have listed some of the standard ones here:
Validating inputs
Encoding output
Managing sessions
Code review: Code review is one of the most essential steps in securing an
application. Usually, the rule that you have to keep in mind is that pen testing
and other forms of testing should not be discovering new vulnerabilities. Code
review has to be a way to make sure that an application is self-defending.
Also, it should be conducted using a combination of tools and human effort. It
is important to designate a security lead who can help review code from a
security point of view.
Secure tooling: Static analysis helps catch vulnerabilities. Static analysis tools
detect errors or potential errors in the structure of a program and can be useful
for documentation or understanding a program. Static analysis is a very cost-
effective way of discovering errors. Data flow analysis is a form of static
analysis that concentrates on the use of data by programs and detects some
data flow anomalies.
Testing: Testing includes penetration (pen) testing and system testing, black
box testing, and white box testing. Black box testing is a method used to test
software without knowing the internal structure of the code or program.
Testing teams usually do this type of testing, and programming knowledge is
not required.
Firewalls
Stateful inspection firewalls: Packets are examined with other packets in the
flow. Such firewalls monitor the state of active connections and use this
information to determine which network packets to allow. Stateful firewalls
are advanced compared to stateless packet filtering firewalls; they track each
connection from establishment to teardown and filter traffic accordingly.
nslookup stanford.edu: Running nslookup with a simple domain name returns
the IP address of the stanford.edu domain:

$ nslookup stanford.edu
Server: 2601:647:5500:1ea:9610:3eff:fe18:22a5
Address: 2601:647:5500:1ea:9610:3eff:fe18:22a5#53
Non-authoritative answer:
Name: stanford.edu
Address: 171.67.215.200
Non-authoritative answer:
stanford.edu mail exchanger = 10 mxa-
00000d03.gslb.pphosted.com.
stanford.edu mail exchanger = 10 mxb-
00000d03.gslb.pphosted.com.
stanford.edu nameserver = ns5.dnsmadeeasy.com.
stanford.edu nameserver = ns6.dnsmadeeasy.com.
stanford.edu nameserver = ns7.dnsmadeeasy.com.
stanford.edu nameserver = argus.stanford.edu.
stanford.edu nameserver =
atalante.stanford.edu.
Load Balancing
Least connected: Selects the server with the lowest number of connections;
this is recommended for more extended sessions (a minimal sketch of this
selection follows this list)
Cookie marking: Adds a field in the HTTP cookies, which could be used for
decision making
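The least-connected decision itself is simple to express in code; in this minimal sketch, the server names and connection counts are made up:

#!/usr/bin/env python
# Pick the backend server that currently has the fewest active connections.
active_connections = {"server-a": 12, "server-b": 7, "server-c": 9}

def least_connected(servers):
    # min() over the server names, ordered by their connection counts
    return min(servers, key=servers.get)

target = least_connected(active_connections)
print(f"forwarding new session to {target}")  # server-b in this example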
Security: The web servers or application servers are not visible from the
external network, so malicious clients cannot access them directly to exploit
any vulnerabilities. Many reverse proxy servers include features that help
Scalability and flexibility: Clients see only the reverse proxy’s IP address.
This is particularly useful in a load-balanced environment, where you can
scale the number of servers up and down to match fluctuations in traffic
volume.
Web acceleration: Acceleration in this case means reducing the time it takes
to generate a response and return it to the client. Some of the techniques for
web acceleration include the following:
Caching: Before returning the backend server’s response to the client, the
reverse proxy stores a copy of it locally. When the client (or any other
client) makes the same request, the reverse proxy can provide the
response itself from the cache instead of forwarding the request to the
backend server. This both decreases response time to the client and
reduces the load on the backend server. This works great for “static”
content, but there are new techniques that can be used for “dynamic”
content as well.
Content filtering: This involves monitoring traffic to and from the web server
for potentially sensitive or inappropriate data and taking action as necessary.
Paragraph: Nmap (page 426)
Paragraph: Firewalls (page 437)
Infrastructure Automation
This chapter covers the following topics:
Controller Versus Device-Level Management: This section compares and
contrasts the two network device management options currently available:
controller-based and device-level management.
Automation Tools: This section covers popular automation tools that are used
for configuration management and network automation.
Infrastructure as Code: Questions 3, 4
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you
with a false sense of security.
FOUNDATION TOPICS
INFRASTRUCTURE AS CODE
Inspired by software development practices, infrastructure as
code is a new approach to infrastructure automation that
focuses on consistent, repeatable steps for provisioning,
configuring, and managing infrastructure. For a long time,
infrastructure was provisioned and configured manually, a slow
and error-prone process. Infrastructure as code takes one of two
approaches:

Declarative: With the declarative approach, the desired state of the system is
defined and then the system executes all the steps that need to happen in order
to attain the desired state.

Imperative: With the imperative approach, the exact commands and steps
needed to attain the desired state are defined explicitly, in order.
CONTINUOUS
INTEGRATION/CONTINUOUS
DELIVERY PIPELINES
Software is quickly becoming pervasive in all aspects of our
lives. From smartphones to smart cars and smart homes, we
interact with software hundreds or even thousands of times
each day. Under the DevOps umbrella, there have been several
efforts to improve software development processes in order to
increase the speed, reliability, and accuracy of software
development. Continuous integration/continuous delivery
(CI/CD) pipelines address all these requirements and more. All
the software development processes—from writing code to
building, testing, and deploying—were manually performed
for years, but CI/CD pipelines can be used to automate these
steps. Based on specific requirements for each company,
different tools and different solutions are used to implement
these pipelines. Some companies implement only continuous
integration solutions, and others take advantage of the whole
CI/CD pipeline, automating their entire software development
process.
Cisco CML/VIRL could be used to create a test network and verify the impact
of the configuration changes in the network.
Ansible could be used to automate the configuration of all the elements in the
network.
AUTOMATION TOOLS

It is common practice for network administrators to perform
configuration and management tasks manually, device by device;
the automation tools covered in this section remove much of that
repetitive work.
Ansible
Ansible is a configuration management and orchestration tool
that can be used for a variety of purposes. It can be used to
configure and monitor servers and network devices, install
software, and perform more advanced tasks such as continuous
deployments and zero-downtime upgrades. It was created in
2012 and acquired by RedHat in 2015. Ansible is appropriate
for both small and large environments and can be used for
managing a handful of servers and network devices or for
managing thousands of devices. It is agentless, meaning there
is no software or service that needs to be installed on the
managed device. Ansible connects to managed devices just as
a regular system or network administrator would connect: over
SSH or via APIs.
Control node: The control node is any machine that has Ansible installed. All
flavors of Linux and BSD operating systems are supported for the control
node. Any computer, laptop, virtual machine, or server with Python installed
can be an Ansible control node. The exception to this rule is that Microsoft
Windows machines currently cannot be used as control nodes. Multiple control
nodes can run at the same time in the same environment. It is a best practice to
place the control nodes close to the systems that are being managed by Ansible
to avoid network delays.
Managed node: The managed nodes in the Ansible taxonomy are the network
devices or servers that are being managed by Ansible. They are also called
hosts and do not need to have Ansible installed on them or even Python, as
discussed later in this chapter.
Task: Ansible tasks are the units of action. You can run them as ad hoc
commands by invoking them as follows:

$ ansible [pattern] -m [module] -a "[module options]"

A minimal Ansible project directory contains an inventory file and a
playbook:

$ tree
.
├── hosts
└── site.yml

0 directories, 2 files
The hosts file contains an inventory of all the devices that will
be managed by Ansible. For example, the hosts file can look
like this:
$ cat hosts
[iosxe]
10.10.30.171
[iosxe:vars]
ansible_network_os=ios
ansible_connection=network_cli
The brackets are used to define group names. Groups are used
to classify hosts that share a common characteristic, such as
function, operating system, or location. In this case, the [iosxe]
group of devices contains only one entry: the management IP
address of a Cisco CSR1000v router. The vars keyword is
used to define variables. In this example, two variables are
defined for the iosxe group. The ansible_network_os variable
specifies that the type of operating system for this group of
devices is IOS, and the ansible_connection variable specifies
that Ansible should connect to the devices in this group by
using network_cli, which means SSH. Variables can be defined
at the group level, as in this example, or per host.
$ cat site.yml
---
- name: Test Ansible ios_command on Cisco IOS
XE
hosts: iosxe
tasks:
- name: show version and ip interface brief
ios_command:
commands:
- show version
- show ip interface brief
PLAY RECAP *********************************************************************
The names of the play and the task are displayed to the screen,
as are a play recap and color-coded output based on the status
of the playbook execution. In this case, the playbook ran
successfully, as indicated by the value 1 for the ok status. You
can reuse the output of the two show commands in the
playbook to build custom automation logic, or you can display
it to the screen in JSON format by using the -v option with the
ansible-playbook command.
Puppet
Puppet is a configuration management tool used to automate
configuration of servers and network devices. Puppet was
founded in 2005, making it one of the most venerable
automation tools on the market today. It was created as an
open-source project—and it still is today, but it is also
available as a commercial offering called Puppet Enterprise
that was created by Puppet Labs in 2011. It is written in Ruby
and defines its automation instructions in files called Puppet
manifests. Whereas Ansible is agentless, Puppet is agent
based. This means that a software agent needs to be installed
on each device that is to be managed with Puppet. This is a
drawback for Puppet as there are instances of network devices
in which third-party software agents cannot be easily installed.
Proxy devices can be used in these situations, but the process
is less than ideal and means Puppet has a greater barrier to
entry than agentless tools. A Puppet manifest that configures an
interface on a Cisco Nexus switch, mirroring the Chef recipe
shown later in this chapter, looks like this:

cisco_interface { "Ethernet1/3" :
  shutdown            => false,
  switchport_mode     => disabled,
  ipv4_address        => "10.1.1.1",
  ipv4_netmask_length => 24,
}
Chef
Chef is another popular open-source configuration
management solution that is similar to Puppet. It is written in
Ruby, uses a declarative model, is agent based, and refers to its
automation instructions as recipes and cookbooks. Several
components are part of the Chef Infra offering, as shown in
Figure 15-5.
Chef Infra Server acts as the main hub for all the configuration
information. Chef Infra Client, which is installed on each
managed device, connects to Chef Infra Server to retrieve the
configuration data that will be enforced on the managed client
device. After each Chef Infra Client run finishes, the run data
is uploaded to Chef Infra Server for troubleshooting and
historical purposes. The actual managed device configuration
work is done as much as possible through Chef Infra Client on
the managed device; offloading these tasks from Infra Server
makes the Chef solution more scalable. Infra Server also
indexes all the infrastructure data, including environments,
nodes, and roles, making them available for searching. The
Chef management console is a web-based interface through
which users can manage nodes, cookbooks and recipes,
policies, roles, and so on.
A Chef node is any device that has the Chef Infra Client
software installed, which means it is managed by Chef Infra. A
large variety of nodes are supported by Chef Infra, including
virtual and physical servers; cloud-based nodes running in
public and private clouds; network devices from vendors such
as Cisco, Arista, F5, and others; and container environments.
The main roles of Chef Infra Client are to register the node
and authenticate to Chef Infra Server using RSA public key
pairs, synchronize cookbooks, configure the node to match the
desired state specified in the cookbooks, and report back to
Infra Server with status reports. Chef has a tool built in to the
Infra Client called ohai that is used to collect system
information such as the operating system, network, memory,
disk, CPU, and other data; ohai is similar to facter in Puppet.
cisco_interface 'Ethernet1/3' do
action :create
ipv4_address '10.1.1.1'
ipv4_netmask_length 24
ipv4_proxy_arp true
ipv4_redirects true
shutdown false
switchport_mode 'disabled'
end
Service manager
Device manager
Mapping logic
Configuration database
The two primary technologies that Cisco NSO is built on are
NETCONF and YANG.
Service dry-run: NSO calculates what the device changes would be if the
service model were to be pushed on the devices in the network.
$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios-cli-3.8 3 ios
An ordered list of all the devices that are running at any point
on the NSO server can be obtained by using the list option for
ncs-netsim, as shown in the following output:
$ ncs-netsim list
$ ncs_cli -C -u admin
When all three simulated IOS devices are onboarded and the
configuration changes are committed, the devices can be
displayed through a GET call over the RESTCONF interface.
Cisco NSO exposes its full functionality over this RESTCONF
interface, which is available by default at
http://<NSO_Server_IP>:8080/restconf/. A list of all
the devices that have been onboarded can be obtained by
performing a GET call on the following RESTCONF URI
resource: http://<NSO_Server_IP>:8080/restconf/data/tailf-
ncs:devices/device. The curl command to test this GET call
looks similar to the following (default credentials assumed):

$ curl -u admin:admin -H "Accept: application/yang-data+json" http://<NSO_Server_IP>:8080/restconf/data/tailf-ncs:devices/device
{
"tailf-ncs:device": [
{
"name": "ios0",
"address": "127.0.0.1",
"port": 10022,
"authgroup": "default",
"device-type": {
"cli": {
"ned-id": "cisco-ios-cli-3.8:cisco-
lab:
description: ''
notes: ''
timestamp: 1581966586.8872395
title: DEVASC official guide
version: 0.0.3
nodes:
- id: n0
label: iosv-0
node_definition: iosv
x: -500
y: 50
configuration: ''
image_definition: iosv-158-3
tags: []
interfaces:
- id: i0
label: Loopback0
type: loopback
- id: i1
slot: 0
label: GigabitEthernet0/0
type: physical
- id: n1
label: csr1000v-0
node_definition: csr1000v
...omitted output
Whenever a change is made to any of these files, a build automation tool such
as Jenkins or Drone can be used to monitor and detect the change in version
and initiate the automation processes.
If the tests pass, a configuration automation solution like Ansible could use
playbooks and automation scripts to apply the tested configuration changes in
the production network.
Profiling the current status of a network and taking a snapshot of both the
configuration status as well as the operational data of the network: This
can be done before and after a configuration change or a software
upgrade/downgrade is performed to ensure that the network still performs
within desired parameters. For example, a snapshot of the network is taken
with pyATS before a software upgrade is performed, and key metrics are
noted, such as number of BGP sessions in the established state or the number
and type of routing table entries or any other metric that is considered critical
for that environment. The software upgrade is completed, and then a new
pyATS snapshot is taken to ensure that those critical metrics have values
within expected parameters. pyATS offers all the tooling to be able to
automatically perform the network snapshots, compare key metric values, set
pass/fail criteria, and even generate reports.
pyats[full] installs all pyATS components, the pyATS library framework, and
optional extras.
pyats[library] installs all the pyATS components without the optional extras.
pyats without any options installs just the pyATS test infrastructure
framework.
---
devices:
csr1000v-1:
type: 'router'
os: 'iosxe'
platform: asr1k
alias: 'uut'
credentials:
default:
username: vagrant
password: vagrant
connections:
cli:
protocol: ssh
port: 2222
ip: "127.0.0.1"
#! /usr/bin/env python
from genie.testbed import load
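# A minimal continuation (sketch): load the testbed above and connect.
# The filename 'testbed.yaml' and the parse command are assumptions.
testbed = load('testbed.yaml')
device = testbed.devices['csr1000v-1']
device.connect()
print(device.parse('show version'))  # returns structured, parsed output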
Network Fundamentals
This chapter covers the following topics:
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
8. What is the bit pattern in the first byte for Class C IPv4
addresses?
1. 110xxxxx
2. 11110xxx
3. 10xxxxxx
4. 1110xxxx
FOUNDATION TOPICS
Networks of devices have been built for more than 50 years
now. The term device is used here to mean any piece of
electronic equipment that has a network interface card of some
sort. End-user devices or consumer devices such as personal
computers, laptops, smartphones, tablets, and printers as well
as infrastructure devices such as switches, routers, firewalls,
and load balancers have been interconnected in both private
and public networks for many years. What started as islands of
disparate devices connected on university campuses in the
1960s and then connected directly with each other on
ARPANET has evolved to become the Internet, where
everyone and everything is interconnected. This chapter
discusses the fundamentals of networking.
The OSI model didn’t find much success when it was initially
developed, and it was not adopted by most network equipment
vendors at the time, but it is still used for education purposes
and to emphasize the importance of separating data
transmission in computer networks into layers with specific
functions.
The Internet Protocol suite and the main protocols that make
up the model, TCP and IP, started as a research project by the
U.S. Department of Defense through a program called Defense
Advanced Research Projects Agency (DARPA) in the 1960s.
It was the middle of the Cold War, and the main purpose of the
project was to build a telecommunications network that would
be able to withstand a nuclear attack and still be able to
function if any of the devices making up the network were
destroyed or disabled.
File Transfer Protocol (FTP) is used for transferring files between a client and
a server.
At the application layer, the general term for the PDU is data.
SWITCHING CONCEPTS
Data frame switching is a critical process in moving data
traffic from a source network endpoint to a destination
endpoint within a local-area network (LAN). This section
introduces several networking concepts: Ethernet, MAC
addresses, VLANs, and switching.
Preamble: This field consists of 8 bytes of alternating 1s and 0s that are used
to synchronize the signals of the sender and receiver.

Destination Address: This field contains the MAC address of the destination device.

Source Address: This field contains the MAC address of the source device.
Type: This field contains a code that identifies the network layer protocol.
Data: This field contains the data that was received from the network layer
and that needs to be transmitted to the receiver.
MAC Addresses
Media Access Control (MAC) addresses are used to enable
communication between devices connected to a local network.
Several types of MAC addresses are used to accommodate the
different types of network communications. There are three
major types of network communications:
Unicast: In this type of communication, data frames are sent between one
specific source and addressed to one specific destination. This type of
transmission has one sender and one receiver, and it is the most common type
of traffic on any network.
Broadcast: In this type of communication, data frames are sent from one
source address to all other addresses connected on the same LAN. There is one
sender, but the information is sent to all devices connected to the network.
When the least significant bit of a MAC address’s first byte is 0, it means the
frame is meant to reach only one receiving NIC, which means unicast traffic.
When the least significant bit of the first octet of a MAC address is 1, it means
the frame is a multicast frame, and the receiving NICs will process it if they
were configured to accept multicast MAC addresses. An example of a
multicast MAC address that is used by Cisco Discovery Protocol (CDP) is 01-
00-0C-CC-CC-CC.
When all the bits are set to 1, it means the frame is a broadcast frame, and it is
being received by all the NICs on that network segment.
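These bit checks can be expressed in a few lines of Python; this is a sketch, and the helper name is made up:

#!/usr/bin/env python
# Classify a MAC address as unicast, multicast, or broadcast by its bits.
def mac_traffic_type(mac: str) -> str:
    hex_digits = mac.lower().replace(":", "").replace("-", "").replace(".", "")
    if hex_digits == "f" * 12:
        return "broadcast"              # all 48 bits set to 1
    if int(hex_digits[:2], 16) & 0x01:  # least significant bit of the first byte
        return "multicast"
    return "unicast"

print(mac_traffic_type("01-00-0C-CC-CC-CC"))  # multicast (used by CDP)
print(mac_traffic_type("00:00:0c:59:be:ef"))  # unicast
print(mac_traffic_type("ffff.ffff.ffff"))     # broadcast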
0000.0c59.beef
00:00:0c:59:be:ef
00-00-0C-59-BE-EF
Switching
Switching is the process through which a data frame is
forwarded from its source toward its destination by a Layer 2
device called a switch. In a typical LAN, all devices connect to
the network either through an Ethernet port available on the
NIC or through a wireless NIC that connects to a wireless
access point that is connected to a switch port. In the end,
whether wired or wireless, all client devices connect to
Ethernet switches. Forwarding of data traffic between devices
at Layer 2 is based on the MAC address table that is stored on
Based on this table, the switch can forward the Layer 2 data
frames toward their destination. If a data frame with MAC
destination address 0050.7966.6705 arrives at the switch with
this MAC address table, the switch will forward that frame out
its GigabitEthernet0/3 port toward the host with that specific
MAC address.
ROUTING CONCEPTS
Routing, or Layer 3 packet forwarding, is the process of
selecting a path through a network. To understand routing, you
need to understand IPv4 and IPv6 addresses. Then you can
learn about the routing process itself.
IPv4 Addresses
The most common protocol at the Internet layer is Internet
Protocol (IP). The Internet, the worldwide network connecting
billions of devices and users, is built on top of IP. There are
currently two versions of IP available: IP version 4 (IPv4) and
IP version 6 (IPv6). Both protocol versions are used in the
Network ID: The network address part starts from the leftmost bit and extends
to the right. Devices on a network can communicate directly only with devices
that are in the same network. If the destination IP address is on a different
network than the network the source IP address is on, a router needs to forward
the traffic between the two networks. The router maintains a routing table with
routes to all the networks it knows about.
Host ID: The host address part starts from the rightmost bit and extends to the
left. The host ID uniquely identifies a specific device connected to the
network. Although the host ID can be the same between different devices on
different networks, the combination of network ID and host ID must be unique
throughout the network.
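As a quick illustration (a sketch using Python’s standard ipaddress module, with a hypothetical address), the two parts can be separated programmatically:

import ipaddress

# Split a hypothetical address, 192.168.10.25/24, into its two parts.
iface = ipaddress.ip_interface("192.168.10.25/24")
print(iface.network.network_address)                        # network ID: 192.168.10.0
print(int(iface.ip) - int(iface.network.network_address))   # host ID: 25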
Class A uses the first byte for the network ID and the other 3
bytes (or 24 bits) for the host ID. As you can imagine,
networks with 2^24 (more than 16 million) hosts in the same
network are nonexistent; technologies such as classless
interdomain routing (CIDR) and variable-length subnet
masking (VLSM) have been developed to address the wastage
of IPv4 addresses with the original definition of classes of
IPv4 addresses. The first bit of the first byte in a Class A IP
address is always 0. This means that the lowest number that
can be represented in the first byte is 00000000, or decimal 0,
and the highest number is 01111111, or decimal 127. The 0
and 127 Class A network addresses are reserved and cannot be
used as routable network addresses. Any IPv4 address that has
a value between 1 and 126 in the first byte is a Class A address.
Class B uses the first 2 bytes for the network ID and the last 2
bytes for the host ID. The first 2 bits of the first byte in a Class
B IPv4 address are always 10. The lowest number that can be
represented in the first byte is 10000000, or decimal 128, and
the highest number that can be represented is 10111111, or
decimal 191. Any IPv4 address that has a value between 128
and 191 in the first byte is a Class B address.
Class C uses the first 3 bytes for the network ID and the last
byte for the host ID. Each Class C network can contain up to
254 hosts. A Class C network always begins with 110 in the
first byte, meaning it can represent networks from 11000000,
or decimal 192, to 11011111, or decimal 223. If an IP address
contains a number in the range 192 to 223 in the first byte, it is
a Class C address.
Class A: The default address mask is 255.0.0.0 or /8, indicating that the first 8
bits of the address contain the network ID, and the other 24 bits contain the
host ID.
Class B: The default address mask is 255.255.0.0 or /16, indicating that the
first 16 bits of the address contain the network ID, and the other 16 bits
contain the host ID.
Class C: The default address mask is 255.255.255.0 or /24, indicating that the
first 24 bits of the address contain the network ID, and the last 8 bits contain
the host ID.
There are still 5 bits left for hosts on each subnet, which
results in 2^5 = 32 addresses, of which 2 are reserved (1 for the
network address and 1 for the broadcast address), leaving 30
usable host addresses per subnet.
Subnet Mask | Binary Representation | Prefix | Subnets | Hosts per Subnet
255.255.255.252 | 11111111.11111111.11111111.11111100 | /30 | 64 | 2
255.255.255.248 | 11111111.11111111.11111111.11111000 | /29 | 32 | 6
255.255.255.240 | 11111111.11111111.11111111.11110000 | /28 | 16 | 14
255.255.255.224 | 11111111.11111111.11111111.11100000 | /27 | 8 | 30
255.255.255.192 | 11111111.11111111.11111111.11000000 | /26 | 4 | 62
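Python’s standard ipaddress module can reproduce the numbers in this table; the following sketch (ours) prints the mask and the usable host count for each prefix length:

import ipaddress

# Reproduce the table: for each prefix, print mask and usable host count.
for prefix in (30, 29, 28, 27, 26):
    net = ipaddress.ip_network(f"192.168.1.0/{prefix}")
    print(net.netmask, f"/{prefix}", net.num_addresses - 2)
# 255.255.255.252 /30 2
# 255.255.255.248 /29 6
# ... and so on, matching the Hosts per Subnet column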
IPv6 Addresses
IPv4 has valiantly served the Internet from the early days,
when only a handful of devices were interconnected at college
campuses in the United States for research purposes, to today,
when billions of devices are interconnected and exchanging
massive amounts of data. The longevity of IPv4 and the
addressing scheme it defines is a testament to how robust and
scalable IP actually is. Even so, IPv4 faces limitations,
including the insufficiency of 32-bit addresses, the need for the
IP header to be streamlined and simplified, and the need for
better support for features such as security and quality of
service. IPv6 addresses these limitations and defines several
types of addresses:
Global
Link-local
Multicast
Loopback
Unspecified
Routing
Routing is the process of selecting a path through a network.
This functionality is performed by routers, which are devices
made to interconnect networks, find the best paths for data
packets from their source toward their destination, and route
data based on a routing table. As switches have a critical role
at Layer 2 in connecting devices in the same physical area and
creating local-area networks, routers have a critical role at
Layer 3 in interconnecting networks and creating wide-area
networks. The main functions of a router are the following:
Path determination
Packet forwarding
After determining the correct path for the packet, the router
forwards the packet through a network interface toward the
destination. Figure 16-10 shows a simple network diagram.
The routing table for the network in Figure 16-10 might look
like Table 16-8.
Following the same logic, let’s assume next that the router
receives a packet on its GigabitEthernet0/0 interface with
destination IP address 172.16.0.25. The router performs a
routing table lookup and finds that it doesn’t have an explicit
route for that network, but it does have a default route, so the
router forwards the packet out the interface associated with the
default route. Routes can make their way into the routing table
in several ways (a lookup sketch follows this list):
Static routes
Dynamic routes
Default routes
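The lookup just described, including the fall-through to the default route, can be sketched with Python’s standard ipaddress module; the routes and interface names below are hypothetical, loosely mirroring the example above:

import ipaddress

# Hypothetical routing table: (prefix, egress interface).
routes = [
    (ipaddress.ip_network("10.0.0.0/24"),    "GigabitEthernet0/1"),
    (ipaddress.ip_network("192.168.0.0/16"), "GigabitEthernet0/2"),
    (ipaddress.ip_network("0.0.0.0/0"),      "GigabitEthernet0/0"),  # default route
]

def lookup(dst: str) -> str:
    dst_ip = ipaddress.ip_address(dst)
    # Longest prefix match: among matching routes, prefer the most specific.
    matches = [(net, iface) for net, iface in routes if dst_ip in net]
    net, iface = max(matches, key=lambda m: m[0].prefixlen)
    return iface

print(lookup("172.16.0.25"))   # no explicit route -> default route, Gi0/0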
Networking Components
This chapter covers the following topics:
Over the past several decades, Cisco has helped build the
backbone of what we call the Internet today, developing
networks big and small, including enterprise
networks and service provider networks. In Chapter 16,
“Network Fundamentals,” you learned some networking
basics. This chapter covers networking components, such as
the following:
Networking elements such as hubs, switches, and routers
Network device functions, including the management, data, and control planes
Software-defined networking
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter. If you do not know the answer to a
question or are only partially sure of the answer, you should
mark that question as wrong for purposes of self-assessment.
Giving yourself credit for an answer that you correctly guess
skews your self-assessment results and might provide you with a false sense of security.
FOUNDATION TOPICS
Networks can be classified according to several criteria:
Topology
Standard
Size or location
Bus: In a bus network, all the elements are connected one after the other.
Star: In a star topology, all nodes in the system are connected to a central
point.
Ring: A ring topology is very similar to a star except that a token is passed
around, and a node can transmit only when it has a valid token.
ELEMENTS OF NETWORKS
Now that you have seen the various types of networks, let’s
look at what constitutes a network. A network has several
basic components:
Hubs
Switches
Bridges
Routers
Hubs
A hub is a simple network device that operates at Layer 1: it
regenerates any signal received on one port and repeats it out
all of its other ports, so every frame shows up at every
connected device.
Bridges
A bridge connects two physical network segments that use the
same protocol. Each network bridge builds a MAC address
table of all devices that are connected on its ports. When
packet traffic arrives at the bridge and its target address is local
(on the same segment from which the traffic arrived), the
bridge filters the frame and does not forward it.
If the bridge is unable to find the target address on the side that
received the traffic, it forwards the frame across the bridge,
hoping the destination will be on the other network segment.
In some cases, multiple bridges are cascaded.
Switches
Recall our look at the OSI model layers in Chapter 16. The
switch described so far in this section is a Layer 2 switch,
which works at Layer 2 of the OSI model (the data link layer).
Say that you have a single switch. By default, all of the ports
on this switch are in one VLAN, such as VLAN 1. Any port
can be configured to be an access port or a trunk port:
Access port: An access port is essentially a port that can be assigned to only
one VLAN. You can change the port membership by specifying the new
VLAN ID.
Trunk port: A trunk port can carry traffic for multiple VLANs at the same
time, typically using IEEE 802.1Q tagging to identify which VLAN each
frame belongs to.
Routers
A router is a device that forwards packets between networks
via the network layer of the OSI model (Layer 3). It forwards,
or routes, packets based on the IP address of the destination
device. A router also has the intelligence to determine the best
path to reach a particular network or device. It can determine
the next hop, or routing destination, by using routing protocols
such as Routing Information Protocol (RIP), Open Shortest
Path First (OSPF), and Border Gateway Protocol (BGP).
Routing in Software
As discussed earlier in this chapter, routers participate in route
discovery, path determination, and packet forwarding. Route
discovery and path determination are part of the routing
protocol, and as the router becomes part of this discovery
process, it builds its routing table. The actual packet forwarding
can be performed in one of several ways:
Process switching: The CPU is involved for every packet that is routed and
requires a full routing table lookup. Process switching, shown in Figure 17-13,
is the slowest type of forwarding as each of the packets that the interface
driver receives (step 1 in the figure) is punted and put in the input queue for
the processor for further action (step 2). In this case, the processor receives the
packets (step 3), determines the next interface it needs to send them to, and
rewrites the headers as needed and puts the new packets in the output queue
(step 4). Finally, the kernel driver for the new network interface picks up the
packets and transmits them on the interface (step 5).
Process switching is like doing math on paper: Write down each step and solve
the problem.
Fast switching, using the route cache, is like solving a problem by hand once
and then simply recalling the answer from memory if the same problem is
given again.
CEF switching is like using formulas in an Excel spreadsheet, and when the
numbers hit the cells, the answer is automatically calculated.
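The fast-switching analogy maps naturally onto memoization. The following toy Python sketch (ours; the destination address and interface are hypothetical, and this is not how Cisco implements it) does the expensive lookup once per destination and then answers from a route cache:

# Toy illustration of fast switching: do the "slow" routing table
# lookup once per destination, then reuse the cached answer.
route_cache = {}

def slow_lookup(dst: str) -> str:
    print(f"full table lookup for {dst}")   # the expensive, per-packet work
    return "GigabitEthernet0/1"             # hypothetical result

def fast_switch(dst: str) -> str:
    if dst not in route_cache:              # first packet: process switched
        route_cache[dst] = slow_lookup(dst)
    return route_cache[dst]                 # subsequent packets: cache hit

fast_switch("198.51.100.7")   # triggers the full lookup
fast_switch("198.51.100.7")   # answered from the route cache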
Functions of a Router
As discussed earlier in this chapter, a router is a device that
appropriately and efficiently directs traffic from an incoming
interface to the correct outgoing interface. Apart from this
primary function, a router has several other functions:
Domain name proxy: Modern firewalls interact with cloud DNS services such
as Cisco Umbrella or OpenDNS to resolve DNS queries. The queries and, in
turn, the hosts can be stopped at the source if they are deemed harmful or
malicious.
SOFTWARE-DEFINED
NETWORKING
The term software-defined networking (SDN) applies to a type
of network architecture design that enables IT managers and
network engineers to manage, control, and optimize network
resources and services programmatically. Understanding SDN
starts with the functional planes of a network device:
Data plane: As described earlier in this chapter, a router can route packets
faster by using techniques such as fast switching or CEF switching. These
techniques for punting packets from the incoming interface to the outgoing
interface operate on what is traditionally known as the data plane. The main
objective of the data plane is to determine how the incoming packet on a port
must be forwarded to an outgoing port, based on specific values in the packet
headers.
Control plane: Routing protocols and other protocols make up the control
plane. The control plane determines how a packet is routed among routers or
other network elements as the packet traverses end-to-end from source host to
destination host. The control plane also deals with packets that are destined for
the router itself. Device and network management are also part of the control
plane. Management functions include initializing interfaces with default
configurations, IP addresses, policies, user accounts, and so on.
SDN Controllers
OpenFlow: The ONF manages this standard used for communication between
the SDN controller and managed network devices.
Cisco Application Centric Infrastructure (ACI): ACI was the first Cisco
SDN solution, and it has three components:
ACI fabric: This is the connection between spine and leaf switches. In
the ACI world, spine and leaf are the Cisco Nexus 9000 Series switches,
which act as the control plane and the data plane of the ACI.
vEdge (data plane): The vEdge router’s job is to forward packets based
on the policies configured with vSmart. The vEdge keeps a constant
connection with vSmart to get updates.
IP Services
This chapter covers the following topics:
Common Networking Protocols: This section introduces protocols that are
commonly used in networks and that you should be familiar with.
Network Address Translation (NAT), while not a protocol per se, deals with
translations between private IP networks and public, globally routable IP
networks.
Network Time Protocol (NTP) is used to synchronize date and time between
devices on a network.
Caution
The goal of self-assessment is to gauge your mastery of the
topics in this chapter.
FOUNDATION TOPICS
COMMON NETWORKING
PROTOCOLS
The following sections cover common networking protocols
that network engineers and software developers alike should
be familiar with. Knowledge of these protocols and
technologies will give you better insight into how networks
interact with applications in order to offer network clients an
optimized and seamless experience.
Obtaining network configuration via DHCP involves four phases:
Server discovery
Lease offer
Lease request
Lease acknowledgment
Server Discovery
Lease Offer
A DHCP server that receives a DHCPDISCOVER message
from a client responds on UDP port 68 with a DHCPOFFER
message addressed to that client. The DHCPOFFER message
contains initial network configuration information for the
client. There are several fields in the DHCPOFFER message
that are of interest for the client:
chaddr: This field contains the MAC address of the client to help the client
know that the received DHCPOFFER message is indeed intended for it.
yiaddr: This field contains the IP address assigned to the client by the server.
options: This field contains the associated subnet mask and default gateway.
Other options that are typically included in the DHCPOFFER message are the
IP address of the DNS servers and the IP address lease and renewal time.
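To make these fields concrete, here is a minimal Python sketch of the DHCPOFFER information discussed above; all values are invented, and the dictionary layout is ours rather than the actual wire format:

# Hypothetical DHCPOFFER contents, reduced to the fields discussed above.
dhcp_offer = {
    "chaddr": "00:50:79:66:67:05",        # client MAC the offer is addressed to
    "yiaddr": "192.168.10.25",            # IP address offered to the client
    "options": {
        "subnet_mask": "255.255.255.0",
        "router": "192.168.10.1",         # default gateway
        "dns_servers": ["208.67.222.222", "208.67.220.220"],
        "lease_time": 86400,              # lease duration, in seconds
    },
}

client_mac = "00:50:79:66:67:05"
if dhcp_offer["chaddr"] == client_mac:    # client checks the offer is for it
    print("Offered address:", dhcp_offer["yiaddr"])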
Lease Request
Lease Acknowledgment
Releasing
A DHCP client can relinquish its network configuration lease
by sending a DHCPRELEASE message to the DHCP server.
The lease is identified by using the client identifier, the chaddr
field, and the network address in the DHCPRELEASE message.
The DNS recursive resolver is the server that receives DNS queries from client
machines and makes additional requests in order to resolve the client query.
Root name servers at the top of the DNS hierarchy are the servers that have
lists of the top-level domain (TLD) name servers. They are the first step in
resolving a hostname to an IP address.
TLD name servers host the last portion of a hostname. For example, the TLD
server in the cisco.com example has a list for all the .com entries. There are
TLD servers for all the other domains as well (.net, .org, and so on).
The authoritative name server is the final step in the resolution process. It is
the authoritative server for that specific domain. In the case of cisco.com, there
are three authoritative servers: ns1.cisco.com, ns2.cisco.com, and
ns3.cisco.com. Whenever a public domain is registered, it is mandatory to
specify one or more authoritative name servers for that domain. These name
servers are responsible for resolving that public domain to IP addresses.
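From an application’s perspective, the entire recursive chain is hidden behind a single resolver call. For example, in Python (a quick sketch using the standard library; the output depends on your resolver):

import socket

# The resolver configured on the host (and the recursive resolver it
# points at) walks the hierarchy described above:
# root -> .com TLD -> cisco.com authoritative name servers.
addresses = socket.getaddrinfo("cisco.com", 443, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in addresses:
    print(family.name, sockaddr[0])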
NETWORK ADDRESS TRANSLATION
Static NAT: With static NAT, there is a permanent one-to-one mapping
between an internal private IP address and an external public IP address.
Dynamic NAT: With dynamic NAT, the internal subnets that
are permitted to have outside access are mapped to a pool of public IP
addresses. The pool of public IP addresses is generally smaller than the sum of
all the internal subnets. This is usually the case in enterprise networks, where
public IPv4 addresses are scarce and expensive, and a one-to-one mapping of
internal to external subnets is not feasible. Reusing the pool of public IP
addresses is possible as not all internal clients will access the outside world at
the same time.
Port Address Translation (PAT or overloading): PAT takes the dynamic NAT
concept to the extreme and translates all the internal clients to one public IP
address, using TCP and UDP ports to distinguish the data traffic generated by
different clients. This concept is explained in more detail later in this chapter.
With PAT, the same type of table that keeps track of private to
public translations and vice versa is created on the border
device—but in this case TCP and UDP ports are also taken
into account. For example, if the client generates web traffic
and is trying to reach a web server on the Internet, the
randomly generated TCP source port and the destination TCP
port 443 for HTTPS are also included in the network
translation table. In this way, a large number of clients—up to
theoretically 65,535 for TCP and 65,535 for UDP traffic—can
be translated to one public IP address. Figure 18-4 illustrates
PAT, which is also called overloading because many internal
private IP addresses are translated to only one public IP
address.
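The PAT translation table described above can be sketched in a few lines of Python. This is a toy model (all addresses and ports are invented), not how a border device actually implements NAT:

import itertools

# Toy PAT table on the border device: one public IP, many inside clients.
PUBLIC_IP = "203.0.113.10"
next_port = itertools.count(49152)        # translated source ports
pat_table = {}                            # public port -> (private ip, private port)

def translate_outbound(private_ip: str, private_port: int) -> tuple:
    public_port = next(next_port)
    pat_table[public_port] = (private_ip, private_port)
    return (PUBLIC_IP, public_port)

def translate_inbound(public_port: int) -> tuple:
    return pat_table[public_port]         # restore the inside client address

src = translate_outbound("10.0.0.15", 51000)   # client -> web server :443
print(src)                                      # ('203.0.113.10', 49152)
print(translate_inbound(src[1]))                # ('10.0.0.15', 51000)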
Reduced costs for renting public IPv4 addresses: With dynamic NAT and
especially with PAT, a large number of private internal endpoints can be
hidden behind a much smaller number of public IP addresses. Since public
IPv4 addresses have become a rare commodity, there is a price for each public
IPv4 address used. Large cost savings are possible by using a smaller number
of public addresses.
Conserving public IPv4 address space: The Internet would not have
witnessed the exponential growth of the past few decades without NAT.
Extensively reusing RFC 1918 private IP addresses for internal networks
helped slow the depletion of IPv4 address space.
It is easily extensible.
An SNMP-managed network consists of three main components:
Managed devices
SNMP agent
SNMP manager
Managed devices are the devices that are being monitored and
managed through SNMP. They implement an SNMP interface
through which the SNMP manager monitors and controls the
device. The SNMP agent is the software component that runs
on the managed device and translates between the local
management information on the device and the SNMP version
of that information. The SNMP manager, also called the
Network Management Station (NMS), is the application that
monitors and controls the managed devices through the SNMP
agent. The SNMP manager offers a monitoring and
management interface to network and system administrators.
The SNMP components and their interactions are illustrated in
Figure 18-5.
The SNMP agent listens on UDP port 161 for requests from
the SNMP manager. SNMP also supports notifications, which
are SNMP messages that are generated on the managed device
when significant events take place. Through notifications, the
SNMP agent notifies the SNMP manager about these critical
events. The NMS listens for these notifications on UDP port
162. SNMP data structures that facilitate the exchange of
information between the SNMP agent and the NMS are
organized as a list of data objects called a Management
Information Base (MIB). A MIB can be thought of as a map of
all components of a device that are being managed by SNMP.
GetRequest: This is the type of message used by the NMS to request the
value of a variable or list of variables. The agent returns the requested
information in a Response message.
Response: The SNMP agent generates this message, which contains the
information the SNMP manager requested.
Trap: The SNMP agent generates this notification message to signal to the
SNMP manager when critical events take place on the managed device.
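For illustration, the GetRequest/Response exchange can be driven from Python with the third-party pysnmp library. This sketch assumes pysnmp is installed and a reachable agent at a hypothetical address with the community string public; the exact API may differ between pysnmp releases:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Send a GetRequest for sysDescr to UDP port 161 and print the Response.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public"),                       # SNMPv2c community string
    UdpTransportTarget(("192.0.2.1", 161)),        # hypothetical managed device
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")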
TROUBLESHOOTING
APPLICATION CONNECTIVITY
ISSUES
Applications are becoming more and more complicated, with
many moving components that need to communicate with
each other over the network. When application connectivity
breaks, common causes include the following:
Presence of proxy servers that are intercepting the traffic and denying
connectivity
Misconfigured PAT
Final Preparation
The first 18 chapters of this book cover the technologies,
protocols, design concepts, and considerations required to be
prepared to pass the 200-901 DevNet Associate DEVASC
exam. While those chapters supply the detailed information,
most people need more preparation than simply reading the
first 18 chapters of this book. This chapter provides a set of
tools and a study plan to help you complete your preparation
for the exam.
This short chapter has three main sections. The first section
helps you get ready to take the exam, and the second section
lists the exam preparation tools useful at this point in the study
process. The third section provides a suggested study plan you
can follow, now that you have completed all the earlier
chapters in this book.
GETTING READY
Here are some important tips to keep in mind to ensure that
you are ready for this rewarding exam:
Build and use a study tracker: Consider using the exam objectives shown in
this book to build a study-tracker spreadsheet and record your progress on
each topic.
Think about your time budget for questions on the exam: When you do the
math, you will see that, on average, you have one minute per question. While
this does not sound like a lot of time, keep in mind that many of the questions
will be very straightforward, and you will take 15 to 30 seconds on those. This
leaves you extra time for other questions on the exam.
Watch the clock: Check in on the time remaining periodically as you are
taking the exam. You might even find that you can slow down pretty
dramatically if you have built up a nice block of extra time.
Get some earplugs: The testing center might provide earplugs but get some
just in case and bring them along. There might be other test takers in the center
with you, and you do not want to be distracted by their screams. I personally
have no issue blocking out the sounds around me, so I never worry about this,
but I know it is an issue for some.
Plan your travel time: Give yourself extra time to find the center and get
checked in. Be sure to arrive early. As you test more at a particular center, you
can certainly start cutting it closer time-wise.
Get rest: Most students report that getting plenty of rest the night before the
exam boosts their success. All-night cram sessions are not typically successful.
Bring in valuables but get ready to lock them up: The testing center will
take your phone, your smartwatch, your wallet, and other such items and will
provide a secure place for them.
Take notes: You will be given note-taking implements and should not be
afraid to use them. I always jot down any questions I struggle with on the
exam. I then memorize them at the end of the test by reading my notes over
and over again. I always make sure I have a pen and paper in the car, and I
write down the issues in my car just after the exam. When I get home—with a
pass or fail—I research those items!
The Pearson Test Prep practice test software comes with two
full practice exams. These practice tests are available to you
either online or as an offline Windows application. To access
the practice exams that were developed with this book, please
see the instructions in the card inserted in the sleeve in the
back of the book. This card includes a unique access code that
enables you to activate your exams in the Pearson Test Prep
software.
Step 1. Go to http://www.PearsonTestPrep.com.
Step 2. Select Pearson IT Certification as your product
group.
Step 3. Enter your email and password for your account. If
you don’t have an account on
PearsonITCertification.com or CiscoPress.com, you
If you wish to study offline, you can download and install the
Windows version of the Pearson Test Prep software. You can
find a download link for this software on the book’s
companion website, or you can just enter this link in your
browser:
http://www.pearsonitcertification.com/content/downloads/pcpt/engine.zip
Study mode
Premium Edition
Because you have purchased the print version of this title, you
can purchase the Premium Edition at a deep discount. There is
a coupon code in the book sleeve that contains a one-time-use
code and instructions for where you can purchase the Premium
Edition.
CHAPTER 3
1. D. The correct command is python3 -m venv myvenv,
where -m loads the venv module and myvenv can be
whatever you choose to name your virtual environment.
2. B, C. PyPI is a repository that holds thousands of
Python packages that you can import. To install a
package, you can use python3 -m pip install followed
by the name of the package you want. You can also
install it directly with the pip command.
3. B. PEP 8 is the style guide for Python syntax, and it
specifies four spaces for each block of code. Tabs will
work, and your editor may actually convert them
automatically for you, but it is a good practice to
follow the standard.
4. B. Comments are specified by the # or three single
quotes '''. The benefit of using the single quotes is that
you can write multiline text.
5. A, B. Lists and dictionaries are both mutable, or
changeable, data types. Integers and tuples must be
replaced and can’t be edited, which makes them
immutable.
6. B, D. You can create an empty dictionary object by
assigning the function dict() to a Python object (in this
example, a). You can insert dictionary values as well
by using braces, {}, and key:value pairs that you assign to the object.
CHAPTER 4
1. C. A function in Python uses the def keyword
followed by a function name and parentheses with an
optional argument inside.
2. D. Python has specific rules for variable names. A
variable cannot be named using reserved keywords,
such as True, and must start with a letter or an
underscore but not a number.
3. B. A docstring is text that describes the purpose and
use of a Python function, and it is located on the very
next line after the function definition. The docstring is
typically enclosed in triple quotes.
CHAPTER 5
1. C. The end-of-line is the last character of a line of text
before the text wraps to the next line. This is identified
by the newline character (\n).
CHAPTER 6
1. A. Southbound APIs send information down to
devices within the network.
2. A, B. Because asynchronous APIs do not have to wait
for replies, they reduce the time required to process
data.
3. A, D, E. SOURCE and PURGE do not exist. GET,
POST, PUT, PATCH, and DELETE are the HTTP
functions.
4. A. Both API keys and custom tokens are commonly
used within API authentication.
5. C. SOAP stands for Simple Object Access Protocol.
6. A, B, D, E. The four main components of a SOAP
message are the envelope, header, body, and fault. The
fault is an optional component.
7. A. RPCs are blocked during the waiting periods. Once
a procedure is executed and the response is sent from
the server and received on the client, the execution of
the procedure continues. This is similar to a
synchronous API.
CHAPTER 7
1. A, B. In order to make a successful call—whether it is
a GET or a POST—a client must have the URL, the
method, an optional header, and an optional body.
CHAPTER 8
1. A, B, C. A good SDK is easy to use, well documented,
integrates well with other SDKs, has a minimal
impact on hardware resources, and provides value-
added functionality.
2. A, B. Some of the advantages of using an SDK are
quicker integration and faster, more efficient development.
CHAPTER 9
CHAPTER 10
1. A, B, C. Cisco’s collaboration portfolio enables video
calling, integration of bots, and Remote Expert use cases.
2. A, B, D. Teams allows users, third-party apps, and
bots to interact with its APIs.
3. C. A JWT token is generated using the guest issuer ID
and its associated secret.
CHAPTER 11
1. B. The Investigate API provides enrichment of
security events with intelligence to SIEM or other
security visibility tools.
2. C. The Umbrella Enforcement API involves an HTTP
POST request, which internally invokes the
Investigate API to check whether the domain is safe.
3. A. The response header contains the token X-auth-
access-token, which needs to be used in all subsequent API calls.
CHAPTER 12
1. A, B, D. There are three standards-based
programmable interfaces for operating on the YANG
data models: NETCONF, RESTCONF, and gRPC.
2. B. By default, the NETCONF server on the device
runs on TCP port 830 and uses the SSH process for
transport.
3. D. Messages sent with NETCONF use remote
procedure calls (RPCs), a standard framework for
clients to send a request to a server to perform an
action and return the results.
4. C. YANG defines a set of built-in types and has a
mechanism through which additional types can be
defined. There are more than 20 base types, including
binary, enumeration, and empty. Percent is not a built-in type.
CHAPTER 13
1. D. A SaaS provider offers software for use and
maintains all aspects of it. You may be allowed to
customize parts of the software configuration, but
typically you are not allowed to change anything
regarding how the software functions.
CHAPTER 14
1. B. A vulnerability is a weakness or gap in protection
efforts. It can be exploited by threats to gain
unauthorized access to an asset.
CHAPTER 15
1. B, D. Historically, network devices were managed
through command-line interfaces (CLIs) using
protocols such as Telnet and Secure Shell (SSH).
2. C. In Chapter 8, we saw an example of a network
controller: Cisco DNA Center. Cisco DNA Center
can be used to completely configure, manage, and
monitor networks.
3. B, C. There are usually two types of approaches to
infrastructure as code: declarative and imperative.
With the declarative approach, the desired state of the
infrastructure is defined, and the tooling determines the
steps needed to reach that state.
CHAPTER 16
1. B. The transport layer, as the name suggests, is
responsible for end-to-end transport of data from the
source to the destination of the data traffic.
Connection-oriented protocols at the transport layer
establish an end-to-end connection between the sender
and the receiver, keep track of all the segments that
are being transmitted, and have a retransmission
mechanism in place.
2. C. The Internet layer in the TCP/IP reference model
corresponds in functions and characteristics to the
network layer in the OSI model.
3. B, D. There are a large number of application layer
protocols, including the Hypertext Transfer Protocol
(HTTP), which is used for transferring web pages
between web browsers and web servers, and File
Transfer Protocol (FTP), which is used for transferring
files between a client and a server.
4. D. The TCP/IP reference model transport layer PDU is
called a segment.
CHAPTER 17
1. A. Ethernet is a star topology that uses concentrators
to connect all the nodes in the network.
2. C. A WAN covers a large geographic area and uses
public networks.
3. B. With a hub, every frame shows up at every device
attached to a hub, which can be really harmful in
terms of privacy.
CHAPTER 18
1. A, D. Some of the benefits of using DHCP instead of
manual configurations are centralized management of
network parameters configuration and reduced
network endpoint configuration tasks and costs.
2. C. The DNS recursive resolver is a server that receives
DNS queries from client machines and makes
additional requests in order to resolve a client query.
3. A, B, E. The benefits of Network Address Translation
include conserving public IPv4 address space,
additional security due to hiding the addressing for
internal networks, and flexibility and cost savings
when changing ISPs.
4. A. With SNMPv3, the focus was on security; additional
features such as authentication, encryption, and
message integrity were added to the protocol.
Appendix B covers new topics if Cisco adds new content to the
exam over time. To check for a newer version of the appendix,
follow these steps:
Step 1. Browse to
www.ciscopress.com/title/9780136642961.
Step 2. Click the Updates tab.
Step 3. If there is a new Appendix B document on the page,
download the latest Appendix B document.
Note
The downloaded document has a version number.
Compare the version of the print Appendix B (Version
1.0) with the latest online version of this appendix, and
do the following:
Same version: Ignore the PDF that you downloaded from the companion
website.
Website has a later version: Ignore this Appendix B in your book and read
only the latest version that you downloaded from the companion website.
B
Bidirectional-streams Over Synchronous HTTP (BOSH) A
transport protocol that emulates the semantics of a long-lived,
bidirectional TCP connection between two entities.
C
campus-area network (CAN) A network that consists of two
or more LANs within a limited area.
D
data as a service (DaaS) A data management solution that
uses cloud services to deliver data storage, integration, and
processing.
E
edge computing An application deployment model that is
typically used when you need local computing decision-
making capabilities closer to where sensors are deployed.
Edge computing reduces latency and can significantly reduce
the amount of data that needs to be processed and transmitted.
F
File Transfer Protocol (FTP) A protocol used for transferring
files between a client and a server.
G
Git A free distributed version control system created by Linus
Torvalds.
H
hybrid cloud A cloud application deployment model that
stretches components of an application between on-premises
or private cloud resources and public cloud components.
I
integration test A type of test that tests a combination of
individual units of software as a group. APIs are often tested
with integration tests.
J
JSON JavaScript Object Notation is a lightweight data storage
format inspired by JavaScript.
L
Lean A management style that focuses on continuous
improvement and reducing wasted effort.
M
managed object (MO) An object that is part of the MIT.
N
Network Address Translation (NAT) A mechanism of
mapping private IP addresses to public IP addresses.
P
phishing A type of attack that aims to procure sensitive
information via emails or web pages.
R
registry An area where containers are stored in the container
infrastructure.
S
Secure Sockets Layer (SSL) An encryption-based Internet
security protocol.
T
test-driven development (TDD) An Agile development
method that involves writing a failing testing case first and
then writing code to make it pass. TDD is intended to focus
the developer on writing only what is needed to reduce
complexity.
U
Unified Communications Cisco equipment, software, and
services that combine multiple enterprise communications
channels, such as voice, video, messaging, voicemail, and
content sharing.
V
variable-length subnet masking (VLSM) A way to indicate
how many bits in an IP address are dedicated to the network
ID and how many bits are dedicated to the host ID.
W
Waterfall A linear and sequential process for software
development.
Y
YAML YAML Ain’t Markup Language is a data serialization
and a data storage format that is generally used to store
configuration.
SYMBOLS
**kwargs, 90–91
*args, 90–91
== operator, 70
!= operator, 70
> operator, 70
>= operator, 70
< operator, 70
<= operator, 70
''' (triple-quote) method, 66
( ) characters, Python, 68
| character, piping commands, 34
+ operator, 72, 73
# character, Python comments, 65
NUMBERS
300–401 ENCOR exam, 8–9
A
AAA policies, Cisco ACI MIT, 220
acceleration (web), reverse proxies, 445
B
bare-metal application deployments, 382–383
base 2 (binary) systems, Python, 69
base 8 (octal) systems, Python, 69
base 16 (hex) systems, Python, 69
base URL, Dashboard API (Meraki), 180, 181–182
Base64 encoding
Cisco Umbrella, 306
ERS API, 328
Unified CC (Finese) API, 277
BASH, 32–33
cat command, 34, 37
cd command, 35
cp command, 36
directories
changing, 36
creating, 36
listing, 36
navigating, 35–36
printing paths, 35
environment variables, 37–38
file management, 36–37
ls command, 36
man command, 33
C
cache constraints (REST), 161
caching
DNS resolution data, 539
reverse proxies, 445
calendar modules, 99
calendars, DevOps, 388–389
Call Manager. See Unified CM
camera positions, setting (xAPI), 292
CAN, 513–514
Captive Portal API, 178
career certification (Cisco)
CCIE, 11
CCNP
300–401 ENCOR exam, 8–9
components of, 8–9
concentration exams, 9–11
core exams, 9–11
levels of, 6–7
D
daemon (Docker), 400, 401, 404, 412
Dashboard API (Meraki)
action batches, 181
E
earplugs, exam preparation, 552
edge computing, 381–382
editing code, 55, 64–65
EIGRP, 201
F
fabric interconnects, 230
fabric policies, Cisco ACI MIT, 220
fast switching, 523, 524
FCS, 491, 492
feedback loops, DevOps, 392–393
feeds, Threat Grid, 335–337
files/folders. See also directories
BASH file management, 36–37
copying, 36
creating, 37
CSV files, 110–113
deleting, 37
FTP, 490
Git
committing files, 45, 53–54
file lifecycles, 40–41
git add command, 43–45
git clone command, 42, 46
git commit command, 45
git init command, 42–43, 46
git mv command, 44
git pull command, 47
git push command, 46–47
git remote command, 46
G
gadgets, Unified CC (Finese) API, 281
GET requests
APIC REST API, 225–227
HTTP, 149, 150, 151–152, 161, 165, 168, 170
GetBulkRequest messages (SNMP), 545
getdesc() function, 93–95
GetNextRequest messages (SNMP), 545
GetRequest messages (SNMP), 545
Git, 39, 42
branches, 47–48
adding, 48–49
conflicts, 52–53
merging, 50–53
cloning/initiating repositories, 42–43
committing files, 45, 53–54
file lifecycles, 40–41
git add command, 43–45, 52
git branch command, 48–49
git checkout command, 49
git clone command, 42, 46
git commit command, 45, 52
git diff command, 53–54
git init command, 42–43, 46
git log command, 47–48
git merge command, 50, 51, 52
H
hackers, 429
I
IaaS, 378
IANA, IPv6 addresses, 502
ID
host ID, IPv4 addresses, 497
network ID
Dashboard API (Meraki), 180, 183–187
IPv4 addresses, 497
J
JSON, 113–115, 156
Cisco AMP for Endpoints, listing vulnerabilities, 324–325
data format, 157
json modules, 102
K
Kitematic, 415–416
knowledge domains (certification)
DEVASC, 12
DevNet Professional, 13
**kwargs, 90–91
L
LAMP, containerized applications, 385–386
LAN, 492, 513
Layer 2 network diagrams, 547
Layer 2 switches, 518–519
Layer 3 network diagrams, 547
Layer 3 packet forwarding. See routing
Layer 3 switches, 519
layered system constraints (REST), 162
leaf nodes (YANG), 349–350
leaf-list nodes (YANG), 350
Lean model (SDLC), 28–29
lease acknowledgement phase (DHCP), 537
lease offer phase (DHCP), 537
lease request phase (DHCP), 537
lifecycle of software development. See SDLC
M
MAC addresses, 493–494, 495–496
Mac computers, Python installations, 61
N
name resolution, DNS, 539
namespaces, Docker, 398–399
napalm, 103
NAT, 540–541
benefits of, 542
disadvantages of, 542–543
dynamic NAT, 541
IPv4, 543
PAT, 541–542
static NAT, 541
native data models, YANG, 354–355
ncclient modules, 103
ncs-netsim, 469–470
negotiation, content negotiation, versioning, 162
NETCONF, 344
Cisco IOS XE, NETCONF sessions, 360–362
Cisco NSO, 468
Cisco NX-OS and, 363–365
CLI commands, 345–346
configuration data stores, 346–347
loopback interfaces, 365–367
notification messages, 356–357
operations, 346
purpose of, 345
O
ObjectType property (Cisco Intersight), 248
Observer software design pattern, 31–32
octal (base 8) systems, Python, 69
OID values, 545
OMP, 203
one-way hash (data integrity), 432
OOP, Python, 91–92
Open Automation, UCS Director, 241
open data models. See IETF data models
open() function, 109–110
Q
query parameter versioning, 162
R
ransomware, 320
rate limiting
Cisco DNA Center, 192
Dashboard API (Meraki), 180
REST API, 163–164
Receiver fault code, 139
recipes (Chef), 465–466
registry (Docker), 401, 405, 414–415
relay agents, DHCP, 535–536
releasing phase (DHCP), 537
remainders, Python, 69
removing, files from repositories, 43–45
Reporting API, 305
repositories
adding/removing files, 43–45
cloning/initiating, 42–43
creating, 42–43
request headers, 153–154
requests
Dashboard API (Meraki), 180
GET requests, APIC REST API, 225–227
S
SaaS, 378
Sample API, 334
sandboxes, UCS Manager, 234
Scanning API, 178–179
script modules, UCS Director, 242
scripts, XSS, 304, 424
SDK, 176–178
advantages of, 177
AXL SDK, 297–298
Cisco DNA Center, 189–190
authorization, 193–194
client data, 198–199
Integration API, 190
Intent API, 190, 191–192
T
tabs
in Atom text editor, 65
Python and, 65
tagging, Cisco Intersight, 248
Tags property (Cisco Intersight), 248
tan() method, 99
tasks, UCS Director, 240, 242
TCP, 489–490
TCP/IP network model, 488
application layer, 490
data packet routing, 489
data routing, 489
de-encapsulation, 491
encapsulation, 491
Internet layer, 489
network access layer, 488–489
PDU, 490–491
PDU. See also frames
TCP, 489–490
transport layer, 489–490
U
UCS Director, 239–240
Cisco ACI, 240
Open Automation, 241
PowerShell, 242
REST API, 242–245
script modules, 242
SDK, 241
tasks, 240, 242
templates, 240
user profiles, retrieving, 244–245
workflows, 240, 241, 245
UCS Manager, 230, 232
API, 231
authentication, 234
Cisco UCS Python SDK, 237–239
CLI, 231
compute object lists, 234–237
connectivity, 230
DME, 231
DN, 232
documentation, 233
event subscription, 233
GUI, 231
W
Walk category (DevNet Automation Exchange), 19
WAN, 515–516. See also Cisco SD-WAN
Waterfall model (SDLC), 27, 28
phases of, 27
value problem, 28
web acceleration, reverse proxies, 445
Webex (Cisco), 260
Webex Board (Cisco), 260
Webex Calling, 258–259
Webex Devices, overview of, 289–290
Webex Meetings API
architecture of, 282, 284
X
xAPI
authentication, 290–291
creating sessions, 291
current device status, 291–292
event notification webhooks, 293
session authentication, 291–293
setting device attributes, 292
categories, 290
overview of, 290
People Presence, 294
XML, 115–117, 155–156
AXL API, 294–295
Y
YAML, 117–119, 157–158, 461
YANG, 347, 348
built-in data types, 348
Cisco NSO, 468, 469
container nodes, 350–351
data models
augmenting, 355–356
components of, 352
Z
Zeep Client Library, 296–297
Study Planner
Key:
Practice Test
Reading
Review
Element | Task | Goal Date | First Date Completed | Second Date Completed (Optional) | Notes
If you wish to use the Windows desktop offline version of the application, note
that this access code can be used to register your exam in both the online and
offline versions.
Activation Code:
1. Go to www.ciscopress.com/register.
6. Under the book listing, click on the Access Bonus Content link.