IT Security


Table of Contents

Chapter 1: Understanding Security Threats...............................................................................................4


Chapter 1: Introduction..............................................................................................................................5
Malicious Software................................................................................................................................6
The CIA Triad...................................................................................................................................6
Essential Security Terms...................................................................................................................7
Malicious Software...........................................................................................................................8
Malware Continued........................................................................................................................10
Network Attacks..................................................................................................................................11
Network Attacks.............................................................................................................................11
Denial-of-Service............................................................................................................................12
Other Attacks.......................................................................................................................................13
Client-Side Attacks.........................................................................................................................13
Password Attacks............................................................................................................................14
Deceptive Attacks...........................................................................................................................14
Chapter 1 Test Questions and Answers................................................16
Chapter 2: Pelcgbybtl (Cryptology!)........................................................................................................18
Symmetric Encryption.........................................................................................................................19
Cryptography..................................................................................................................................19
Symmetric Cryptography................................................................................................................21
Symmetric Encryption Algorithms.................................................................................................23
Public Key or Asymmetric Encryption...............................................................................................25
Asymmetric Cryptography.............................................................................................................25
Asymmetric Encryption Algorithms...............................................................................................27
Hashing................................................................................................................................................28
Hashing Algorithms........................................................................................................................30
Cryptography Applications..................................................................................................................33
Public Key Infrastructure................................................................................................................33
Cryptography in Action..................................................................................................................35
Securing Network Traffic...............................................................................................................37
Cryptographic Hardware................................................................................................................39
Chapter 2 Test Questions and Answers...............................................................................................41
Hands on Lab: Create/Inspect Key Pair, Encrypt/Decrypt and Sign/Verify using OpenSSL...............44
Hands on with Hashing.......................................................................................................................47
Chapter 3: AAA Security (Not Roadside Assistance)..............................................................................50
Authentication.....................................................................................................................................51
Authentication Best Practices.........................................................................................................51
Multifactor Authentication..............................................................................................................52
Certificates......................................................................................................................................56
LDAP..............................................................................................................................................57
RADIUS...........................................................................58
Kerberos..........................................................................................................................................59
TACACS+..........................................................................62
Single Sign-On...............................................................................................................................62
Authentication Quiz Questions and Answers.................................................................................74

Authorization.......................................................................................................................................77
Authorization and Access Control Methods...................................................................................77
Access Control................................................................................................................................77
Access Control List.........................................................................................................................79
Accounting..........................................................................................................................................81
Tracking Usage and Access............................................................................................................81
Chapter 3 Questions and Answers.......................................................................................................82
Chapter 4: Securing Your Networks........................................................................................................84
Secure Network Architecture..............................................................................................................85
Network Hardening Best Practices.................................................................................................85
Network Hardware Hardening........................................................................................................87
Network Software Hardening.........................................................................................................90
Secure Network Architecture..........................................................................................................92
Wireless Security.................................................................................................................................93
WEP Encryption and Why You Shouldn’t Use it...........................................................................93
Let’s Get Rid of WEP! WPA/WPA2...............................................................................................95
Wireless Hardening.........................................................................................................................99
Quiz: Wireless Security................................................................................................................100
Network Monitoring..........................................................................................................................101
Sniffing the Network....................................................................................................................101
Wireshark and TCPDump.............................................................................................................102
Intrusion Detection/Prevention Systems.......................................................................................110
Quiz: Network Monitoring...........................................................................................................112
Chapter 4 Questions and Answers.....................................................................................................112
Introducing TCPdump (Lab).............................................................................................................115
Chapter 5: Defence in Depth..................................................................................................................120
System Hardening.............................................................................................................................121
Intro to Defence in Depth.............................................................................................................121
Disabling Unnecessary Components............................................................................................121
Host-Based Firewalls......................................................................122
Logging and Auditing...................................................................................................................124
Anti-malware Protection...............................................................................................................126
Disk Encryption............................................................................................................................128
Application Hardening......................................................................................................................131
Software Patch Management........................................................................................................131
Application Policies......................................................................................................................132
Application Hardening Quiz.........................................................................................................133
Defence in Depth Test.......................................................................................................................134
Case Study: The Security Model of Chrome OS...............................................................................135
The Security Principles of Chrome OS........................................................................................135
Chrome OS Verified Boot.............................................................................................................136
Chrome OS Data...........................................................................................................................137
Formative Quiz – Chrome OS......................................................................................................138
Chapter 6: Creating a Company Culture for Security............................................................................139
Risk in the Workplace........................................................................................................................140
Security Goals...............................................................................................................................140
Measuring and Assessing..............................................................................................................142

Privacy Policy...............................................................................................................................144
Quiz: Risk in the Workplace.........................................................................................................145
Users..................................................................................................................................................146
User Habits...................................................................................................................................146
Third-Party Security.....................................................................................................................148
Security Training..........................................................................................................................149
Users: Quiz...................................................................................................................................150
Incident Handling..............................................................................................................................151
Incident Reporting and Analysis...................................................................................................151
Incident Response and Recovery..................................................................................................153
Quiz: Incident Handling...............................................................................................................154
Chapter 6: Test and Answers........................................................................................................155

Chapter 1: Understanding Security Threats
Welcome to the IT Security course of the IT Support Professional Certificate! In the first week of this
course, we will cover the basics of security in an IT environment. We will learn how to define and
recognize security risks, vulnerabilities and threats. We'll identify the most common security attacks in
an organization and understand how security revolves around the "CIA" principle. By the end of this
module, you will know the types of malicious software, network attacks, client-side attacks, and the
essential security terms you'll see in the workplace.
Learning Objectives:
• Define and recognize security risks, vulnerabilities and threats.
• Be able to identify the most common security attacks.
• Understand how security revolves around the CIA principle.

Chapter 1: Introduction
I'm talking about security. Without it, all of the processes you've learned so far can fail, and no IT girl
or guy wants that.
Before we dive in, I'd like to reintroduce myself. We met way back in the first course when we talked
about the history of the Internet and the Internet of Things. My name is Gian Spicuzza, and I'm the
Program Manager in Android Security. I help protect Android's two-billion-plus devices by managing
and driving new security features for each Android dessert release. As early as I
can remember, I've loved technology. I've worked in IT since I was 16 years old, and I'd fill my time
reading books about new tech and building servers from old computer parts down in my parents'
basement. I was never a very good test-taker, and my grades definitely reflected that. But I didn't let it
stop me from pursuing my career. I worked as the one-person IT crew for three non-profits while I was
getting my education. It was really stressful being responsible for everything, from configuring and
administering databases to showing new employees how to access email and internal tools. Now,
looking back, this experience was invaluable, and of course, security was an essential part of my IT
work. Now, I work directly with hardware manufacturers, app developers, and engineering teams
within Google to create the most secure experiences for our users.
For many of them, their cell phone is the only connection they have to the Internet, and I feel such a
sense of fulfillment knowing that my work can have a major impact on people all over the world who
rely so heavily on their devices. To be successful at cybersecurity, sometimes you need to put yourself
in the mindset of an attacker and always be one step ahead. So, are you ready to do good by thinking
bad? Let's jump right in. In this module, you're going to be learning all about security: how people
attack it and how we defend against these attacks.
By the end of the module, you'll be able to define and recognize security risks, vulnerabilities, and
threats. You'll also be able to identify the most common security attacks. Finally, you'll understand how
security revolves around the CIA principle and what the CIA principle is. When you think of security,
what's the first thing you think of? It's probably physical security, stuff like making sure your
belongings are safe from potential thieves, locking your front doors at night, and putting your valuables
in a safe place. But in today's digital world, your money isn't just in your wallet; your cash is also
stored in online bank accounts, accessible with the right password. Some of us don't carry credit
cards at all, and those who do don't just have them in their wallets. They're stored on their favorite
websites so that they can make purchases more easily. It's not just money we care about. Most of our
entire personal world lives on mobile phones. Our text messages, photos, personal data, application
logins, and more are all kept right inside the devices we have in our pockets. Unfortunately, we live in a
world where there are people and organizations that try to steal data from companies, from
governments, and even from people like you and me. Don't let movies warp your perception of these
digital thieves. They aren't as glamorous or professional as you may think. Digital thieves don't have a
team of hackers in dark hoodies furiously typing into computer terminals all day, hoping to break into
multi-billion dollar companies. That's not to say that doesn't happen because we all know it does. But
most of the time, the average Internet attacker is someone who looks just like you and me, a regular
person who happened to stumble upon a hole in your security system and then took advantage of it. It
could have been something as simple as figuring out that you use your dog's name as your password.
When the only thing securing your bank account is the word Fido, you're in trouble. But just like we
have physical security alarms to deter potential burglars, we also have many methods to prevent our
digital security from being compromised.
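To put numbers on the "Fido" problem, compare the guesses an attacker needs to brute-force a pet's-name password against a random ten-character one. This is a rough back-of-the-envelope sketch; the wordlist size below is an assumed round number for illustration, not a measured figure:

```python
import math

# Rough guess counts for two password strategies. The wordlist size is an
# assumption for illustration; real cracking wordlists vary widely.
dictionary_words = 100_000         # pet names, dog names, common words...
random_chars = 62 ** 10            # 10 characters drawn from [a-zA-Z0-9]

print(f"Dictionary-word password: ~{dictionary_words:,} guesses "
      f"({math.log2(dictionary_words):.1f} bits)")
print(f"Random 10-char password:  ~{random_chars:.2e} guesses "
      f"({math.log2(random_chars):.1f} bits)")
```

A hundred thousand guesses takes an automated attacker almost no time at all; roughly 2^60 guesses is a very different proposition, which is why a dog's name is never a substitute for a strong password.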

By the end of this course, you'll gain a deeper understanding of computer security. You'll learn how to
prevent the most commonly used computer attacks. You'll understand the various security protocols and
mechanisms that we use on our machines in the web and on our networks. You'll also learn more about
cryptography, authentication, and access mechanisms, which are important skills for any IT support
specialist. We'll wrap up the course by giving you the necessary tools to assess the security of an
organization and decide on the best security preventative measures. Today, just about every business or
industry relies heavily on technology to conduct day-to-day business. Can you imagine a company,
large or small, operating without email, functional computers, or Internet access? Take the case
of a small company: it needs at least some technology if it wants to accept credit card payments. Recent
attacks like the WannaCry cryptoworm and large scale attacks using the Mirai botnet highlight the
scope and scale of how security affects us all. It's something we need to take seriously. Because of our
widespread dependence on technology, digital security is more important than ever before, and it's
going to continue to have a growing impact on all industries and aspects of our lives. So let's make sure
you're armed with the right tools to keep yourself and your future clients safe.

Malicious Software
The CIA Triad
Throughout this course, there'll be one key acronym to keep in mind, the CIA. No, I'm not talking
about the U.S. Central Intelligence Agency, although they do have a lot to do with national security.
When I say CIA, I'm talking about confidentiality, integrity, and availability ("Confidentiality"
signifies that data is only viewable by those authorized to view it; "Integrity" denotes that data
won't be manipulated or corrupted; and "Availability" means that services remain reachable and
available). These three key principles are the foundation for what's widely referred to as the CIA triad,
a guiding model for designing information security policies.
These three principles will help you develop security policies in the workplace and for your own
personal environments.
Let's start with confidentiality. Confidentiality means keeping things hidden. In IT, it means keeping
the data that you have hidden safely from unwanted eyes. One particular method of confidentiality that
you probably use every day is password protection. Only you, and maybe your partner, should know the
password to your online bank account. For confidentiality to work, you need to limit
access to your data: only those who absolutely need access should have it.
The I in CIA stands for integrity. Integrity means keeping our data accurate and untampered with. The
data that we send or receive should remain the same throughout its entire journey. Imagine if you
downloaded a file off the Internet, and the website you're downloading it from says the file is three
megs, but when you download it, it turns out to be about 30 megs. That's a red flag. Something
happened during the download, something potentially unsafe. An unwanted file may now be living on
your hard drive. As you'll learn in a later lesson, this happens all too often.
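One practical way to check integrity, covered in more depth when we get to hashing in Chapter 2, is to compare a file's cryptographic checksum against the one the publisher lists on the download page. A minimal sketch in Python; the file name and the "published" hash here are made up for the example:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The checksum the download page claims (hypothetical value for this demo;
# it is the well-known SHA-256 of the bytes b"hello").
published = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

# Write a tiny sample "download" so the example is self-contained.
with open("download.bin", "wb") as f:
    f.write(b"hello")

if sha256_of_file("download.bin") == published:
    print("Checksum matches: the file arrived unmodified.")
else:
    print("Checksum mismatch: the file was altered or corrupted!")
```

If even one byte of the file changes in transit, the computed hash changes completely, so a mismatch is a loud integrity warning.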

Last but not least. Let's look at the A in CIA, which stands for availability. Availability means that the
information we have is readily accessible to those people that should have it. This can mean many
things, like being prepared if your data is lost or if your system is down. Security attacks are designed
to steal all kinds of things from you: time, material things, your dignity. Some steal the time that you'll
need to spend to get services back up and running. Some security attacks will hold your system
hostage until you pay a ransom for it. Sounds scary, and it is, but that's why you're here: to learn how to
stop these types of attacks from happening. Going through this course, you'll see how every aspect of
security revolves around these three key principles: confidentiality, integrity and availability.

Essential Security Terms


Before we dive into things like bringing down digital thieves, let's get some of the terminology out of
the way. We'll be using these terms throughout the entire course. So you should know them inside and
out before we get started.
The first one is Risk, the possibility of suffering a loss in the event of an attack on the system. Let's say
that you buy a new phone. One security measure you can take to protect your device, is to set up a
screen lock using a password or pattern that you add to prevent others from accessing your info. A
screen lock is a security feature that helps prevent unwanted access by creating an action you have to
do to gain entry. If you choose not to add a screen lock to your phone, the risk that you take is that
someone could easily gain access to your phone and steal your data. Even adding something as simple
as a passcode or a screen lock can help you protect your personal or company data from getting into the
wrong hands.
Next up is the term Vulnerability: a flaw in the system that could be exploited to compromise the
system. Vulnerabilities can be holes that you may or may not be aware of. Maybe you go away for a
long vacation and lock every door and window in your house before you leave. But you forget to lock
the bathroom window. That bathroom window is now a vulnerability that burglars can use to break into
your house. Another example is when you're writing a web app and enable a debug account for testing
during development but forget to disable it before launching the app. You now have a vulnerability in
your app that an attacker can potentially discover. There's a special type of vulnerability called a 0-day
vulnerability, or zero-day for short, which is a vulnerability that is not known to the software
developer or vendor but is known to an attacker. The name refers to the amount of time the software
vendor has had to react to and fix the vulnerability: zero days.
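To make the forgotten-debug-account vulnerability concrete, here is a hypothetical and deliberately simplified login check. The `DEBUG_ACCOUNT` credentials, the `authenticate` function, and the plaintext user database are all invented for illustration; real systems compare salted password hashes, never plaintext:

```python
# Hypothetical web-app login check with a debug account left over from testing.
# All names and credentials here are invented for illustration.
DEBUG_ACCOUNT = ("debug", "letmein")  # added during development, never removed

def authenticate(username: str, password: str, user_db: dict) -> bool:
    """Return True if the credentials are accepted."""
    if (username, password) == DEBUG_ACCOUNT:  # <-- the vulnerability
        return True
    # Simplified: real systems verify salted password hashes, not plaintext.
    return user_db.get(username) == password

users = {"alice": "correct-horse-battery"}
print(authenticate("alice", "correct-horse-battery", users))  # normal login
print(authenticate("mallory", "guess", users))                # rejected
print(authenticate("debug", "letmein", users))                # the hole
```

Any attacker who discovers the debug credentials gets in regardless of what is in the user database, which is exactly why leftover test access has to be removed before launch.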
Another key term is Exploit: software that is used to take advantage of a security bug or vulnerability.
Attackers will write up exploits for vulnerabilities they find in software to cause harm to the system.
Let's say the attacker discovers a zero day vulnerability. She decides to take advantage of the
previously unknown bug and writes a zero day exploit code. That code will specifically target and take
advantage of this unknown bug to gain access and cause damage to systems. Not cool.
The next term to know is Threat: the possibility of danger that could exploit a vulnerability. Threats
are just possible attackers, sort of like burglars. Not all burglars will attempt to break into your home to
steal your most prized possessions, but they could, and so they're considered threats.

Next up, Hacker. A hacker in the security world is someone who attempts to break into or exploit a
system. Most of us associate hackers with malicious figures. But there are actually two common types
of hackers. You have black hat hackers, who try to get into systems to do something malicious. There
are also white hat hackers, who attempt to find weaknesses in a system but alert the owners of
those systems so that they can fix them before someone else does something malicious. While there are
other types of hackers, these are the two main ones and the most important for us to understand right
now.
The last term to know is Attack: an actual attempt at causing harm to a system. It's super
important to be aware of possible threats and vulnerabilities to your system so that you can better
prepare for them. The sad reality is that there will always be attacks on your system. But before you
start searching for an underground bunker to spend the rest of your days in, remember that there are
ways that you can detect and mitigate attacks and we're here to help you learn how to do just that. In
this module, we'll be talking about some of the common attacks that you will encounter at home and in
the workplace.
Throughout the course, you'll learn how to harden your systems against these attacks. Turns out, there
are hundreds of ways that your system can be attacked, but there are also hundreds of ways that you can
prevent them. We won't talk about all of them, but we will cover the major ones. So abandon the bunker
idea and prepare to dive in, because things are about to get real. Real secure.

Malicious Software
Malware is malicious software that can be used to obtain your sensitive information, or to delete
or modify files. Basically, it can be used for any and all unwanted purposes. The most common types of
malware you'll see are viruses, worms, adware, spyware, Trojans, rootkits, backdoors, and botnets. Oh my,
I know, I know, it's a long list, but we'll go into detail about each of these and even learn about some
real-life cases. But for now, let's talk about the most common forms of malware.
Viruses are the best known type of malware, and they work the same way that viruses in your body
work. When you get sick, a virus attaches itself to a healthy cell in your body, then replicates itself and
spreads to other healthy cells in your body, until bam! You're sneezing and wheezing and you're a mess.
In a computer virus, the virus attaches itself to some sort of executable code like a program. When the
program is running, it touches many files, each of which is now susceptible to being infected with the
virus. So, the virus replicates itself on these files, does the malicious work it's intended to do, and
repeats this over and over until it spreads as far as it can. Scary, right? Well, hold on tight, we're just
getting started.

Worms are similar to viruses except that instead of having to attach themselves onto something to
spread, worms can live on their own and spread through channels like the network. One case of a
famous computer worm was the ILOVEYOU or Love Bug which spread to millions of Windows
machines. The worm spread via email: someone would receive a message with a subject line of I
Love You and an attachment that was the worm disguised as a love letter text file. That file
was actually an executable that, when opened, would carry out many attacks, like copying itself to
several files and folders, launching other malicious software, replacing files, and then hiding itself after
it was done. The worm spread by stealing email addresses that were on the victim's computer and chat
clients. It then proceeded to send that email out to everyone in the address book. The Love Bug spread
across the world and caused billions of dollars in damage, not so lovely. This was just one of the many
reasons why you should never open email attachments that you do not recognize.
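Part of the Love Bug's disguise relied on file extensions: the attachment was reportedly named along the lines of `LOVE-LETTER-FOR-YOU.TXT.vbs`, and because Windows hid the final, known extension, victims saw what looked like a harmless `.TXT` file. A quick sketch of how only the last extension determines what actually runs:

```python
from pathlib import Path

# A double-extension filename in the style of the Love Bug attachment.
# The trailing .vbs (a Visual Basic script) is what Windows executes; with
# "hide known extensions" enabled, users saw only the .TXT part.
attachment = "LOVE-LETTER-FOR-YOU.TXT.vbs"

print(Path(attachment).suffix)    # the extension that counts: '.vbs'
print(Path(attachment).suffixes)  # everything after the name: ['.TXT', '.vbs']
```

The lesson generalizes: judge an attachment by what will actually execute, not by what the filename appears to be.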
Adware is one of the most visible forms of malware that you'll encounter, most of us see it every day.
Adware is just software that displays advertisements and collects data. Sometimes we legitimately
download adware. That happens when you agree to the terms of service that allows you to use free
software in exchange for showing you advertisements. Other times, it may get installed without your
consent and may do malicious things beyond just displaying advertisements.
In Greek mythology, there's a famous tale of the invasion of the city of Troy. The Greeks, who had been
trying to gain access into the walled city, finally decided to hide themselves in a giant wooden statue of
a horse under the guise of a gift. The Trojans allowed the gift inside, then in the dead of night, the
Greeks broke out of the statue and attacked the city. In computer security, we have malware that
functions like a Trojan horse, and it's named after this exact thing. A Trojan is malware that disguises
itself as one thing but does something else. Just like how the historical Trojan horse was accepted into
the city by the citizens of Troy. A computer Trojan has to be accepted by the user, meaning the program
has to be executed by the user. No one would willingly install malware on their machine; that's why
Trojans are meant to entice you to install them by disguising themselves as other software.
Spyware is a type of malware that's meant to spy on you, which could mean monitoring your
computer screen, key presses, and webcam, and then reporting or streaming all of this information to
another party. It's not good. A keylogger is a common type of spyware that's used to record every
keystroke you make. It can capture all of the messages you type, your confidential information, your
passwords, and even more.
Ransomware is a type of attack that holds your data or system hostage until you pay some sort of
ransom. Remember the availability principle we learned about in the first video? Does this attack sound
like a way to decrease the availability of our data? Bingo! That's because it is. A recent case of
ransomware was the WannaCry ransomware attack in May of 2017. The malware took advantage of a
vulnerability in older Windows systems, infecting hundreds of thousands of machines across the world.
Most notably, the attack shut down systems for the National Health Service in England, causing a
health-related crisis. The WannaCry ransomware attack devastated systems around the world. These
types of attacks are becoming more common and we need to be ready to fight them, so let's soldier on.

9
Malware Continued
Let's pick up where we left off with malware. So far, we've covered some of the major types of
malware that can be found in a system, including viruses, worms, adware, spyware, and ransomware.
What if attackers could not only do malicious things like steal our data, but could also steal our
computer's resources, like the CPU? Well, I'm sorry to tell you that this actually exists. There is malware
out there that can utilize someone else's machine to perform a task that is centrally controlled by the
attacker. These compromised machines are known as bots. A collection of one or more bots is called a
botnet. Botnets are designed to utilize the power of Internet-connected machines to perform some
distributed function. Take mining Bitcoin, for example: mining Bitcoin requires a machine to perform
some computation that takes up your machine's resources, and at the end, you may be rewarded with
some amount of Bitcoin. A popular attack has been
creating Botnets to do stuff like mine Bitcoins. So instead of having one computer run computations,
attackers can now have a thousand computers running computations and raking in more and more
Bitcoin.
A backdoor is a way to get into a system when the usual methods of access won't let you in; it's a
secret entryway for attackers. Backdoors are most commonly installed after an attacker has gained
access to your system and wants to maintain that access. Even if you discover your system has been
compromised, you may not realize that a backdoor to your system exists. If it does, you need to lock it
up before more damage can be done.
Another form of Malware that can be particularly problematic is a rootkit. A rootkit by its name is a kit
for root, meaning a collection of software or tools that an admin would use. It allows admin level
modification to an operating system. A rootkit can be hard to detect because it can hide itself from the
system using the system itself. Sneaky little sucker. The rootkit can be running lots of malicious
processes, but at the same time those processes wouldn't show up in task manager because it can hide
its own presence.
A logic bomb is a type of malware that's intentionally installed and lies dormant until a certain event or
time triggers it to run its malicious payload. There's a well-known logic bomb case from 2006, in which
an unhappy systems administrator at a bank set off a logic bomb and brought down the company's
services in an attempt to drop its stock price. The former employee was caught and charged with fraud,
then sentenced to eight years in prison. Not the most logical logic bomb.

10
Network Attacks
Network Attacks
A network attack that is simple in concept, but can cause a lot of damage is a DNS Cache Poisoning
attack. You probably remember from the bits and bytes of computer networking course, that DNS
works by getting information about IP addresses and names to make it easier for you to find a website.
A DNS Cache Poisoning attack works by tricking a DNS server into accepting a fake DNS record that
points you to a compromised DNS server, which then feeds you fake DNS addresses when you try to
access legitimate websites. Not only that, DNS Cache Poisoning can spread to other networks too. If
other DNS servers are getting their DNS information from a compromised server, they'll serve those
bad DNS entries to other hosts.
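To make the mechanics concrete, here's a toy simulation where a Python dictionary stands in for a resolver's cache; the hostnames and IP addresses are made up for illustration:

```python
# Toy model of DNS cache poisoning: a dict stands in for a resolver's
# cache. Hostnames and IP addresses here are made up.
dns_cache = {"example-bank.com": "93.184.216.34"}  # legitimate record

def resolve(name):
    """Return the cached IP address for a hostname (greatly simplified)."""
    return dns_cache.get(name)

# If an attacker can trick the resolver into caching a fake record...
dns_cache["example-bank.com"] = "203.0.113.66"  # attacker-controlled server

# ...every later lookup silently sends users to the attacker's server.
print(resolve("example-bank.com"))  # 203.0.113.66
```

Real resolvers do far more checking than this sketch, which is exactly what a poisoning attack tries to defeat; any downstream server that caches the forged record then spreads it further.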
Several years ago, there was a large scale DNS Cache Poisoning attack in Brazil. It appeared that
attackers managed to poison the DNS cache of some local ISPs, by inserting fake DNS records for
various popular websites like Google, Gmail, or Hotmail. When someone attempted to visit one of
those sites, they were served a fake DNS record and were sent to a server that the attacker controlled,
which hosted a small Java applet. The user would then be tricked into installing the applet, which was
actually a malicious banking Trojan designed to steal banking credentials. This is an example of the
real-world damage DNS Cache Poisoning attacks can cause. You can learn more about it in the next
supplementary reading.
A man-in-the-middle attack is an attack that places the attacker in the middle of two hosts that think
they're communicating directly with each other. It's clearly a name that needs some updating; men
aren't the only hackers out there. The attack will monitor the information going to and from these hosts,
and potentially modify it in transit. A common man-in-the-middle attack is a session hijacking or
cookie hijacking. Let's say you log into a website and forget to log out. Now, you've already
authenticated yourself to the website and generated a session token that grants you access to that
website. If someone was performing a session hijacking, they could steal that token and impersonate
you on the website, and no one wants that. This is another reason to think about the CIA triad of
security: you always want to make sure that the data you are sending or receiving has integrity and isn't
being tampered with. Another way a man-in-the-middle attack can be established is a rogue access
point attack. A Rogue AP is an access point that is installed on the network without the network
administrator's knowledge. Sometimes, in corporate environments, someone may plug a router into
their corporate network to create a simple wireless network. Innocent enough, right? Wrong. This can
actually be pretty dangerous, and could grant unauthorized access to an otherwise secure network.
Instead of an attacker having to gain access to a network by plugging directly into a network port, they
can just stand outside the building and hop onto this wireless network. A final man-in-the-middle
method we'll cover is called an evil twin. It's similar to the rogue AP example, but with a small but
important difference. The premise of an evil twin attack is for you to connect to a network that is
identical to yours. This identical network is our network's evil twin and is controlled by our attacker.
Once we connect to it, they will be able to monitor our traffic. I wonder if Fred Weasley ever did this to
George. Probably not, they were wizards. They could just magic their way out of problems. Must be
nice.

11
Denial-of-Service
A Denial-of-Service, or DoS attack, is an attack that tries to prevent access to a service for legitimate
users by overwhelming the network or server. Think about how you normally get on a website. Most
major websites are capable of serving millions of users. But for this example, imagine you have a
website that could only serve 10 users. If someone were performing a Denial-of-Service attack, they
would just take up all 10 of those spots, and legitimate users would be denied the service because
there's no more room for them. Now apply that to a website like Google or Facebook. DoS
attacks try to take up those resources of a service, and prevent real users from accessing it, not a pretty
picture.
The Ping of Death, or POD, is a pretty simple example of a DoS attack. It works by sending a
malformed ping to a computer, one larger than the maximum size the Internet Protocol was designed to
handle. This results in a buffer overflow, which can cause the system to crash and potentially allow the
execution of malicious code.
Another example is a ping flood, which sends tons of ping packets to a system. More specifically, it
sends ICMP echo requests, each of which the target is expected to answer with an ICMP echo reply. If a
computer can't keep up with this, then it's prone to being overwhelmed and taken down. Not cool, ping
flood, not cool.
Similar to a ping flood is a SYN flood. Remember that to make a TCP connection, a client sends a
SYN packet to the server it wants to connect to. Next, the server sends back a SYN-ACK message, then
the client responds with an ACK message. In a SYN flood, the server is bombarded with SYN packets.
The server sends back SYN-ACK packets, but the attacker never sends the final ACK messages. This
means the connections stay open and take up the server's resources, so other users will be unable to
connect to the server, which is a big problem. Since the TCP connections are left half-open, we also
refer to SYN floods as half-open attacks. Sounds messy, right?
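The half-open pile-up can be sketched with a toy model; the capacity, address strings, and return messages here are all made up for illustration:

```python
# Toy model of a SYN flood: a server with room for only a few half-open
# TCP connections. The capacity and address strings are made up.
MAX_PENDING = 5
half_open = []  # clients that sent SYN but never finished the handshake

def handle_syn(client):
    """Reply SYN-ACK if there's room in the connection table."""
    if len(half_open) >= MAX_PENDING:
        return "connection refused"  # legitimate users get turned away
    half_open.append(client)         # server waits for an ACK that never comes
    return "SYN-ACK sent"

# The attacker sends SYNs but never the final ACK, so slots never free up.
for i in range(MAX_PENDING):
    handle_syn(f"attacker-{i}")

print(handle_syn("legitimate-user"))  # connection refused
```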
It is. The DoS attacks we've learned about so far only use a single machine to carry out an attack. But
what if attackers could utilize multiple machines? In that much scarier scenario, they'd be able to take
down services in greater volumes and at even quicker rates. Even scarier, attackers can absolutely do
that. A DoS attack using multiple systems is called a distributed denial-of-service attack, or DDoS.
DDoS attacks need a large volume of systems to carry them out, and they're usually helped by botnets,
which give attackers access to large volumes of machines to perform an attack. In October of 2016, the
DNS service provider Dyn was the target of a DDoS attack. Botnets overloaded Dyn's systems with
fake DNS lookup requests along with SYN floods. Dyn handled the DNS for major websites like
Reddit, GitHub, and Twitter, so once Dyn went down, it also
took down its customers, making those services inaccessible. Don't get between people on the Reddit
threads or Twitter feeds, I know from experience, it's not pretty.

12
Other Attacks
Client-Side Attacks
We've talked a lot about security attacks that target victims directly, but they aren't the only types of
attacks that occur on the web. One day, you may find yourself in software development or software
engineering and you'll need to know about these other types of attacks in order to ensure the security of
your work. A common security exploit that can occur in software development and runs rampant on the
web is the possibility for an attacker to inject malicious code. We refer to these types of attacks as
injection attacks. So how do injection attacks work? Great question.
For simplicity's sake, we won't get into the details of the code implementation, but imagine a car. You
keep your car running by putting gas in it. Now, consider someone who wants to do something
malicious to that car. That person could inject your gas tank with a strawberry banana milkshake. While
that may sound delicious, it could also ruin your car. So how do you fight against that?
A hypothetical method to prevent this is adding a mechanism to your car that only accepts gasoline and
no other liquids. Injection attacks in websites work the exact same way, except without the
mouthwatering strawberry banana milkshakes, and without having overly complex solutions. Injection
attacks can be mitigated with good software development principles, like validating input and
sanitizing data. Is anyone else getting hungry? Milkshake break? No? Okay, we'll move on.
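Input validation, one of those principles, can be sketched in a few lines; the username rule and function name below are made up for illustration:

```python
import re

# A sketch of input validation: only accept input matching a strict
# pattern, like the gas tank that only accepts gasoline. The username
# rule here is made up for illustration.
def validate_username(value):
    if re.fullmatch(r"[A-Za-z0-9_]{1,30}", value):
        return value
    raise ValueError("invalid username")

print(validate_username("alice_42"))          # accepted as-is
# validate_username("alice'; DROP TABLE--")   # would raise ValueError
```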
Cross-site scripting, or XSS attacks, are a type of injection attack where the attacker can insert
malicious code and target the user of the service. XSS attacks are a common method to achieve a
session hijacking. It could be as simple as embedding a malicious script in a website that the user
unknowingly executes in their browser. The script could then do malicious things like steal a victim's
cookies and use them to log in to a website as that victim. Mm, cookies.
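One common XSS mitigation, a form of the sanitizing mentioned above, is escaping untrusted output; here's a minimal sketch using Python's built-in html module, with a made-up attacker comment string:

```python
import html

# A sketch of output escaping: untrusted text is escaped before being
# placed into a page, so a <script> tag becomes harmless text instead of
# running in the victim's browser. The comment string is made up.
user_comment = "<script>steal(document.cookie)</script>"
safe = html.escape(user_comment)
print(safe)  # &lt;script&gt;steal(document.cookie)&lt;/script&gt;
```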
Another type of injection attack is a SQL, or S-Q-L, injection attack. Unlike an XSS that targets a
user, a SQL injection attack targets the entire website if the website is using a SQL database.
Attackers can potentially run SQL commands that allow them to delete website data, copy it, and run
other malicious commands. Now that that's out of the way, it's snack time.
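Here's a minimal sketch of the difference between string-built SQL and a parameterized query, using Python's built-in sqlite3 module with a made-up in-memory table:

```python
import sqlite3

# A sketch contrasting string-built SQL with a parameterized query, using
# an in-memory database; the table and data are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "' OR '1'='1"

# Vulnerable: the attacker's text becomes part of the SQL itself.
query = "SELECT name FROM users WHERE password = '" + attacker_input + "'"
print(conn.execute(query).fetchall())  # [('alice',)] -- no password needed!

# Safer: a parameterized query treats the input as plain data.
rows = conn.execute(
    "SELECT name FROM users WHERE password = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- the injection is just a weird password string now
```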

13
Password Attacks
There's no getting around it: passwords are the most common safeguards we have to prevent
unauthorized account access. Unfortunately, our passwords may not be as secure or strong as they
should be. A common attack that occurs to gain access to an account is a password attack. Password
attacks utilize software like password crackers that try and guess your password. And they work
extremely well, so don't try to reuse that fido password. It didn't secure your bank account and it's not
going to work here. Okay, moving on.
A common password attack is a brute force attack, which just continuously tries different
combinations of characters and letters until it gets access. Since this attack requires testing a lot of
combinations of passwords, it usually takes a while to do this. Have you ever seen a CAPTCHA when
logging into a website? CAPTCHAs are used to distinguish a real human from a machine. They ask
things like, are you human, or are you a robot, or are you a dancer? In a password attack, if you didn't
have a CAPTCHA available, an automated system could just keep trying to log into your account until
it found the right password combination. A CAPTCHA prevents these automated attacks from executing.
Another type of password attack is a dictionary attack. A dictionary attack doesn't test out brute force
combinations like abc1 or capital ABC1. Instead, it tries out words that are commonly used in
passwords, like password, monkey, or football. The best way to prevent a password attack is to utilize
strong passwords. Don't include real words you would find in a dictionary, and make sure to use a mix
of capitals, numbers, and symbols. Without any fail-safes like CAPTCHAs or other account protections,
it would take a typical password cracker application about one minute to crack a password like
sandwich, but substantially longer to crack something like s@nDwh1ch. See how that's essentially the
same word but way harder to crack?
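As a rough sketch of why that matters, we can estimate the brute-force search space; the guess rate of one billion guesses per second is purely an assumption for illustration, since real cracking speeds vary enormously:

```python
# Back-of-the-envelope math for brute-force difficulty. The guess rate is
# an assumption for illustration only.
GUESSES_PER_SECOND = 1_000_000_000

def seconds_to_search(alphabet_size, length):
    """Worst-case time to try every password of the given shape."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# "sandwich": 8 characters, all lowercase letters (26 choices each)
print(seconds_to_search(26, 8) / 60)          # a few minutes

# "s@nDwh1ch": 9 characters from upper+lower+digits+symbols (~94 choices)
print(seconds_to_search(94, 9) / 31_557_600)  # roughly 18 years
```

The jump comes from both the longer length and the bigger character set: the search space grows exponentially in the password length.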

Deceptive Attacks
Get ready because we're about to dive into one of the least technical but most disturbing attacks that
can be done, social engineering. Social engineering is an attack method that relies heavily on
interactions with humans instead of computers. You can harden your defenses as much as you want.
You can spend millions of dollars on state-of-the-art security infrastructure, but if Susan the systems
administrator has all the access to your system and gets tricked into handing over her credentials,
there's nothing you can do to stop it. As we've learned from the greatest sci-fi movies, humans will
always be the weakest link in life, and in your security system. Social engineering is a kind of con
game where attackers use deceptive techniques to gain access to personal information. They then try to
have a user execute something, and basically scam a victim into doing that thing. A popular type of
social engineering attack is a phishing attack. Phishing usually occurs when a malicious email is sent
to a victim disguised as something legitimate. One common phishing attack is an email, saying your
bank account has been compromised. And then, gives you a link to click on to reset your password.
When you go to the link, it looks like your bank's website but it's actually a fake website. So you're
tricked into entering your current password and credentials in order to reset your current password.
Another variation of phishing is spear phishing. Both phishing schemes have the same end goals, but
spear phishing specifically targets an individual or group. The fake emails may contain personal
information, like your name or the names of friends and family, so they seem more trustworthy.

14
Another popular social engineering attack is email spoofing. Spoofing is when a source is
masquerading around as something else. Think of an email spoof. This is what happens when you
receive an email with a misleading sender address. You can send an email and have it appear to come
from anywhere you want, whether it exists or not. Imagine you open an email you thought was from
your friend Brian. Brian's real email address is in the From field, and the email says that you have to
check out this funny link. Well, you know Brian. He's pretty awesome and he always sends super funny
emails, so you click on the link. Suddenly, you have malware installed, and you're probably not feeling
so awesome about Brian right now.
Not all social engineering occurs digitally. In fact, one attack happens through actual physical contact.
This is called baiting, which is used to entice a victim to do something. For example, an attacker could
just leave a USB drive somewhere in hopes that someone will plug it into their machine to see what's
on it. Once they do, they've installed malware on their machine without even knowing it.
Another popular attack that can occur offline is called tailgating, which is essentially gaining access
into a restricted area or building by following a real employee in. In most corporate environments,
building access is restricted through the use of a keycard or some other entry method. But a tailgater
could use social engineering tactics to trick an employee into thinking that they're there for a legitimate
reason like doing maintenance on the building, or delivering packages. Once a tailgater is in, they have
physical access to your corporate assets.
Pretty scary stuff we've covered so far huh? I bet you didn't realize that there were so many ways to
compromise security. Hopefully, you've gained a better grasp on the common attacks out there, and
signs and what to look for. Now that you've been exposed to the fundamental types of security threats,
we'll dive deep into best practices for security and how to create technical implementations for secure
systems. But first up, we're going to test your knowledge with a quiz covering the different attacks
we've talked about in this module.

15
Chapter 1 Test Questions and Answers
Question 1: A network-based attack where one attacking machine overwhelms a target with traffic is a(n) _______ attack.
Denial of Service
You got it! This is a classic denial-of-service attack. Note that this is not a distributed denial-of-service attack, as the attack
traffic is coming from a single source and not distributed over many attacking hosts.

Question 2: Phishing, baiting, and tailgating are examples of ________ attacks.


Social engineering
Yep! These three attack types are designed to trick or deceive people into trusting an attacker. Phishing accomplishes this
via email, baiting uses physical props like USB drives, and tailgating happens when the attacker follows you into a
restricted area.

Question 3: Which of the following are examples of injection attacks? Check all that apply.
XSS attack
Correct! An XSS attack is when an attacker injects a malicious script into a web page, tricking the victim into running the
script. A SQL injection attack is when an attacker is able to inject valid SQL commands into a text input field
SQL injection attack
Correct! An XSS attack is when an attacker injects a malicious script into a web page, tricking the victim into running the
script. A SQL injection attack is when an attacker is able to inject valid SQL commands into a text input field

Question 4: Botnets are designed to steal _____ from the victim.


Computing resources
That's right! Botnets are designed to infect machines and put them under the control of the botnet controller. They're used to
perform computational or network tasks to steal computing resources, like CPU time, memory, and network bandwidth,
from the victim.

Question 5: When cleaning up a system after a compromise, you should look closely for any ______ that may have been
installed by the attacker.
Backdoors
Wohoo! Attackers commonly install backdoors in systems that they compromise to maintain access to the system even after
the vulnerability they exploited originally gets patched.

Question 6: An attacker could redirect your browser to a fake website login page using what kind of attack?
DNS cache poisoning attack
Great job! A DNS cache poisoning attack would allow an attacker to redirect your requests for websites to a server they
control.

16
Question 7: An attack that would allow someone to intercept your data as it's being sent or received is called a(n)
_________ attack.
Man-in-the-middle
Correct! A man-in-the-middle attack allows the attacker to monitor and potentially redirect your traffic.

Question 8: A(n) _____ attack is meant to prevent legitimate traffic from reaching a service.
Denial of Service
Yes! A DoS, or denial-of-service, attack is meant to prevent legitimate traffic from reaching a service.

Question 9: A SYN flood occurs when the attacker overwhelms a server with ______.
SYN packets
Nice work! A SYN flood attack happens when the attacker floods the victim with SYN packets and never completes the
TCP three-way handshake.

Question 10: What makes a DDoS attack different from a DoS attack? Check all that apply.
A DoS attack has attack traffic coming from one source.
That's right! The extra "D" in DDoS stands for "Distributed." This means the attack traffic is distributed among a larger
number of attacking machines.
A DDoS attack has attack traffic coming from many different sources.
That's right! The extra "D" in DDoS stands for "Distributed." This means the attack traffic is distributed among a larger
number of attacking machines.

Question 11: The best defense against injection attacks is to ______.


Use input validation
You nailed it! Input validation will prevent an attacker from injecting commands using text input fields.

Question 12: The best defense against password attacks is using strong _______.
Passwords
Great job! Strong passwords will make password attacks too time-consuming to be viable for an attacker.

Question 13:The practice of tricking someone into providing information they shouldn't is called ________.
Social engineering
Yes! Social engineering is the practice of manipulating people to get them to do things or divulge information that they
shouldn't.

17
Chapter 2: Pelcgbybtl (Cryptology!)
In the second week of this course, we'll learn about cryptology. We'll explore different types of
encryption practices and how they work. We'll show you the most common algorithms used in
cryptography and how they've evolved over time. By the end of this module, you'll understand how
symmetric encryption, asymmetric encryption, and hashing work; you'll also know how to choose the
most appropriate cryptographic method for a scenario you may see in the workplace.

Learning Objectives
• Understand how symmetric encryption, asymmetric encryption, and hashing work.
• Describe the most common algorithms of cryptography.
• Choose the most appropriate cryptographic method given a scenario.

18
Symmetric Encryption
Cryptography
When you were little, did you and your siblings ever communicate in a secret language around your
parents? It didn't really matter what you were talking about, as long as your parents didn't know what it
was. That was the fun part, right? It may have seemed like a fun game when you were younger but for
as long as humans have been around, we've created ways to keep messages secret from others.
In this lesson, we'll cover how this plays out through symmetric encryption, asymmetric encryption,
and hashing. We'll also go over how to describe the most common algorithms in cryptography and learn
how to choose the most appropriate cryptographic method in any given scenario. But before we dive
into the nitty-gritty details of cryptography, the various types that exist in our applications, let's go over
some terminology and general principles that will help you understand the details later.
The topic of cryptography, or hiding messages from potential enemies, has been around for thousands
of years. It's evolved tremendously with the advent of modern technology, computers and
telecommunications. Encryption is the act of taking a message, called plaintext, and applying an
operation to it, called a cipher, so that you receive a garbled, unreadable message as the output, called
ciphertext. The reverse process, taking the garbled output and transforming it back into the readable
plaintext, is called decryption. For example, let's look at a simple cipher, where we substitute e for o
and o for y. We'll take the plaintext Hello World and feed it into our basic cipher. What do you think the
resulting ciphertext will be? Hopefully, you've got Holly Wyrld. It's pretty easy to decipher the
ciphertext since this is a very basic example. There are much more complex and secure ciphers or
algorithms that we'll cover later in the section. A cipher is actually made up of two components, the
encryption algorithm and the key. The encryption algorithm is the underlying logic or process that's
used to convert the plaintext into ciphertext.
These algorithms are usually very complex mathematical operations. But there are also some very
basic algorithms that we can take a closer look at that don't necessarily require a PhD in math to
understand. The other crucial component of a cipher is the key, which introduces something unique
into your cipher. Without the key, anyone using the same algorithm would be able to decode your
message, and you wouldn't actually have any secrecy. So to recap: first you pick an encryption
algorithm you'd like to use to encode your message, then choose a key. Now you have a cipher, which
you can run your plaintext message through and get an encrypted ciphertext out, ready to be sent out
into the world, safe and secure from prying eyes.
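That recipe can be sketched in a few lines of Python, using the e-for-o, o-for-y substitution from the Hello World example; here the translation table built by str.maketrans plays the role of the key:

```python
# The toy substitution cipher from earlier: replace e with o and o with y.
# The translation table built by str.maketrans acts as our key.
key = str.maketrans("eo", "oy")
plaintext = "Hello World"

ciphertext = plaintext.translate(key)
print(ciphertext)  # Holly Wyrld

# Decryption applies the reverse mapping.
reverse_key = str.maketrans("oy", "eo")
print(ciphertext.translate(reverse_key))  # Hello World
```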
Doesn't this make you feel like an international person of mystery? Just wait, given that the underlying
purpose of cryptography is to protect your secrets from being read by unauthorized parties, it would
make sense that at least some of the components of a cipher would need to be kept secret too, right?
You could make the argument that by keeping the algorithm secret, your messages are secured from a
snooping third party, and technically you wouldn't be wrong. This general concept is referred to as
security through obscurity, which basically means that if no one knows what algorithm or general
security practice we're using, then we're safe from attackers. Think of hiding your house key under
your doormat: as long as the burglar doesn't know that you hide the spare key under the mat, you're safe. But

19
once that information is discovered, all security goes out the window along with your valuables. So
clearly, security through obscurity isn't something that you should rely on for securing communication
or systems, or for your house for that matter.
This overall concept of cryptography is referred to as Kerckhoffs's principle. This principle states that a
cryptosystem, or a collection of algorithms for key generation and encryption and decryption
operations that comprise a cryptographic service should remain secure, even if everything about the
system is known except for the key. What this means is that even if your enemy knows the exact
encryption algorithm you use to secure your data, they're still unable to recover the plaintext from an
intercepted ciphertext. You may also hear this principle referred to as Shannon's maxim or the enemy
knows the system. The implications are the same. The system should remain secure, even if your
adversary knows exactly what kind of encryption systems you're employing, as long as your keys
remain secure.
We already defined encryption, but the overarching discipline that covers the practice of coding and
hiding messages from third parties is called cryptography. The study of this practice is referred to as
cryptology. The opposite of this, looking for hidden messages or trying to decipher coded messages, is
referred to as cryptanalysis. These two fields have co-evolved throughout history, with new ciphers
and cryptosystems being developed as previous ones were broken or found to be vulnerable.

One of the earliest recorded descriptions of cryptanalysis is from a ninth-century Arabian
mathematician, who described a method for frequency analysis to break coded messages. Frequency
analysis is the practice of studying the frequency with which letters appear in ciphertext. The premise
behind this type of analysis is that in written languages, certain letters appear more frequently than
others, and some letters are more commonly grouped together than others. For example, the most
commonly used letters in the English language are e, t, a, and o. The most commonly seen pairs of
these letters are th, er, on, and an. Some ciphers, especially classical transposition and substitution
ciphers, preserve the relative frequency of letters in the plaintext, and so are potentially vulnerable to
this type of analysis.

During World War I and World War II, cryptography and cryptanalysis played an increasingly
important role. There was a shift away from linguistics and frequency analysis and a move towards
more mathematically based analysis, due to more complex and sophisticated ciphers being developed.
A major turning point in the field of cryptanalysis came during World War II, when the Allies began to
incorporate sophisticated mathematics to aid in breaking Axis encryption schemes. This also saw the
first use of automation technology applied to cryptanalysis, in England at Bletchley Park, where the
first programmable digital computer, named Colossus, was developed to aid in this effort. While early
computers were applied to breaking cryptography, this opened the door for a huge leap forward in the
development of even more sophisticated and complex cryptosystems.

Steganography is a related practice but distinctly different from cryptography. It's the practice of
hiding information from observers, but not encoding it. Think of writing a message using invisible ink.
The message is in plaintext and no decoding is necessary to read the text, but the text is hidden from
sight: the ink is invisible and must be made visible using a mechanism known to the recipient. Modern
steganographic techniques include embedding messages, and even files, into other files like images or
videos. To a casual observer, they would just see a picture of a cute puppy, but if you feed that image
into steganography software, it would extract a message hidden within the image file. What's not so
secret is how fun it is to learn about all of this spy stuff, don't you think? Stick around, because next,
we'll talk about specific cryptographic methods and systems.
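As a quick aside before we move on, the frequency analysis described above can be sketched in a few lines of Python; the ciphertext here is a short made-up sample:

```python
from collections import Counter

# A sketch of frequency analysis: tally how often each letter appears in
# a ciphertext. The ciphertext below is a short made-up sample.
ciphertext = "URYYB JBEYQ URYYB NTNVA"
counts = Counter(c for c in ciphertext.lower() if c.isalpha())
print(counts.most_common(3))  # 'y' is the most frequent letter here
```

Against a classical substitution cipher, the most frequent ciphertext letters become candidates for frequent plaintext letters like e, t, a, or o, which is exactly the weakness that ninth-century method exploited.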

20
Symmetric Cryptography
So far, we've been talking pretty generally about cryptographic systems and focusing primarily on
encryption concepts but not decryption. It makes sense that if you're sending a protected message to
someone, you'd want your recipient to be able to decode the message and read it, and maybe even reply
with a coded message of their own. So let's check out the first broad category of encryption algorithms
and dive into more details about how it works along with some pros and cons.
When we covered Kerckhoffs's principle earlier, do you remember which component of the cipher is
crucial to keep secret? That's right. The key must be kept private to ensure that an eavesdropper
wouldn't be able to decode encrypted messages. In this scenario, we're making the assumption that the
algorithm in use is what's referred to as a symmetric-key algorithm. These types of encryption
algorithms are called symmetric because they use the same key to encrypt and decrypt messages. Let's
take a simple example of a symmetric key encryption algorithm to walk through the overall process of
encrypting and decrypting a message. A substitution cipher is an encryption mechanism that replaces
parts of your plaintext with ciphertext. Remember our hello world example from earlier? That's an
example of a substitution cipher, since we're substituting some characters with different ones. In this
case, the key would be the mapping of characters between plaintext and ciphertext. Without knowing
which letters get replaced with which, you wouldn't be able to easily decode the ciphertext and recover the plaintext.
If you have the key or the substitution table, then you can easily reverse the process and decrypt the
coded message by just performing the reverse operation. A well-known example of a substitution
cipher is the Caesar cipher, which uses a substitution alphabet: you replace characters in the alphabet
with others, usually by shifting or rotating the alphabet by a set number of places. The size of the
offset is the key. Another popular example of this is referred to as ROT-13, where the alphabet is
rotated 13 places; really, ROT-13 is just a Caesar cipher that uses a key of 13.
Let's go back to our hello world example and walk through encoding it using our ROT-13 cipher. Our
ciphertext winds up being URYYB JBEYQ. To reverse this process and go back to the plaintext, we
just perform the reverse operation by looking up the characters in the output side of the mapping table.
You might notice something about the ROT-13 mapping table, or the fact that we're offsetting the
alphabet by 13 characters: thirteen is exactly half of the alphabet. This results in the ROT-13 cipher
being an inverse of itself, which means you can recover the plaintext from ciphertext by performing the
ROT-13 operation on the ciphertext a second time.

21
If we were to choose a different key, let's say eight, can we do the same thing? Let's check. Here's the
mapping table for an offset of eight, which gives us the ciphertext of OLSSV DVYSK. If we run this
through the cipher once more, we get the following output: VSZZC KCFZR. That doesn't work to
reverse the encryption process, does it?

There are two more categories that symmetric key ciphers can be placed into. They're either block
ciphers or they're stream ciphers. This relates to how the ciphers operate on the plaintext to be
encrypted. A stream cipher, as the name implies, takes a stream of input and encrypts it one character
or one digit at a time, outputting one encrypted character or digit at a time. So, there's a one-to-one
relationship between data in and encrypted data out.
The other category of symmetric ciphers is block ciphers. The cipher takes data in, places that into a
bucket or block of data that's a fixed size, then encodes that entire block as one unit. If the data to be
encrypted isn't big enough to fill the block, the extra space will be padded to ensure the plaintext fits
into the blocks evenly.
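That padding step can be done in the style of PKCS#7, a common convention where every padding byte records how many bytes of padding were added. This is an illustrative sketch, not any particular cipher's implementation:

```python
def pad(data: bytes, block_size: int = 16) -> bytes:
    # PKCS#7-style: append N copies of the byte value N so the total
    # length is an exact multiple of the block size.
    n = block_size - (len(data) % block_size)
    return data + bytes([n] * n)

def unpad(padded: bytes) -> bytes:
    # The last byte tells us how many padding bytes to strip.
    return padded[:-padded[-1]]

plaintext = b"hello world"                 # 11 bytes, block size 16
padded = pad(plaintext)
print(len(padded))                         # 16 -- fills the block evenly
print(unpad(padded) == plaintext)          # True
```

Storing the pad length in the padding itself is what lets the receiver strip it unambiguously after decryption.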
Now, generally speaking, stream ciphers are faster and less complex to implement, but they can be less
secure than block ciphers if key generation and handling isn't done properly. If the same key is used to
encrypt data two or more times, it's possible to break the cipher and recover the plaintext. To avoid
key reuse, an initialization vector, or IV, is used. That's a bit of random data that's integrated into the
encryption key, and the resulting combined key is then used to encrypt the data. The idea is that from
one shared master key, you can generate a one-time encryption key by combining the master key with
the IV; that derived key is used only once. In order for the encrypted
message to be decoded, the IV must be sent in plaintext along with the encrypted message. A good
example of this can be seen when inspecting the 802.11 frame of a WEP encrypted wireless packet. The
IV is included in plaintext right before the encrypted data payload (Example illustrated below).
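The master-key-plus-IV idea can be illustrated with a toy key-derivation sketch. The `per_message_key` helper is made up for this example and simply hashes the two values together; real protocols use purpose-built KDFs, and WEP's much simpler IV-plus-key construction is part of why it was broken:

```python
import hashlib
import os

def per_message_key(master_key: bytes, iv: bytes) -> bytes:
    # Toy derivation: hash the master key together with the IV to get a
    # one-time key. (Illustration only -- not how WEP or any real
    # protocol derives keys.)
    return hashlib.sha256(master_key + iv).digest()

master = b"shared master key"
iv = os.urandom(16)                   # sent in plaintext with the ciphertext

key_once = per_message_key(master, iv)
# The receiver reads the plaintext IV and derives the identical key:
print(per_message_key(master, iv) == key_once)               # True
# A fresh IV yields a completely different key, avoiding key reuse:
print(per_message_key(master, os.urandom(16)) != key_once)   # True
```

The IV can travel in the clear because, without the master key, it reveals nothing about the derived encryption key.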

In the next section, we'll explore symmetric encryption in more detail, illustrating some of the more
popular algorithms, and dive into the pros and cons of using symmetric encryption.

22
Symmetric Encryption Algorithms
In the last section, we covered the basics of what exactly symmetric encryption algorithms are and gave
a basic example of the Caesar cipher, a type of substitution cipher. We couldn't possibly protect
anything of value using the cipher though, right? There must be more complex and secure symmetric
algorithms, right? Of course, there are. One of the earliest encryption standards is DES, which stands
for Data Encryption Standard. DES was designed in the 1970s by IBM, with some input from the US
National Security Agency. DES was adopted as an official FIPS, Federal Information Processing
Standard for the US. This means that DES was adopted as a federal standard for encrypting and
securing government data. DES is a symmetric block cipher that uses 64-bit key sizes and operates on
blocks 64-bits in size. Though the key size is technically 64-bits in length, 8-bits are used only for
parity checking, a simple form of error checking. This means that the real-world key length for DES is
only 56 bits.
A quick note about encryption key sizes since we haven't covered that yet. In symmetric encryption
algorithms, the same key is used to encrypt as to decrypt, everything else being the same.
The key is the unique piece that protects your data and the symmetric key must be kept secret to ensure
the confidentiality of the data being protected. The key size, defined in bits, is the total number of bits
or data that comprises the encryption key. So you can think of the key size as the upper limit for the
total possible keys for a given encryption algorithm. Key length is super important in cryptography
since it essentially defines the maximum potential strength of the system. Imagine an ideal symmetric
encryption algorithm where there are no flaws or weaknesses in the algorithm itself. In this scenario,
the only possible way for an adversary to break your encryption would be to attack the key instead of
the algorithm. One attack method is to just guess the key and see if the message decodes correctly. This
is referred to as a brute-force attack. Longer key lengths protect against this type of attack. Let's take
the DES key as an example. 64-bits long minus the 8 parity bits gives us a key length of 56-bits. This
means that there are a maximum of 2 to the 56th power, or 72 quadrillion possible keys. That seems
like a ton of keys, and back in the 1970s, it was. But as technology advanced and computers got faster
and more efficient, 56-bit keys quickly proved to be too small. What were once only theoretical attacks
on the key size became reality in 1998, when the EFF, the Electronic Frontier Foundation, decrypted a
DES-encrypted message in only 56 hours. Because of the inherent weakness of the small key size of DES,
replacement algorithms were designed and proposed. A number of new ones appeared in the 1980s and
1990s. Many kept the 64-bit block size, but used a larger key size, allowing for easier replacement of
DES. In 1997, the NIST, National Institute of Standards and Technology, wanted to replace DES
with a new algorithm, and in 2001, adopted AES, Advanced Encryption Standard, after an
international competition. AES is also the first and only public cipher that's approved for use with top
secret information by the United States National Security Agency. AES is also a symmetric block
cipher, similar to DES, which it replaced. But AES uses 128-bit blocks, twice the size of DES blocks,
and supports key lengths of 128, 192, or 256 bits. Because of the large key sizes, brute-force
attacks on AES are only theoretical right now, because the computing power required (or time required
using modern technology) exceeds anything feasible today. I want to call out that these algorithms are
the overall designs of the ciphers themselves. These designs then must be implemented in either
software or hardware before the encryption functions can be applied and put to use. An important thing
to keep in mind when considering various encryption algorithms is speed and ease of
implementation. Ideally, an algorithm shouldn't be overly difficult to implement because complicated
implementation can lead to errors and potential loss of security due to bugs introduced in
implementation.
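A quick back-of-the-envelope check of the DES key-space numbers above:

```python
# 64-bit DES key minus 8 parity bits leaves 56 bits of actual key material.
des_keys = 2 ** 56
print(f"{des_keys:,}")     # 72,057,594,037,927,936 -- about 72 quadrillion

# Every additional key bit doubles the search space, so AES-128's
# keyspace is larger than DES's by a factor of 2^72.
print(2 ** 128 // des_keys == 2 ** 72)   # True
```

This is why key length sets the ceiling on a cipher's strength: a brute-force attacker must, in the worst case, try every one of those keys.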

23
Speed is important because sometimes data will be encrypted by running the data through the cipher
multiple times. These types of cryptographic operations wind up being performed very often by
devices, so the faster they can be accomplished with the minimal impact to the system, the better. This
is why some platforms implement these cryptographic algorithms in hardware to accelerate the
processes and remove some of the burden from the CPU. For example, modern CPUs from Intel or
AMD have AES instructions built into the CPUs themselves. This allows for far greater computational
speed and efficiency when working on cryptographic workloads.
Let's talk briefly about what was once a widely used and popular algorithm but has since been proven to
be weak and is discouraged from use. RC4, or Rivest Cipher 4, is a symmetric stream cipher that
gained widespread adoption because of its simplicity and speed. RC4 supports key sizes from 40-bits to
2,048-bits. So the weaknesses of RC4 aren't due to brute-force attacks; the cipher itself has inherent
weaknesses and vulnerabilities that aren't only theoretically possible. There are lots of examples
showing RC4 being broken, a recent one being the RC4 NOMORE attack. This
attack was able to recover an authentication cookie from a TLS-encrypted connection in just 52 hours.
As this is an attack on the RC4 cipher itself, any protocol that uses this cipher is potentially vulnerable
to the attack. Even so, RC4 was used in a bunch of popular encryption protocols, like WEP for wireless
encryption, and WPA, the successor to WEP. It was also supported in SSL and TLS until 2015 when
RC4 was dropped in all versions of TLS because of inherent weaknesses. For this reason, most major
web browsers have dropped support for RC4 entirely, along with all versions of SSL, and use TLS
instead.
The preferred secure configuration is TLS 1.2 with AES GCM, a specific mode of operation for the
AES block cipher that essentially turns it into a stream cipher. GCM, or Galois/Counter Mode, works
by taking a randomized seed value, incrementing it, and encrypting the result, creating sequentially
numbered blocks of keystream. These keystream blocks are then combined with the plaintext to be
encrypted. GCM is super popular due to its security being based on AES encryption, along with its
performance, and the fact that it can be run in parallel with great efficiency. You can read more about
the RC4 NOMORE attack in the next reading (http://www.rc4nomore.com/).
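To make the counter-mode idea concrete, here's a toy sketch that encrypts an incrementing counter to build a keystream and XORs it with the data. SHA-256 stands in for AES here, the `ctr_keystream_xor` name is made up, and the authentication tag that makes GCM an authenticated mode is omitted entirely, so this is illustration only:

```python
import hashlib

def ctr_keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy counter mode: "encrypt" an incrementing counter (here by
    # hashing it with the key and nonce) to build a keystream, then
    # XOR the keystream with the data, 32 bytes at a time.
    out = bytearray()
    for offset in range(0, len(data), 32):
        counter = offset // 32
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[offset:offset + 32], block))
    return bytes(out)

key, nonce = b"a shared key", b"unique-nonce"
ct = ctr_keystream_xor(key, nonce, b"the quick brown fox jumps over the lazy dog")
# XOR-based stream ciphers decrypt by running the same operation again:
print(ctr_keystream_xor(key, nonce, ct))
```

Because each counter block is independent, real counter-mode ciphers can compute many blocks in parallel, which is a big part of GCM's performance appeal.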
So now that we have covered symmetric encryption and some examples of symmetric encryption
algorithms, what are the benefits or disadvantages of using symmetric encryption? Because of the
symmetric nature of the encryption and decryption process, it's relatively easy to implement and
maintain. That's one shared secret that you have to maintain and keep secure. Think of your Wi-Fi
password at home. There's one shared secret, your Wi-Fi password, that allows all devices to connect to
it. Can you imagine having a specific Wi-Fi password for each device of yours? That would be a
nightmare and super hard to keep track of. Symmetric algorithms are also very fast and efficient at
encrypting and decrypting large batches of data. So what are the downsides of using symmetric
encryption? While having one shared secret that both encrypts and decrypts seems convenient up front,
this can actually introduce some complications. What happens if your secret is compromised? Imagine
that your Wi-Fi password was stolen and now you have to change it. Now you have to update your Wi-
Fi password on all your devices and any devices your friends or family might bring over. What do you
have to do when a friend or family member comes to visit and they want to get on your Wi-Fi? You
need to provide them with your Wi-Fi password, or the shared secret that protects your Wi-Fi network.
This usually isn't an issue since you hopefully know the person and you trust them, and it's usually only
one or two people at a time. But what if you had a party with 50 strangers? How could you
provide the Wi-Fi password only to the people you trust without strangers overhearing? In the next
lesson, we'll explore other ways besides symmetric key algorithms to protect data and information.

24
Public Key or Asymmetric Encryption
Asymmetric Cryptography
In the previous lesson, we covered one of two major categories that encryption ciphers fall into,
symmetric key ciphers.
In this next lesson, we'll cover the second class of ciphers called asymmetric or public key ciphers.
Remember why symmetric ciphers are referred to as symmetric? It's because the same key is used to
encrypt as to decrypt. This is in contrast to Asymmetric encryption systems because as the name
implies, different keys are used to encrypt and decrypt.
So how exactly does that work? Well, let's imagine here that there are two people who would like to
communicate securely, we'll call them Suzanne and Daryll. Since they're using asymmetric encryption
in this example, the first thing they each must do is generate a private key, then using this private key, a
public key is derived. The strength of the asymmetric encryption system comes from the computational
difficulty of figuring out the corresponding private key given a public key. Once Suzanne and Daryll
have generated private and public key pairs, they exchange public keys. You might have guessed from
the names that the public key is public and can be shared with anyone, while the private key must be
kept secret. When Suzanne and Daryll have exchanged public keys, they're ready to begin exchanging
secure messages. When Suzanne wants to send Daryll an encrypted message, she uses Daryll's public
key to encrypt the message and then send the ciphertext. Daryll can then use his private key to decrypt
the message and read it, because of the relationship between private and public keys, only Daryll's
private key can decrypt messages encrypted using Daryll's public key. The same is true of Suzanne's
key pairs. So when Daryll is ready to reply to Suzanne's message, he'll use Suzanne's public key to
encode his message and Suzanne will use her private key to decrypt the message. Can you see why it's
called asymmetric or public key cryptography?
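To make this concrete, here's the exchange sketched with textbook RSA using deliberately tiny numbers. Everything here, the primes, the exponents, the message, is an illustrative toy; real keys are 2048 bits or longer:

```python
# Textbook RSA with toy parameters (utterly insecure, illustration only).
p, q = 61, 53
n = p * q               # 3233: the public modulus, shared openly
e = 17                  # public exponent -- (e, n) is Daryll's public key
d = 2753                # private exponent: e*d = 1 mod lcm(p-1, q-1); secret

def encrypt(m: int) -> int:
    # Suzanne encrypts with Daryll's PUBLIC key...
    return pow(m, e, n)

def decrypt(c: int) -> int:
    # ...and only Daryll's PRIVATE key can reverse it.
    return pow(c, d, n)

message = 65            # textbook RSA operates on numbers smaller than n
ciphertext = encrypt(message)
print(ciphertext)            # 2790
print(decrypt(ciphertext))   # 65 -- round trip succeeds
```

The system's strength comes from how hard it is to recover d from (e, n) when n is the product of two enormous primes; with these toy numbers, of course, anyone could factor 3233 by hand.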
We've just described encryption and decryption operations using an asymmetric cryptosystem, but
there's one other very useful function the system can perform, public key signatures. Let's go back to
our friends Suzanne and Daryll. Let's say Suzanne wants to send a message to Daryll and she wants to
make sure that Daryll knows the message came from her and no one else, and that the message was not
modified or tampered with. She could do this by composing the message and combining it with her
private key to generate a digital signature. She then sends this message along with the associated digital
signature to Daryll. We're assuming Suzanne and Daryll have already exchanged public keys
previously in this scenario. Daryll can now verify the message's origin and authenticity by combining
the message, the digital signature, and Suzanne's public key. If the message was actually signed using
Suzanne's private key and not someone else's, and the message wasn't modified at all, then the digital
signature should validate. If the message was modified, even by one whitespace character, the
validation will fail and Daryll shouldn't trust the message. This is an important component of the
asymmetric cryptosystem. Without message verification, anyone could use Daryll's public key and send
him an encrypted message claiming to be from Suzanne. The three concepts that an asymmetric
cryptosystem grants us are confidentiality, authenticity, and non-repudiation.
Confidentiality is granted through the encryption-decryption mechanism, since our encrypted data is
kept confidential and secret from unauthorized third parties.
Authenticity is granted by the digital signature mechanism, as the message can be authenticated or
verified that it wasn't tampered with.

25
Non-repudiation means that the author of the message isn't able to dispute the origin of the message.
In other words, this allows us to ensure that the message came from the person claiming to be the
author.
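A sign-and-verify round trip can be sketched with toy textbook RSA parameters. The `sign` and `verify` helpers are made up for this example, and reducing the hash modulo the tiny n is an illustrative shortcut; real signature schemes like RSA-PSS use proper padding:

```python
import hashlib

# Toy textbook RSA key pair (tiny, insecure, illustration only).
n, e, d = 3233, 17, 2753

def digest(message: bytes) -> int:
    # Hash the message, then squeeze it into the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Suzanne signs with her PRIVATE exponent.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Daryll applies Suzanne's PUBLIC exponent and compares digests.
    return pow(signature, e, n) == digest(message)

msg = b"Meet at noon. -Suzanne"
sig = sign(msg)
print(verify(msg, sig))       # True
tampered = b"Meet at NOON. -Suzanne"
print(verify(tampered, sig))  # almost surely False -- any change breaks it
```

Notice the key roles are flipped compared to encryption: the private key produces the signature, and anyone holding the public key can check it.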
Can you see the benefit of using an asymmetric encryption algorithm versus a symmetric one?
Asymmetric encryption allows secure communication over an untrusted channel, but with symmetric
encryption, we need some way to securely communicate the shared secret or key with the other party. If
that's the case, it seems like asymmetric encryption is better, right? Well, sort of.
While asymmetric encryption works really well in untrusted environments, it's also computationally
more expensive and complex. On the other hand, symmetric encryption algorithms are faster and more
efficient at encrypting large amounts of data. In fact, what many secure communications schemes do
is take advantage of the relative benefits of both encryption types by using both, for different purposes.
An asymmetric encryption algorithm is chosen as a key exchange mechanism or cipher. What this
means is that the symmetric encryption key, or shared secret, is transmitted securely to the other party
using asymmetric encryption to keep the shared secret secure in transit. Once the shared secret is
received, data can be sent quickly, and efficiently, and securely using a symmetric encryption cipher.
Clever?
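Here's a toy sketch of that hybrid pattern. The RSA numbers are tiny textbook values, and the made-up `xor_stream` helper, a hash-derived XOR keystream, stands in for a real symmetric cipher; both are illustration only:

```python
import hashlib
import secrets

n, e, d = 3233, 17, 2753      # recipient's toy RSA key pair (insecure)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Stretch the key into a keystream by repeated hashing, then XOR.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender: generate a one-time symmetric key and wrap it asymmetrically.
session_key = secrets.randbelow(n - 2) + 2
wrapped = pow(session_key, e, n)         # slow asymmetric step, tiny data
bulk = xor_stream(session_key.to_bytes(2, "big"), b"lots and lots of data")

# Receiver: unwrap the session key, then decrypt the bulk symmetrically.
recovered_key = pow(wrapped, d, n)
plaintext = xor_stream(recovered_key.to_bytes(2, "big"), bulk)
print(plaintext)
```

The expensive asymmetric operation only ever touches the short session key; the fast symmetric operation handles all the actual data, which is exactly the division of labor described above.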
One last topic to mention is somewhat related to asymmetric encryption and that's MACs or Message
Authentication Codes, not to be confused with media access control or MAC addresses. A MAC is a
bit of information that allows authentication of a received message, ensuring that the message came
from the alleged sender and not a third party masquerading as them. It also ensures that the message
wasn't modified in some way in order to provide data integrity. This sounds super similar to digital
signatures using public key cryptography, doesn't it? While very similar, it differs slightly since the
secret key that's used to generate the MAC is the same one that's used to verify it. In this sense, it's
similar to a symmetric encryption system, and the secret key must be agreed upon by all communicating
parties beforehand or shared in some secure way. This describes one popular and secure type of MAC
called HMAC or a Keyed-Hash Message Authentication Code. HMAC uses a cryptographic hash
function along with a secret key to generate a MAC. Any cryptographic hash function can be used, like
SHA-1 or MD5, and the strength or security of the MAC is dependent upon the underlying security of
the cryptographic hash function used. The MAC is sent alongside the message that's being checked.
The MAC is verified by the receiver by performing the same operation on the received message, then
comparing the computed MAC with the one received with the message. If the MACs are the same, then
the message is authenticated. There are also MACs based on symmetric encryption ciphers, either
block or stream like DES or AES, which are called CMACs or Cipher-Based Message Authentication
Codes. The process is similar to HMAC, but instead of using a hashing function to produce a digest, a
symmetric cipher with a shared key is used to encrypt the message, and the resulting output is used as
the MAC. A specific and popular example of a CMAC, though slightly different, is CBC-MAC, or Cipher
Block Chaining Message Authentication Codes. CBC-MAC is a mechanism for building MACs
using block ciphers. This works by taking a message and encrypting it using a block cipher operating in
CBC mode. CBC mode is an operating mode for block ciphers that incorporates the previous block's
ciphertext into the next block's plaintext. So, it builds a chain of encrypted blocks that require
the full, unmodified chain to decrypt. This chain of interdependently encrypted blocks means that any
modification to the plain text will result in a different final output at the end of the chain, ensuring
message integrity. In the next section, we'll check out some common examples of asymmetric
encryption algorithms and systems. I'll see you there.
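Python's standard library ships an HMAC implementation, so the compute-and-compare flow described above can be shown directly; the key and message here are made up for the example:

```python
import hashlib
import hmac

secret = b"shared secret key"          # agreed upon beforehand by both parties
message = b"transfer $100 to Daryll"

# Sender: compute the MAC and send it alongside the message.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver: recompute the MAC over the received message and compare.
def verify(received_msg: bytes, received_tag: str) -> bool:
    expected = hmac.new(secret, received_msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)   # timing-safe compare

print(verify(message, tag))                        # True -- authenticated
print(verify(b"transfer $999 to Daryll", tag))     # False -- tampering detected
```

Without the shared secret, an attacker can't forge a valid tag for a modified message, which is exactly the property that separates a MAC from a plain hash.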

26
Asymmetric Encryption Algorithms
So, one of the first practical asymmetric cryptography systems to be developed is RSA, named for the
initials of its three co-inventors: Ron Rivest, Adi Shamir, and Leonard Adleman. This cryptosystem
was patented in 1983 and was released to the public domain by RSA Security in the year 2000.
The RSA system specifies mechanisms for generation and distribution of keys along with encryption
and decryption operation using these keys. We won't go into the details of the math involved, since it's
pretty high-level stuff and beyond the scope of this class. But, it's important to know that the key
generation process depends on choosing two unique, random, and usually very large prime numbers.
DSA, or Digital Signature Algorithm, is another example of an asymmetric encryption system, though
it's used for signing and verifying data. It was patented in 1991 and is part of the US government's
Federal Information Processing Standard. Similar to RSA, the specification covers the key generation
process, along with the signing and verification of data using the key pairs. It's important to call out
that the security of this system depends on choosing a random seed value that's incorporated into the
signing process. If this value is leaked, or if it can be inferred because it isn't truly random,
then it's possible for an attacker to recover the private key.
This actually happened in 2010 to Sony with their PlayStation 3 game console. It turns out they weren't
ensuring this randomized value was changed for every signature. This resulted in a hacker group called
fail0verflow being able to recover the private key that Sony used to sign software for their platform.
This allowed modders to write and sign custom software that was allowed to run on the otherwise very
locked down console platform. This resulted in game piracy becoming a problem for Sony, as this
facilitated the illicit copying and distribution of games which caused significant losses in sales. I've
included links to more about this in the next reading, in case you want to dive deeper.
Earlier, we talked about how asymmetric systems are commonly used as key exchange mechanisms to
establish a shared secret that will be used with symmetric cipher. Another popular key exchange
algorithm is DH or Diffie-Hellman named for the co-inventors. Let's walk through how the DH key
exchange algorithm works. Let's assume we have two people who would like to communicate over an
unsecured channel, and let's call them Suzanne and Daryll. I've grown pretty fond of these two. First,
Suzanne and Daryll agree on a starting number, which should be random and a very large integer.
This number should be different for every session and doesn't need to be secret. Next, each person
chooses another randomized large number but this one is kept secret. Then, they combine their shared
number with their respective secret number and send the resulting mix to each other. Next, each person
combines their secret number with the combined value they received from the previous step. The result
is a new value that's the same on both sides without disclosing enough information to any potential
eavesdroppers to figure out the shared secret. This algorithm was designed solely for key exchange,
though there have been efforts to adapt it for encryption purposes. It's even been used as part of a PKI
system or Public Key Infrastructure system. We'll dive more into PKI systems later in this course.
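The steps above map directly onto modular arithmetic. Here's the exchange with deliberately tiny numbers; real implementations use primes of 2048 bits or more:

```python
p, g = 23, 5      # the public starting values: a prime modulus and generator

a_secret = 6      # Suzanne's secret number, never transmitted
b_secret = 15     # Daryll's secret number, never transmitted

A = pow(g, a_secret, p)    # Suzanne's "mix," sent openly to Daryll
B = pow(g, b_secret, p)    # Daryll's "mix," sent openly to Suzanne

# Each side combines the received mix with their own secret:
suzanne_shared = pow(B, a_secret, p)
daryll_shared = pow(A, b_secret, p)
print(suzanne_shared, daryll_shared)    # 2 2 -- identical on both sides
```

An eavesdropper sees p, g, A, and B, but recovering either secret exponent from those values (the discrete logarithm problem) is computationally infeasible at real key sizes.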

27
Elliptic curve cryptography or ECC is a public key encryption system that uses the algebraic
structure of elliptic curves over finite fields to generate secure keys. What does that even mean? Well,
traditional public key systems make use of factoring large prime numbers, whereas ECC makes use of
elliptic curves. An elliptic curve is composed of a set of coordinates that fit an equation, similar to
something like y² = x³ + ax + b.
Elliptic curves have a couple of interesting and unique properties. One is horizontal symmetry, which
means that any point on the curve can be mirrored along the x-axis and still make up the same curve.
On top of this, any non-vertical line will intersect the curve in three places at most. It's this last
property that allows elliptic curves to be used in encryption. The benefit of elliptic curve-based
encryption systems is that they're able to achieve security similar to traditional public key systems
with smaller key sizes. So, for example, a 256-bit elliptic curve key would be comparable to a
3,072-bit RSA key. This is really beneficial since it reduces the amount of data needed to be stored and
transmitted when dealing with keys. Both Diffie-Hellman and DSA have elliptic curve variants,
referred to as ECDH and ECDSA, respectively. The US NIST recommends the use of EC encryption,
and the NSA allows its use to protect up to top secret data with 384-bit EC keys. But the NSA has
expressed concern about EC encryption being potentially vulnerable to quantum computing attacks, as
quantum computing technology continues to evolve and mature. I'm going to buy Suzanne and Daryll a
drink today for all their hard work. In the meantime, we've cooked up an assignment for you that will
test your encryption and decryption skills. Take your time to decode all the details, and I'll see you all
in the next lesson.

Hashing
So far, we've talked about two forms of encryption, symmetric and asymmetric.
In this next lesson, we're going to cover a special type of function that's widely used in computing and
especially within security, hashing. No, not the breakfast kind, although those are delicious. Hashing
or a hash function is a type of function or operation that takes in an arbitrary data input and maps it to
an output of a fixed size, called a hash or a digest.

The output size is usually specified in bits of data and is often included in the hashing function name.
What this means exactly is that you feed in any amount of data into a hash function and the resulting
output will always be the same size. But the output should be unique to the input, such that two
different inputs should never yield the same output. Hash functions have a large number of applications
in computing in general, typically used to uniquely identify data. You may have heard the term hash
table before in context of software engineering. This is a type of data structure that uses hashes to
accelerate data lookups. Hashing can also be used to identify duplicate data sets in databases or
archives to speed up searching of tables or to remove duplicate data to save space.

28
Depending on the application, there are various properties that may be desired, and a variety of hashing
functions exist for various applications.
We're primarily concerned with cryptographic hash functions which are used for various applications
like authentication, message integrity, fingerprinting, data corruption detection and digital signatures.
Cryptographic hashing is distinctly different from encryption because cryptographic hash functions
should be one directional. They're similar in that you can input plain text into the hash function and get
output that's unintelligible but you can't take the hash output and recover the plain text. The ideal
cryptographic hash function should be deterministic, meaning that the same input value should always
return the same hash value.
The function should be quick to compute and be efficient. It should be infeasible to reverse the function
and recover the plaintext from the hash digest. A small change in the input should result in a large
change in the output, so that there's no correlation between a change in the input and the resulting
change in the output.
Finally, the function should not allow for hash collisions, meaning two different inputs mapping to the
same output. Cryptographic hash functions are very similar to symmetric key block ciphers in that they
operate on blocks of data. In fact, many popular hash functions are actually based on modified block
ciphers. Let's take a basic example to quickly demonstrate how a hash function works. We'll use an
imaginary hash function for demonstration purposes. Let's say we have an input string of "Hello World"
and we feed this into a hash function which generates the resulting hash of E49A00FF. Every time we
feed this string into our function, we get the same hash digest output. Now let's modify the input very
slightly so it becomes "hello world", all lower case now. While this change seems small to us, the
resulting hash output is wildly different, FF1832AE. Here is the same example but using a real hash
function, in this case md5sum.
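You can reproduce that md5sum demonstration with Python's hashlib:

```python
import hashlib

a = hashlib.md5(b"Hello World").hexdigest()
b = hashlib.md5(b"hello world").hexdigest()
print(a)    # b10a8db164e0754105b7a99be72e3fe5
print(b)    # 5eb63bbbe01eeed093cb22bb8f5acdc3

# Deterministic, fixed-size output, but even a one-character case change
# produces a completely different digest (the avalanche effect).
print(hashlib.md5(b"Hello World").hexdigest() == a)   # True -- deterministic
print(len(a) == len(b) == 32)                         # 32 hex chars = 128 bits
```

Whatever the input length, the MD5 digest is always 128 bits; the input's size places no bound on what you can feed in.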

Hopefully, the concept of hash functions makes sense to you now. In the next section, we will explore
some examples of hashing algorithms and dive into weaknesses or attacks on hash functions.

29
Hashing Algorithms
In this section, we'll cover some of the more popular hashing functions, both currently and historically.
MD5 is a popular and widely used hash function designed in the early 1990s as a cryptographic
hashing function. It operates on 512-bit blocks and generates 128-bit hash digests. While MD5 was
published in 1992, a design flaw was discovered in 1996, and cryptographers recommended using the
SHA-1 hash, a more secure alternative. But, this flaw was not deemed critical, so the hash function
continued to see widespread use and adoption. In 2004, it was discovered that MD5 is susceptible to
hash collisions, allowing for a bad actor to craft a malicious file that can generate the same MD5 digest
as another different legitimate file. Bad actors are the worst, aren't they? Shortly after this flaw was
discovered, security researchers were able to generate two different files that have matching MD5 hash
digests. In 2008, security researchers took this a step further and demonstrated the ability to create a
fake SSL certificate, that validated due to an MD5 hash collision. Due to these very serious
vulnerabilities in the hash function, it was recommended to stop using MD5 for cryptographic
applications by 2010. In 2012, this hash collision was used for nefarious purposes in the Flame
malware, which used a forged Microsoft digital certificate to sign the malware. This resulted in the
malware appearing to be legitimate software that came from Microsoft. You can learn more about
the Flame malware in the next reading. When design flaws were discovered in MD5, it was
recommended to use SHA-1 as a replacement. SHA-1 is part of the secure hash algorithm suite of
functions, designed by the NSA and published in 1995. It operates on 512-bit blocks and generates
160-bit hash digests. SHA-1 is another widely used cryptographic hashing function, used in popular
protocols like TLS/SSL, PGP, SSH, and IPsec. SHA-1 is also used in version control systems like Git,
which uses hashes to identify revisions and ensure data integrity by detecting corruption or tampering.
SHA-1 and SHA-2 were required for use in some US government cases for the protection of sensitive
information, although the US National Institute of Standards and Technology recommended stopping
the use of SHA-1 and relying on SHA-2 by 2010. Many other organizations have also recommended
replacing SHA-1 with SHA-2 or SHA-3. And major browser vendors have announced intentions to
drop support for SSL certificates that use SHA-1 in 2017. SHA-1 also has its share of weaknesses and
vulnerabilities, with security researchers trying to demonstrate realistic hash collisions. During the
2000s, a bunch of theoretical attacks were formulated and some partial collisions were demonstrated,
but full collisions using these methods require significant computing power. One such attack was
estimated to require $2.77 million in cloud computing CPU resources. Wowza. In 2015, a different
attack method was developed. It didn't demonstrate a full collision, but it was the first time one of
these attacks was demonstrated in practice, which had major implications for the future security of SHA-1. What
was only theoretically possible before, was now becoming possible with more efficient attack methods
and increases in computing performance, especially in the space of GPU accelerated computations in
cloud resources. A full collision with this attack method was estimated to be feasible using CPU and
GPU cloud computing for approximately $75,000 to $120,000, much cheaper than previous attacks. You
can read more about these attacks and collisions in the next reading. In early 2017, the first full
collision of SHA-1 was published. Using significant CPU and GPU resources, two unique PDF files
were created that result in the same SHA-1 hash. The estimated processing power required to do this
was described as the equivalent of 6,500 years of a single CPU, and 110 years of a single GPU, computing
non-stop. That's a lot of years. There's also the concept of a MIC, or message integrity check. This
shouldn't be confused with a MAC, or message authentication code, since how they work and what
they protect against is different. A MIC is essentially a hash digest of the message in question. You can
think of it as a checksum for the message, ensuring that the contents of the message weren't modified
in transit. But this is distinctly different from a MAC that we talked about earlier. It doesn't use secret
keys, which means the message isn't authenticated. There's nothing stopping an attacker from altering
the message, recomputing the checksum, and modifying the MIC attached to the message. You can
think of MICs as protecting against accidental corruption or loss, but not protecting against tampering
or malicious actions.
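The difference can be sketched with Python's standard `hashlib` and `hmac` modules; the message and key below are made-up examples:

```python
import hashlib
import hmac

message = b"Transfer $100 to Alice"

# A MIC is just a hash digest of the message -- no secret key involved.
mic = hashlib.sha256(message).hexdigest()

# An attacker who alters the message can simply recompute a valid MIC,
# so a MIC only catches accidental corruption.
tampered = b"Transfer $999 to Mallory"
forged_mic = hashlib.sha256(tampered).hexdigest()

# A MAC mixes a shared secret key into the digest, so it can't be
# recomputed without knowing that key.
key = b"shared-secret-key"
mac = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification should use a constant-time comparison.
valid = hmac.compare_digest(mac, hmac.new(key, message, hashlib.sha256).hexdigest())
print(valid)
```

The forged MIC above verifies perfectly against the tampered message, which is exactly why a MIC on its own can't protect against tampering.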
We've already alluded to attacks on hashes. Now let's learn more details, including how to defend
against these attacks. One crucial application for cryptographic hash functions is for authentication.
Think about when you log into your e-mail account. You enter your e-mail address and password. What
do you think happens in the background for the e-mail system to authenticate you? It has to verify that
the password you entered is the correct one for your account. You could just take the user supplied
password and look up the password on file for the given account and compare them. Right? If they're
the same, then the user is authenticated. Seems like a simple solution but does that seem secure to you?
In the authentication scenario, we'd have to store user passwords in plain text somewhere. That's a
terrible idea. You should never ever store sensitive information like passwords in plain text. Instead,
you should do what pretty much every authentication system does, store a hash of the password instead
of the password itself. When you log into your e-mail account the password you entered is run through
the hashing function and then the resulting hash digest is compared against the hash on file. If the
hashes match, then we know the password is correct, and you're authenticated.
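The authentication flow described above can be sketched in a few lines of Python; the password and the stored value here are hypothetical stand-ins for a real account database:

```python
import hashlib

def hash_password(password):
    # Only the digest is ever stored, never the plain-text password.
    return hashlib.sha256(password.encode()).hexdigest()

# At account creation, the database keeps just the hash.
stored_hash = hash_password("hunter2")  # made-up example password

def authenticate(supplied_password):
    # Hash whatever the user typed and compare it to the hash on file.
    return hash_password(supplied_password) == stored_hash

print(authenticate("hunter2"), authenticate("wrong-guess"))
```

A production system would also salt and iterate the hash, which is covered later in this lesson.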
Passwords shouldn't be stored in plain text because, if your systems are compromised, passwords for
other accounts are the ultimate prize for the attacker. If an attacker manages to gain access to your system
and can just copy the database of accounts and passwords, this would obviously be a bad situation. By
only storing password hashes, the worst the attacker would be able to recover would be password
hashes, which aren't really useful on their own. What if the attacker wanted to figure out what
passwords correspond to the hashes they stole? They would perform a brute force attack against the
password hash database. This is where the attacker just tries all possible input values until the resulting
hash matches the one they're trying to recover the plain text for. Once there's a match, we know the
input that generated the matching hash is the corresponding password. As you can imagine, a brute
force attack can be very computationally intensive depending on the hashing function used.
An important characteristic to call out about brute force attacks is, technically, they're impossible to
protect against completely. A successful brute force attack against even the most secure system
imaginable is a function of attacker time and resources. If an attacker has unlimited time and/or
resources, any system can be brute forced. Yikes. The best we can do to protect against these attacks is
to raise the bar. Make it sufficiently time and resource intensive so that it's not practically feasible in a
useful time-frame or with existing technology.
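To make the idea concrete, here's a minimal brute force sketch in Python against a deliberately weak three-letter password; a real attack would face much larger search spaces and slower, iterated hashes:

```python
import hashlib
from itertools import product
from string import ascii_lowercase

# Hash of an unknown, deliberately weak three-letter password.
target = hashlib.sha256(b"cat").hexdigest()

def brute_force(target_hash, length=3):
    # Try every lowercase string of the given length until one hashes
    # to the value we're trying to recover the plain text for.
    for candidate in product(ascii_lowercase, repeat=length):
        guess = "".join(candidate)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(brute_force(target))  # recovers "cat" after at most 26**3 guesses
```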
Another common method to help raise the computational bar and protect against brute force attacks is
to run the password through the hashing function multiple times, sometimes through thousands of
iterations. This would require significantly more computations for each password guess attempt. That
brings us to the topic of rainbow tables. Don't be fooled by the colorful name. These tables are used by
bad actors to help speed up the process of recovering passwords from stolen password hashes.

A rainbow table is just a pre-computed table of all possible
password values and their corresponding hashes. The idea
behind rainbow table attacks is to trade computational
power for disk space by pre-computing the hashes and
storing them in a table. An attacker can determine what the
corresponding password is for a given hash by just looking
up the hash in their rainbow table. This is unlike a brute
force attack where the hash is computed for each guess
attempt. It's possible to download rainbow tables from the
internet for popular password lists and hashing functions. This further reduces the need for
computational resources, in exchange for large amounts of storage space to keep all the password and hash
data. You may be wondering how you can protect against these pre-computed rainbow tables. That's
where salts come into play. And no, I'm not talking about table salt.
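Real rainbow tables use chains of hash and reduction functions to compress storage, but the core trade-off, precompute once and then just look up, can be sketched with a plain dictionary; the password list here is a made-up example:

```python
import hashlib

# One-time cost: precompute hashes for a list of common passwords.
common_passwords = ["password", "123456", "letmein", "qwerty"]
table = {hashlib.sha256(p.encode()).hexdigest(): p for p in common_passwords}

def crack(stolen_hash):
    # Recovery is now a single lookup instead of a brute force search.
    return table.get(stolen_hash)

stolen = hashlib.sha256(b"letmein").hexdigest()
print(crack(stolen))  # -> letmein
```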
A password salt is additional randomized data that's added into the hashing function to generate the
hash that's unique to the password and salt combination. Here's how it works.
A randomly chosen large salt is concatenated or tacked onto the end of the
password. The combination of salt and password is then run through the
hashing function to generate the hash which is then stored alongside the salt.
What this means now for an attacker is that they'd have to compute a rainbow
table for each possible salt value. If a large salt is used, the computational and
storage requirements to generate useful rainbow tables becomes almost
unfeasible. Early Unix systems used a 12-bit salt, which amounts to a total of
4,096 possible salts. So, an attacker would have to generate hashes for every
password in their database, 4,096 times over. Modern systems like Linux,
BSD, and Solaris use a 128-bit salt. That means there are two to the 128th power
possible salt values, which is over 340 undecillion. That's 340 with 36 zeros
following it. Clearly, a 128-bit salt raises the bar high enough that a rainbow table
attack wouldn't be possible in any realistic time-frame. Just another scenario when adding salt to
something makes it even better. That rounds out our lesson on hashing functions. Up next, we'll talk about
real world applications of cryptography and explain how it's used in various applications and protocols.
But first, a project that will help you get hands on with hashing.
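The salting scheme described above can be sketched with Python's standard `hashlib.pbkdf2_hmac`, which also iterates the hash to raise the computational bar; the iteration count and salt size here are illustrative choices, not recommendations:

```python
import hashlib
import os

ITERATIONS = 100_000  # illustrative; raises the cost of each guess

def hash_password(password, salt=None):
    if salt is None:
        salt = os.urandom(16)  # random 128-bit salt, as in modern systems
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # the salt is stored alongside the hash

def verify(password, salt, expected):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS) == expected

salt_a, digest_a = hash_password("hunter2")
salt_b, digest_b = hash_password("hunter2")
# Same password, different salts -> different hashes, so one precomputed
# rainbow table can't cover every account.
print(digest_a != digest_b, verify("hunter2", salt_a, digest_a))
```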

Cryptography Applications
Public Key Infrastructure
In this lesson, we're going to cover PKI, or Public Key Infrastructure. Spoiler alert, this is a critical
piece to securing communications on the Internet today. Earlier we talked about Public Key
Cryptography and how it can be used to securely transmit data over an untrusted channel and verify the
identity of a sender using digital signatures.
PKI is a system that defines the creation, storage and distribution of digital certificates. A digital
certificate is a file that proves that an entity owns a certain public key.
A certificate contains information about the public key, the
entity it belongs to and a digital signature from another party
that has verified this information. If the signature is valid and
we trust the entity that signed the certificate, then we can trust
the public key to be used to securely communicate with the
entity that owns it.
The entity that's responsible for storing, issuing, and signing certificates is referred to as CA, or
Certificate Authority. It's a crucial component of the PKI system. There's also an RA, or Registration
Authority, that's responsible for verifying the identities of any entities requesting certificates to be
signed and stored with the CA. This role is usually lumped together with the CA. A central repository is
needed to securely store and index keys, and a certificate management system of some sort makes
managing access to stored certificates and issuance of new certificates easier. There are a few different
types of certificates that have different applications or uses.
The one you're probably most familiar with is the SSL or TLS server certificate. This is a certificate that
a web server presents to a client as part of the initial secure setup of an SSL/TLS connection. Don't
worry, we'll cover SSL/TLS in more detail in a future lesson. The client, usually a web browser, will
then verify that the subject of the certificate matches the host name of the server the client is trying to
connect to. The client will also verify that the certificate is signed by a certificate authority that the
client trusts. It's possible for a certificate to be valid for multiple host names. In some cases, a wild card
certificate can be issued where the host name is replaced with an asterisk, denoting validity for all host
names within a domain. It's also possible for a server to use what's called a self-signed certificate. As you
may have guessed from the name, this certificate has been signed by the same entity that issued the
certificate. This would basically be signing your own public key using your private key. Unless you
already trusted this key, this certificate would fail to verify.
Another certificate type is an SSL/TLS client certificate. This is an optional component of SSL/TLS
connections and is less commonly seen than server certificates. As the name implies, these are
certificates that are bound to clients and are used to authenticate the client to the server, allowing
access control to a SSL/TLS service. These are different from server certificates in that the client
certificates aren't issued by a public CA. Usually the service operator would have their own internal CA
which issues and manages client certificates for their service.

There are also code signing certificates which are used for signing executable programs. This allows
users of these signed applications to verify the signatures and ensure that the application was not
tampered with. It also lets them verify that the application came from the software author and is not a
malicious twin.
We've mentioned certificate authority trust, but not really explained it. So let's take some time to go
over how it all works. PKI is very much dependent on trust relationships between entities, and building
a network or chain of trust. This chain of trust has to start somewhere and that starts with the Root
Certificate Authority. These root certificates are self signed because they are the start of the chain of
trust. So there's no higher authority that can sign on their behalf. This Root Certificate Authority can
now use the self-signed certificate and the associated private key to begin signing other public keys and
issuing certificates. It builds a sort of tree structure with the root private key at the top of the structure.
If the root CA signs a certificate and sets a field in the certificate called CA to true, this marks a
certificate as an intermediary or subordinate CA. What this means is that the entity that this certificate
was issued to can now sign other certificates, and this intermediary CA carries the same trust as the root CA. An
intermediary CA can also sign other intermediate CAs. You can see how this extension of trust from
one root CA to intermediaries can begin to build a chain.
A certificate that has no authority as a CA is referred to as an End Entity or Leaf Certificate. Similar
to a leaf on a tree, it's the end of the tree structure and can be considered the opposite of the roots. You
might be wondering how these root CAs wind up being trusted in the first place. Well, that's a very
good question. In order to bootstrap this chain of trust, you have to trust a root CA certificate, otherwise
the whole chain is untrusted. This is done by distributing root CA certificates via alternative channels.
Each major OS vendor ships a large number of trusted root CA certificates with their OS. And they
typically have their own programs to facilitate distribution of root CA certificates. Most browsers will
then utilize the OS provided store of root certificates. Let's do a deep dive into certificates beyond just
their function.
The X.509 standard is what defines the format of digital certificates. It also defines a certificate
revocation list or CRL which is a means to distribute a list of certificates that are no longer valid. The
X.509 standard was first issued in 1988 and the current modern version of the standard is version 3.
The fields defined in an X.509 certificate are: the Version, which version of the X.509 standard the
certificate adheres to.
Serial Number, a unique identifier for the certificate assigned by the CA, which allows the CA to
manage and identify individual certificates.
Certificate Signature Algorithm, this field indicates what public key algorithm is used for the public
key and what hashing algorithm is used to sign the certificate.
Issuer Name, this field contains information about the authority that signed the certificate.
Validity, this contains two subfields, Not Before and Not After, which define the date range during
which the certificate is valid.
Subject, this field contains identifying information about the entity the certificate was issued to.
Subject Public Key Info, these two subfields define the algorithm of the public key along with the
public key itself.
Certificate Signature Algorithm, a repeat of the signature algorithm field from earlier in the certificate;
these two fields must match.
Certificate Signature Value, the digital signature data itself.
There are also certificate fingerprints which aren't actually fields in the certificate itself, but are
computed by clients when validating or inspecting certificates. These are just hash digests of the whole
certificate. You can read the full X.509 standard in the next reading; it can be found at
https://www.ietf.org/rfc/rfc5280.txt.
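Since a fingerprint is just a hash digest over the whole DER-encoded certificate, computing one is a one-liner; the certificate bytes below are a truncated placeholder, not a real certificate:

```python
import hashlib

# Truncated placeholder standing in for a real DER-encoded certificate;
# in practice these bytes would come from the server during the handshake.
der_certificate = b"\x30\x82\x01\x0a placeholder certificate bytes"

# A fingerprint is not a field inside the certificate -- the client
# computes it as a hash digest over the entire certificate.
sha256_fingerprint = hashlib.sha256(der_certificate).hexdigest()
sha1_fingerprint = hashlib.sha1(der_certificate).hexdigest()

print(len(sha256_fingerprint), len(sha1_fingerprint))  # 64 and 40 hex chars
```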
An alternative to the centralized PKI model of establishing trust and binding identities is what's called
the Web of Trust. A Web of Trust is where individuals instead of certificate authorities sign other
individuals' public keys. Before an individual signs a key, they should first verify the person's identity
through an agreed upon mechanism. Usually by checking some form of identification, driver's license,
passport, etc. Once they determine the person is who they claim to be, signing their public key is
basically vouching for this person. You're saying that you trust that this public key belongs to this
individual. This process would be reciprocal, meaning both parties would sign each other's keys.
Usually people who are interested in establishing web of trust will organize what are called Key
Signing Parties, where participants perform the same verification and signing. At the end of the party,
everyone's public key should have been signed by every other participant establishing a web of trust. In
the future when one of these participants in the initial key signing party establishes trust with a new
member, the web of trust extends to include this new member and other individuals they also trust. This
allows separate webs of trust to be bridged by individuals and allows the network of trust to grow.

Cryptography in Action
In this section, we'll dive into some real world applications of the encryption concepts that we've
covered so far. In the last section, we mentioned SSL/TLS when we were talking about digital
certificates. Now that we understand how digital certificates function and the crucial roles CAs play,
let's check out how that fits into securing web traffic via HTTPS.
You've probably heard of HTTPS before, but do you know exactly what it is and how it's different from
HTTP? Very simply, HTTPS is the secure version of HTTP, the Hypertext Transfer Protocol. So how
exactly does HTTPS protect us on the Internet? HTTPS can also be called HTTP over SSL or TLS
since it's essentially encapsulating the HTTP traffic over an encrypted, secured channel utilizing SSL or
TLS. You might hear SSL and TLS used interchangeably, but SSL 3.0, the latest revision of SSL, was
deprecated in 2015, and TLS 1.2 is the current recommended revision as of this writing, with version
1.3 still in the works.
Now, it's important to call out that TLS is actually independent of HTTPS; it's a generic
protocol that permits secure communications and authentication over a network. TLS is also used to
secure other communications aside from web browsing, like VoIP calls such as Skype or Hangouts,
email, instant messaging, and even Wi-Fi network security.
TLS grants us three things. One, a secure communication line, which means data being transmitted is
protected from potential eavesdroppers.
Two, the ability to authenticate both parties communicating, though typically, only the server is
authenticated by the client.
And three, the integrity of communications, meaning there are checks to ensure that messages aren't
lost or altered in transit.

TLS essentially provides a secure channel for an
application to communicate with a service, but
there must be a mechanism to establish this
channel initially. This is what's referred to as a TLS
handshake.
The handshake process kicks off with a client
establishing a connection with a TLS enabled
service, referred to in the protocol as ClientHello.
This includes information about the client, like the
version of the TLS that the client supports, a list of
cipher suites that it supports, and maybe some
additional TLS options.
The server then responds with a ServerHello message, in which it selects the highest protocol version
in common with the client, and chooses a cipher suite from the list to use. It also transmits its digital
certificate and a final ServerHelloDone message. The client will then validate the certificate that the
server sent over to ensure that it's trusted and it's for the appropriate host name.
Assuming the certificate checks out, the client then sends a ClientKeyExchange message. This is
when the client chooses a key exchange mechanism to securely establish a shared secret with the
server, which will be used with a symmetric encryption cipher to encrypt all further communications.
The client also sends a ChangeCipherSpec message indicating that it's switching to secure
communications now that it has all the information needed to begin communicating over the secure
channel. This is followed by an encrypted Finished message which also serves to verify that the
handshake completed successfully.
The server replies with a ChangeCipherSpec and an encrypted Finished message once the shared
secret is received. Once complete, application data can begin to flow over the now-secured channel.
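From application code, the entire handshake described above is usually driven by a TLS library. Here's a hedged sketch using Python's standard `ssl` module; the hostname is just an example, and actually calling the function requires network access:

```python
import socket
import ssl

def negotiated_details(hostname, port=443):
    """Handshake with a TLS server and report what was negotiated."""
    context = ssl.create_default_context()  # trusts the OS root CA store
    with socket.create_connection((hostname, port)) as sock:
        # wrap_socket drives the whole exchange: ClientHello, ServerHello,
        # certificate validation, key exchange, ChangeCipherSpec, Finished.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version(), tls.cipher()

# A default client context both validates the server's certificate chain
# and checks that the certificate matches the requested hostname.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED, context.check_hostname)
```

Calling `negotiated_details("example.org")` would return the protocol version and the cipher suite that the ServerHello settled on.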
The session key is the shared symmetric encryption key used in TLS sessions to encrypt data being sent
back and forth. Since this key is derived from the public-private key pair, if the private key is compromised,
there's potential for an attacker to decode all previously transmitted messages that were encoded using
keys derived from this private key.
To defend against this, there's a concept of forward secrecy. This is a property of a cryptographic
system such that, even in the event that the private key is compromised, the session keys are still safe.
SSH, or Secure Shell, is a secure network protocol that uses encryption to allow access to a network
service over unsecured networks. Most commonly, you'll see SSH used for remote login to
command-line-based systems, but the protocol is super flexible and has provisions for allowing arbitrary network
ports and traffic over those ports to be tunneled over the encrypted channel. It was originally designed
as a secure replacement for the Telnet protocol and other unsecured remote login shell protocols like
rlogin or r-exec. It's very important that remote login and shell protocols use encryption. Otherwise,
these services will be transmitting usernames and passwords, along with keystrokes and terminal output
in plain text. This opens up the possibility for an eavesdropper to intercept credentials and keystrokes,
not good. SSH uses public key cryptography to authenticate the remote machine that the client is
connecting to, and has provisions to allow user authentication via client certificates, if desired. The
SSH protocol is very flexible and modular, and supports a wide variety of different key exchange
mechanisms like Diffie-Hellman, along with a variety of symmetric encryption ciphers. It also supports
a variety of authentication methods, including custom ones that you can write.

When using public key authentication, a key pair is generated by the user who wants to authenticate.
They then must distribute those public keys to all systems that they want to authenticate to using the
key pair. When authenticating, SSH will ensure that the public key being presented matches the private
key, which should never leave the user's possession.
PGP stands for Pretty Good Privacy. How's that for a creative name? Well, PGP is an encryption
application that allows authentication of data along with privacy from third parties relying upon
asymmetric encryption to achieve this. It's most commonly used for encrypted email communication,
but it's also available as a full disk encryption solution or for encrypting arbitrary files, documents, or
folders. PGP was developed by Phil Zimmerman in 1991 and it was freely available for anyone to use.
The source code was even distributed along with the software. Zimmerman was an anti-nuclear activist,
and political activism drove his development of the PGP encryption software to facilitate secure
communications for other activists. PGP took off once released and found its way around the world,
which wound up getting Zimmerman into hot water with the US federal government. At the time, US
federal export regulations classified encryption technology that used keys larger than 40 bits in length
as munitions. This meant that PGP was subject to similar restrictions as rockets, bombs, firearms, even
nuclear weapons. PGP was designed to use keys no smaller than 128-bit, so it ran up against these
export restrictions, and Zimmerman faced a federal investigation for the widespread distribution of his
cryptographic software. Zimmerman took a creative approach to challenging these restrictions by
publishing the source code in a hardcover printed book which was made available widely. The idea was
that the contents of the book should be protected by the first amendment of the US constitution. Pretty
clever, right? The investigation was eventually closed in 1996 without any charges being filed, and
Zimmerman didn't even need to go to court. You can read more about why he developed PGP in the
next reading. PGP is widely regarded as very secure, with no known mechanisms to break the
encryption via cryptographic or computational means. It's been compared to military grade encryption,
and there are numerous cases of police and government unable to recover data protected by PGP
encryption. In these cases, law enforcement tends to resort to legal measures to force the handover of
passwords or keys. Originally, PGP used the RSA algorithm, but that was eventually replaced with
DSA to avoid issues with licensing.

Securing Network Traffic


Let's talk about securing network traffic. As we've seen, encryption is used for protecting data both
from the privacy perspective and the data integrity perspective. A natural application of this technology
is to protect data in transit, but what if our application doesn't utilize encryption? Or what if we want to
provide remote access to internal resources too sensitive to expose directly to the Internet?
We use a VPN, or Virtual Private Network solution. A
VPN is a mechanism that allows you to remotely connect
a host or network to an internal private network, passing
the data over a public channel, like the Internet. You can
think of this as a sort of encrypted tunnel where all of our
remote system's network traffic would flow, transparently
channeling our packets via the tunnel through the remote
private network. A VPN can also be point-to-point, where
two gateways are connected via a VPN. Essentially
bridging two private networks through an encrypted
tunnel.

There are a bunch of VPN solutions using different approaches and protocols with differing benefits
and tradeoffs. Let's go over some of the more popular ones.
IPsec, or Internet Protocol Security, is a VPN protocol that was designed in conjunction with IPv6. It
was originally required to be standards compliant with IPv6 implementations, but was eventually
dropped as a requirement. It is optional for use with IPv6.
IPsec works by encrypting an IP packet and
encapsulating the encrypted packet inside an IPsec
packet. This encrypted packet then gets routed to
the VPN endpoint where the packet is de-
encapsulated and decrypted then sent to the final
destination.
IPsec supports two modes of operations, transport
mode and tunnel mode. When transport mode is used, only the payload of the IP packet is encrypted,
leaving the IP headers untouched. Heads up that if authentication headers are used, the header values are
hashed and verified, along with the transport and application layers. This would prevent the use of
anything that would modify these values, like NAT or PAT. In tunnel mode, the entire IP packet,
header, payload, and all, is encrypted and encapsulated inside a new IP packet with new headers.
While not a VPN solution itself, L2TP, or Layer 2 Tunneling Protocol, is typically used to support
VPNs. A common implementation of L2TP is in conjunction with IPsec when data confidentiality is
needed, since L2TP doesn't provide encryption itself. It's a simple tunneling protocol that allows
encapsulation of different protocols or traffic over a network that may not support the type of traffic
being sent. L2TP can also just segregate and manage the traffic. ISPs will use L2TP to deliver
network access to a customer's endpoint, for example. The combination of L2TP and IPsec is referred
to as L2TP/IPsec and was officially standardized in IETF RFC 3193
(https://tools.ietf.org/html/rfc3193). The establishment of an L2TP/IPsec
connection works by first negotiating an IPsec security association, which negotiates the details of the
secure connection, including key exchange, if used. This can use shared secrets, public keys, and a number
of other mechanisms. I've included a link to more info about it in the next reading.
Next, secure communication is established using Encapsulating Security Payload. It's a part of the
IPsec suite of protocols, which encapsulates IP packets, providing confidentiality, integrity, and
authentication of the packets. Once secure encapsulation has been established, negotiation and
establishment of the L2TP tunnel can proceed. L2TP packets are now encapsulated by IPsec, protecting
information about the private internal network. An important distinction to make in this setup is the
difference between the tunnel and the secure channel. The tunnel is provided by L2TP, which permits
the passing of unmodified packets from one network to another. The secure channel, on the other
hand, is provided by IPsec, which provides confidentiality, integrity, and authentication of data being
passed.

SSL/TLS is also used in some VPN implementations to secure network traffic, as opposed to individual
sessions or connections. An example of this is OpenVPN, which uses the OpenSSL library to handle
key exchange and encryption of data, along with control channels. This also enables OpenVPN to make
use of all the ciphers implemented by the OpenSSL library. Authentication methods supported are
pre-shared secrets, certificate-based, and username/password. Certificate-based authentication would be the
most secure option, but it requires more support and management overhead since every client must
have a certificate. Username and password authentication can be used in conjunction with certificate
authentication, providing additional layers of security. It should be called out that OpenVPN doesn't
implement username and password authentication directly. It uses modules to plug into authentication
systems, which we'll cover in the next module. OpenVPN can operate over either TCP or UDP,
typically over port 1194. It supports pushing network configuration options from the server to a client
and it supports two interfaces for networking. It can either rely on a Layer 3 IP tunnel or a Layer 2
Ethernet tap. The Ethernet tap is more flexible, allowing it to carry a wider range of traffic. From the
security perspective, OpenVPN supports up to 256-bit encryption through the OpenSSL library. It also
runs in user space, limiting the seriousness of potential vulnerabilities that might be present. There are a
lot of acronyms to take in, so take a minute to go over them and read more about them, and I'll see you
in the next video.

Cryptographic Hardware
Another interesting application of cryptography concepts is the Trusted Platform Module, or TPM.
This is a dedicated crypto processor, typically integrated into the hardware of a
computer. TPM offers secure generation of keys, random
number generation, remote attestation, and data binding and sealing. A TPM has
a unique secret RSA key burned into the hardware at the time of manufacture, which
allows a TPM to perform things like hardware authentication. This can detect
unauthorized hardware changes to a system.
Remote attestation is the idea of a system authenticating its software and hardware configuration to a
remote system. This enables the remote system to determine the integrity of the attesting system. This
can be done using a TPM by generating a secure hash of the system configuration, using the unique
RSA key embedded in the TPM itself.
Another use of this secret hardware backed encryption key is data binding and sealing. It involves
using the secret key to derive a unique key that's then used for encryption of data. Basically, this binds
encrypted data to the TPM and, by extension, the system the TPM is installed in, since only the keys
stored in hardware in the TPM will be able to decrypt the data. Data sealing is similar to binding, since
data is encrypted using the hardware backed encryption key. But, in order for the data to be decrypted,
the TPM must be in a specified state.
TPM is a standard with several revisions that can be implemented as a discrete hardware chip, integrated into another chip in a system, implemented in firmware, or virtualized in a hypervisor. The most secure implementation is the discrete chip, since these chip packages also incorporate physical tamper resistance to prevent physical attacks on the chip.

Mobile devices have something similar referred to as a secure element. Similar to a TPM, it's a tamper
resistant chip often embedded in the microprocessor or integrated into the mainboard of a mobile
device. It supplies secure storage of cryptographic keys and provides a secure environment for
applications. An evolution of secure elements is the Trusted Execution Environment or TEE which
takes the concept a bit further. It provides a full-blown isolated execution environment that runs
alongside the main OS. This provides isolation of the applications from the main OS and other
applications installed there. It also isolates secure processes from each other when running in the TEE.
TPMs have received criticism around trusting the manufacturer. Since the secret key is burned into the
hardware at the time of manufacture, the manufacturer would have access to this key at the time. It is
possible for the manufacturer to store the keys that could then be used to duplicate a TPM, that could
break the security the module is supposed to provide. There's been one report of a physical attack on a TPM which allowed a security researcher to view and access the entire contents of a TPM. But this attack required the use of an electron microscope and micron-precision equipment for manipulating the TPM's circuitry. While the process was incredibly time intensive and required highly specialized equipment, it proved that such an attack is possible despite the tamper protections in place. You can read more about it at
https://gcn.com/Articles/2010/02/02/Black-Hat-chip-crack-020210.aspx
TPMs are most commonly used for two things: ensuring platform integrity, by preventing unauthorized changes to the system in either software or hardware, and full disk encryption, utilizing the TPM to protect the entire contents of the disk.
Full Disk Encryption, or FDE, as you might have guessed from the name, is the practice of encrypting the entire drive in the system, not just sensitive files on it. This allows us to protect the entire contents of the disk from data theft or tampering. Now, there are a bunch of options for implementing FDE, like the commercial product PGP, BitLocker from Microsoft, which integrates very well with TPMs, FileVault 2 from Apple, and the open source software dm-crypt, which provides encryption for Linux systems.
An FDE configuration will have one partition or logical partition that holds the data to be encrypted, typically the root volume, where the OS is installed. But, in order for the volume to be booted, it must first be unlocked at boot time. Because the volume is encrypted, the BIOS can't access data on this volume for boot purposes. This is why FDE configurations will have a small unencrypted boot partition that contains elements like the kernel, bootloader, and initrd. At boot time, these elements are loaded, which then prompts the user to enter a passphrase to unlock the disk and continue the boot process.
FDE can also incorporate the TPM, utilizing the TPM encryption keys to protect the disk, and platform integrity to prevent unlocking of the disk if the system configuration is changed. This protects against attacks like hardware tampering, and disk theft or cloning.
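As a rough sketch of how the dm-crypt option mentioned above is set up in practice, the commands below create an encrypted volume using cryptsetup and LUKS. These are illustrative only: they require root, they destroy the contents of the target partition, and /dev/sdX2 is a hypothetical device name:

```
# Illustrative only — this DESTROYS the contents of the partition!
# /dev/sdX2 is a hypothetical device name; substitute with care.
cryptsetup luksFormat /dev/sdX2        # write a LUKS header, set a passphrase
cryptsetup open /dev/sdX2 cryptroot    # unlock; prompts for the passphrase
mkfs.ext4 /dev/mapper/cryptroot        # the filesystem goes on the mapped device
mount /dev/mapper/cryptroot /mnt       # data written here is encrypted on disk
```

A full-disk setup also keeps the small unencrypted boot partition described above, so the kernel and initrd can run and prompt for the passphrase.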

Before we wrap up this module on encryption, I wanted to touch on the concept of randomness.
Earlier, when we covered the various encryption systems, one commonality kept coming up that these systems rely on. Did you notice what it was? That's okay if you didn't.
It's the selection of random numbers. This is a very important concept in encryption, because if your number selection process isn't truly random, then there can be some kind of pattern that an adversary can discover through close observation and analysis of encrypted messages over time. Something that isn't truly random is referred to as pseudo-random. It's for this reason that operating systems maintain what's referred to as an entropy pool. This is essentially a source of random data that helps seed random number generators. There are also dedicated random number generators and pseudo-random number generators that can be incorporated into a security appliance or server to ensure that truly random numbers are chosen when generating cryptographic keys.
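You can see a cryptographically secure random number generator, seeded from the kernel's entropy pool, in action with the openssl command-line tool (assuming it's installed):

```shell
# Draw two independent 128-bit (16-byte) random values as hex strings.
key1=$(openssl rand -hex 16)
key2=$(openssl rand -hex 16)
echo "$key1"
echo "$key2"
```

Every run produces different output; two 128-bit draws colliding is astronomically unlikely, which is exactly the property that key generation depends on.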
I hope you found these topics in cryptography interesting and informative. I know I did when I first
learned about them.
In the next module, we'll cover the three As of security, authentication, authorization and accounting.
These three As are awesome and I'll tell you why in the next module. But before we get there, one final
quiz on the cryptographic concepts we've covered so far.

Chapter 2 Test Questions and Answers


Question 1: Plaintext is the original message, while _____ is the encrypted message.
Ciphertext
Yep! Once the original message is encrypted, the result is referred to as ciphertext.

Question 2: The specific function of converting plaintext into ciphertext is called a(n) ______.
Encryption algorithm
Nice job! An encryption algorithm is the specific function or steps taken to convert plaintext into
encrypted ciphertext.

Question 3: Studying how often letters and pairs of letters occur in a language is referred to as ______.
Frequency analysis
Great work! Frequency analysis involves studying how often letters occur, and looking for similarities
in ciphertext to uncover possible plaintext mappings.

Question 4: True or false: The same plaintext encrypted using the same algorithm and same encryption
key would result in different ciphertext outputs.
FALSE
Wohoo! If the plaintext, algorithm, and key are all the same, the resulting ciphertext would also be the
same.
Question 5: The practice of hiding messages instead of encoding them is referred to as ______.
Steganography
That's right! Steganography involves hiding messages from discovery instead of encoding them.

Question 6: ROT13 and a Caesar cipher are examples of _______.


Substitution ciphers
Awesome! These are both examples of substitution ciphers, since they substitute letters for other letters
in the alphabet.

Question 7: DES, RC4, and AES are examples of ______ encryption algorithms.
Symmetric
Exactly! DES, RC4, and AES are all symmetric encryption algorithms.

Question 8: What are the two components of an asymmetric encryption system, necessary for
encryption and decryption operations? Check all that apply.
Private key
Correct. You got it! In asymmetric encryption systems, data encrypted with the public key can only be decrypted with the matching private key, so both keys are needed.
Public key
You got it! In asymmetric encryption systems, data encrypted with the public key can only be decrypted with the matching private key, so both keys are needed.

Question 9: To create a public key signature, you would use the ______ key.
Private
Correct. Nice work! The private key is used to sign data. This allows a third party to verify the
signature using the public key, ensuring that the signature came from someone in possession of the
private key.

Question 10: Using an asymmetric cryptosystem provides which of the following benefits? Check all
that apply.
Non-repudiation
Correct. That's exactly right! Confidentiality is provided by the encryption, authenticity is achieved
through the use of digital signatures, and non-repudiation is also provided by digitally signing data.
Authenticity
Correct. That's exactly right! Confidentiality is provided by the encryption, authenticity is achieved
through the use of digital signatures, and non-repudiation is also provided by digitally signing data.
Confidentiality
Correct. That's exactly right! Confidentiality is provided by the encryption, authenticity is achieved
through the use of digital signatures, and non-repudiation is also provided by digitally signing data.

Question 11: If two different files result in the same hash, this is referred to as a ________.
Hash collision
Correct! A hash collision is when two different inputs yield the same hash.

Question 12: When authenticating a user's password, the password supplied by the user is authenticated
by comparing the ____ of the password with the one stored on the system.
Hash
Correct. Yep! Passwords are verified by hashing and comparing hashes. This is to avoid storing
plaintext passwords.

Question 13: If a rainbow table is used instead of brute-forcing hashes, what is the resource trade-off?
Rainbow tables use less computational resources and more storage space
Correct. Wohoo! Instead of computing every hash, a rainbow table is a precomputed table of hashes
and text. Using a rainbow table to lookup a hash requires a lot less computing power, but a lot more
storage space.

Question 14: In a PKI system, what entity is responsible for issuing, storing, and signing certificates?
Certificate Authority
Correct. Excellent job! The certificate authority is the entity that signs, issues, and stores certificates.

Hands on Lab: Create/Inspect Key Pair, Encrypt/Decrypt and
Sign/Verify using OpenSSL
In this lab, you'll learn how to generate RSA private and public key pairs using the OpenSSL utility.
OpenSSL is a commercial-grade utility toolkit for Transport Layer Security (TLS) and Secure Sockets
Layer (SSL) protocols. It's also a general-purpose cryptography library. OpenSSL is licensed under an
Apache-style license, which means that you're free to get it and use it for commercial and non-
commercial purposes (subject to some simple license conditions).
What you'll do:
• OpenSSL: You'll explore what generating key pairs looks like using OpenSSL by using SSH to
access the Linux instance.
• Encrypt and decrypt: You'll use the key pair to encrypt and decrypt some small amount of
data.
• Verify: You'll use the key pair to sign and verify data to ensure its accuracy.
Generating keys
Before you can encrypt or decrypt anything, you need a private and a public key, so let's generate those
first!
Generating a private key
Remember, a key pair consists of a public key that you can make publicly available, and a private key
that you need to keep secret. Shhhh. When someone wants to send you data and make sure that no one
else can view it, they can encrypt it with your public key. Data that's encrypted with your public key
can only be decrypted with your private key, to ensure that only you can view the original data. This is
why it's important to keep private keys a secret! If someone else had a copy of your private key, they'd
be able to decrypt data that's meant for you. Not good!
First, let's generate a 2048-bit RSA private key, and take a look at it. To generate the key, enter this
command into the terminal:
openssl genrsa -out private_key.pem 2048
You should see the following output (or something very similar):

This command creates a 2048-bit RSA key, called "private_key.pem". The name of the key is specified
after the "-out" flag, and typically ends in ".pem". The number of bits is specified with the last
argument. To view your new private key, use "cat" to print it to the screen, just like any other file:
cat private_key.pem

The contents of the private key file should look like a large jumble of random characters. This is actually correct, so don't worry about being able to read it.
Head's up: Your private key will look similar to this, but it won't be the same. This is super important, because if openssl was generating the same keys over and over, we'd be in serious trouble!
Generating a public key
Now, let's generate the public key from the private key, and inspect that one, too. Now that you have a private key, you need to generate a public key that goes along with it. You can give that to anyone who wants to send you encrypted data. When data is encrypted using your public key, nobody will be able to decrypt it unless they have your private key. To create a public key based on a private key, enter the command below. You should see the following output:
openssl rsa -in private_key.pem -outform PEM -pubout -out public_key.pem (followed by the output “writing RSA key”)
You can view the public key in the same way that you viewed the private key. It should look like a bunch of random characters, like the private key, but different and slightly shorter:
cat public_key.pem
Head's up: Like your private key, your public key will look different than the one in this image.
Now that both of your keys have been created, you can start using them to encrypt and decrypt data. Let's dive in!
Encrypting and decrypting
You'll simulate someone encrypting a file using your public key and sending it to you, which allows
you (and only you!) to decrypt it using your private key. Similarly, you can encrypt files using other
people's public keys, knowing that only they will be able to decrypt them.
You'll create a text file that contains some information you want to protect by encrypting it. Then, you'll
encrypt and inspect it. To create the file, enter the command below. It will create a new text file called
"secret.txt" which just contains the text, "This is a secret message, for authorized parties only". Feel
free to change this message to anything you'd like.
echo 'This is a secret message, for authorized parties only' > secret.txt

Then, to encrypt the file using your public key, enter this command:
openssl rsautl -encrypt -pubin -inkey public_key.pem -in secret.txt -out secret.enc
This creates the file "secret.enc", which is an encrypted version of "secret.txt". Notice that if you try to
view the contents of the encrypted file, the output is garbled. This is totally normal for encrypted
messages because they're not meant to have their contents displayed visually.
Here's an example of what displaying the encrypted file may look like:

The encrypted file will now be ready to send to whoever holds the matching private key. Since that's
you, you can decrypt it and get the original contents back. Remember that we must use the private key
to decrypt the message, since it was encrypted using the public key. Go ahead and decrypt the file,
using this command:
openssl rsautl -decrypt -inkey private_key.pem -in secret.enc
This will print the contents of the decrypted file to the screen, which should match the contents of
"secret.txt":

Creating a hash digest


Now, you'll create a hash digest of the message, then create a digital signature of this digest. Once that's
done, you'll verify the signature of the digest. This allows you to ensure that your message wasn't
modified or forged. If the message was modified, the hash would be different from the signed one, and
the verification would fail.
To create a hash digest of the message, enter this command:
openssl dgst -sha256 -sign private_key.pem -out secret.txt.sha256 secret.txt
This creates a file called "secret.txt.sha256" using your private key, which contains the hash digest of
your secret text file.
With this file, anyone can use your public key and the hash digest to verify that the file hasn't been
modified since you created and hashed it. To perform this verification, enter this command:
openssl dgst -sha256 -verify public_key.pem -signature secret.txt.sha256 secret.txt
This should show the following output, indicating that the verification was successful and the file hasn't
been modified by a malicious third party:

If any other output was shown, it would indicate that the contents of the file had been changed, and it's
likely no longer safe.
Conclusion
Wohoo! You've successfully used openssl to create both a public and a private key. You used them to practice file encryption and decryption, and to create and verify digital signatures.

Hands on with Hashing
Introduction
In this lab, you'll have hands-on practice demonstrating hashing and hash verification using md5sum
and shasum tools.
Md5sum is a hashing program that calculates and verifies 128-bit MD5 hashes. As with all hashing
algorithms, theoretically, there's an unlimited number of files that will have any given MD5 hash.
Md5sum is used to verify the integrity of files.
Similarly, shasum is a hashing program that calculates and verifies SHA hashes. It's also commonly used to verify the integrity of files.
In this lab, you'll see that almost any change to a file will cause its MD5 hash or SHA hashes to change.
What you'll do

• Compute: You'll create a text file and generate hashes using the md5sum and shasum tools.
• Inspect: After you generate the hash digests, you'll inspect the resulting files.
• Verify: You'll verify the hash using the md5sum and shasum tools.
• Modify: You'll modify the text file and compare these results to the original hash to observe
how the digest changes and how the hash verification process fails.
MD5
Let's kick things off by creating a text file containing some data. Feel free to substitute your own text
data, if you want. This command creates a text file called "file.txt" with a single line of basic text in it:
echo 'This is some text in a file, just so we have some data' > file.txt
You'll now generate the MD5 sum for the file and store it. To generate the sum for your new file, enter
this md5sum command:
md5sum file.txt > file.txt.md5
This creates the MD5 hash, and saves it to a new file. You can take a look at the hash by printing its
contents to the screen, using this command:
cat file.txt.md5
This should print the hash to the terminal, which should look something like this:

More importantly, you can also verify that the hash is correct, and that the original file hasn't been
tampered with since the sum was made. To do this, enter this command and see the following output,
which indicates that the hash is valid:
md5sum -c file.txt.md5

Verifying an invalid file
Next, we'll demonstrate the security of this process by showing how even a single-character change to
the file results in a different hash. First, you'll create a copy of the text file, and insert a single space at
the end of the file. Feel free to use any text-editor that you'd like. Head's up that we've included
instructions for making this change in Nano. To make a copy of the file, enter this command:
cp file.txt badfile.txt
Then generate a new md5sum for the new file:
md5sum badfile.txt > badfile.txt.md5
Note that the resulting hash is identical to the hash for our original file.txt, despite the filenames being different. This is because hashing only looks at the data, not the metadata of the file:
cat badfile.txt.md5
cat file.txt.md5
To open the text file in Nano, enter this command:
nano badfile.txt
This will open the file in the text editor. To add a space to the end of the file, use the arrow keys (not
the mouse!) to move the cursor to the end of the line of text. Then, press the spacebar to add a space
character to the end of the file. Your screen should look like this image:

To save the file, press ctrl+X. Confirm by typing "Y" for "yes", then press "Enter" to confirm.
This will take you back to the normal terminal screen. Now that you've made a very minor change to
the file, try verifying the hash again. It should fail verification this time, showing that any change at all
will result in a different hash. Try to verify it by entering this command again:
md5sum -c badfile.txt.md5
You should see a message that shows that the verification wasn't successful:

To see how different the hash of the edited file is, generate a new hash and inspect it:
md5sum badfile.txt > new.badfile.txt.md5
cat new.badfile.txt.md5
Check out how it's different from our previously generated hash:

For reference, here are the contents of the original sum:

SHA
Let's do the same steps, but for SHA1 and SHA256 hashes using the shasum tool. Functionally, the two
work in very similar ways, and their purpose is the same. But SHA1 and SHA256 offer stronger
security than MD5, and SHA256 is more secure than SHA1. This means that it's easier for a malicious
third party to attack a system using MD5 than one using SHA1. And because SHA256 is the strongest
of the three, it's currently widely used.
SHA1
To create the SHA1 sum and save it to a file, use this command:
shasum file.txt > file.txt.sha1
View it by printing it to the screen, like you've done before:
cat file.txt.sha1

Now, verify the hash using the command below. (Like before, this would fail if the original file had been changed.)
shasum -c file.txt.sha1
You should see the following output, indicating that the verification was a success:

SHA256
The same tool can be used to create a SHA256 sum. The "-a" flag specifies the algorithm to use, and defaults to SHA1 if nothing is specified. To generate the SHA256 sum of the file, use this command:
shasum -a 256 file.txt > file.txt.sha256
You can output the contents of this file, the same as before:
cat file.txt.sha256
SHA256's increased security comes from it creating a longer hash that's harder to guess. You can see that the contents of the file here are much longer than the SHA1 file:

Finally, to verify the SHA256 sum, you can use the same command as before:
shasum -c file.txt.sha256
Conclusion
Congrats! You've successfully created and verified hashes using three of the most common hashing
algorithms: MD5, SHA1, and SHA256. You've also demonstrated the security of these hashes by
showing that even very small changes to the file result in a new hash and cause the verification to fail.

Chapter 3: AAA Security (Not Roadside Assistance)
In the third week of this course, we'll learn about the "three A's" in cybersecurity. No matter what type
of tech role you're in, it's important to understand how authentication, authorization, and accounting
work within an organization. By the end of this module, you'll be able to choose the most appropriate
method of authentication, authorization, and level of access granted for users in an organization.
Learning Objectives
• Identify and describe the most common authentication services.
• Understand and be able to choose the most appropriate method of authentication or
authorization.
• Be able to grant the appropriate level of access for the users of an organization.

Authentication
Authentication Best Practices
In this module, we'll cover the three A's of security, which are authentication, authorization, and
accounting. We'll cover exactly what they are, how they relate to each other, and their common
implementations and protocols.
Let's kick things off with authentication, and by extension identification. You should be familiar with
authentication in the form of username and password prompts when accessing things like your email.
So let's take that as an example to show the differences between identification and authentication.
Identification is the idea of describing an entity uniquely. For example, your email address is your
identity when logging into your email. But how do you go about proving you are who you claim to be?
That's the process we call authentication. When accessing your email, you're claiming to be your
email address, and you'd supply a password associated with the identity to prove it's you, or at least that you know the password associated with the email account. Pretty straightforward, right?
This is distinctly different from authorization, which pertains to the resources an identity has access
to. These two concepts are usually distinguished from each other in the security world, with the terms
“authn” for authentication and “authz” for authorization. In our e-mail account login example, by
authenticating using your email address and password, your identity is authorized to access your email
inbox, but you're not authorized to access anyone else’s inbox.
We really don't want anyone else getting access to our inbox, right? So what can we do to ensure that
only we are able to identify and authenticate as our e-mail account? We could start by ensuring that
we're using a strong password. But what exactly constitutes a strong password? Well, what do you think of the password “ponies”? Would you categorize that as a strong password? I hope not. That password is super short, at only six characters, and all of those characters are lowercase letters. This is a short and simple password that could be easily broken through brute force or dictionary-based attacks. “Ponies” would almost definitely be in a dictionary file, and six characters doesn't provide a large pool of possibilities for an attacker to try. We can ensure a password is strong by making it longer and
more complex, adding numbers, uppercase letters, and special characters like punctuation. What do you
think of the strength of this “F$dr*!20Bq” password? That seems way more secure, doesn't it? It adds
complexity, which increases the pool of possible passwords, and is longer at 10 characters. But which
of these two passwords do you think you would be able to remember tomorrow? Probably not the
strong one, right?
This highlights a super important concept in security. There's often a trade-off between security and
usability. With our password example, the more usable password that's easy to memorize is less secure,
while the more secure password is much more challenging to remember. This concept applies to many
other security subjects, not just passwords. You can think of security as risk mitigation, and when it
comes to risk mitigation, it's impossible to completely eliminate the risk. The best you can do is
understand the risks your systems face, take measures to reduce those risks, and monitor them. Think
about it like this, the most secure computer system is one that's disconnected from everything,
including networking, and even power and is locked in a concrete bunker hundreds of feet underground
that no one has access to. While this is an incredibly secure machine, almost impossible to compromise, it's basically useless since it's powered off and no one can access it. This is an extreme example of the security versus usability trade-off, but you get the point.

51
Coming back to our password example, we obviously need to find some sort of happy medium, where we have a reasonably secure password that's also somewhat easy to memorize. How about something like “ILik3P0n1ez99”? We start with a short phrase, Ilikeponies, then replace some letters with numbers that resemble the letters to help with memorization. We also swapped the S with a Z since they're similar sounding, and tacked on some numbers as a suffix. At first glance, this seems like a very complex password that would be hard to memorize, but it's easier than the random password example. Problem solved, right? Well, you should actually be wary of this number substitution process, since it's well known by attackers and password cracking tools.
As an IT support specialist, ensuring that your organization uses strong passwords and practices good password hygiene is super important. They're literally the keys to the kingdom. So what should we do?
Incorporating good password policies into an organization is key to ensuring that employees are
securing their accounts with strong passwords. A good password policy system would enforce length
requirements, character complexity, and check for the presence of dictionary words, which would
undermine the strength of passwords. Passwords should never be written down or recorded in plain
text, re-used across accounts or be shared with anyone. Password reuse is a risk because in the event
the password for one account is compromised, other accounts using the same password would also be
at risk. Sharing passwords should also be a no go, since this undermines the identity of an account
because someone else now has the ability to log in as that user. Along with requiring the use of strong
passwords, a password rotation policy is also recommended, since this safeguards against potential
undetected compromised passwords. But it's important that a password rotation period isn't too short.
Why? The inconvenience of having to change passwords so often may actually encourage poor security
behaviour by users. So let's say you required your organization to create highly complex passwords and
to change them every three months. It's very likely that a significant percentage of users would write
down their passwords on post-it notes or on their phones. A big no no. Despite the policy being
designed to increase security, it actually has the opposite effect because of the inconvenience it causes
your users.

Multifactor Authentication
In the last video, we learned about basic authentication in the form of username/password, sometimes
referred to as single-factor authentication. But there are other more complex and secure authentication
mechanisms. Keep in mind the security versus usability tradeoff, as we work through the different
types of multifactor authentication.
Multifactor authentication is a system where users are authenticated by presenting multiple pieces of
information or objects. The many factors that comprise a multifactor authentication system can be
categorized into three types. Something you know, something you have, and something you are.
Ideally, a multifactor system will incorporate at least two of these factors. Something you know would
be something like a password, or a pin for your bank or ATM card. Something you have would be a
physical token, like your ATM or bank card. Something you are would be a piece of biometric data,
like a fingerprint or iris scan.

The premise behind multifactor authentication is that an attacker would find it much more difficult to
steal or clone multiple factors of authentication, assuming different types are used. If multiple
passwords are used, security isn't enhanced by that much. This is because passwords, however many,
are still susceptible to phishing or keylogging attacks. Using a password in conjunction with a security token, though, is a game changer. Even if the password is compromised by a phishing attack, the
attacker would also need to steal or clone the physical token to be able to access the account and that's
much less likely to happen. We won't cover passwords again here since we talked about them in detail
in the last section. But here's the quick rundown.
Physical tokens can take a few different forms. Common ones include a USB device with a secret
token on it, a standalone device which generates a token, or even a simple key used with a traditional
lock. A physical token that's commonly used generates a short-lived token. Typically a number that's
entered along with a username and password. This number is commonly called a One-Time-Password
or OTP, since it's a short-lived, constantly changing value. An example of this is the RSA SecurID
token. It's a small, battery-powered device with an LCD display, that shows a One-Time-Password
that's rotated periodically. This is a time-based token sometimes called a TOTP, and operates by
having a secret seed or randomly generated value on the token that's registered with the authentication
server. The seed value is used in conjunction with the current time to generate a One-Time-Password.
Now, as long as the user has possession of their token, or can view the display of the token, they are
able to log in. I should also call out that the scheme requires the time between the authenticator token,
and the authentication server to be relatively synchronized. This is usually achieved by using the
Network Time Protocol or NTP. An attacker would need to either steal the physical token or clone the
token if they're able to steal the secret seed value. Since a time-based token is synchronized with the
server using time, which is not a secret, that would be sufficient for an attacker to clone a token.
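The time-based scheme described above can be sketched in a few lines of Python. This is an illustrative implementation of the RFC 6238 algorithm (HMAC-SHA-1 over a time-step counter), not production code; the demo uses the well-known test secret from the RFC, not a real seed.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The token and the server both hold the seed; equal clocks yield equal codes.
seed = base64.b32encode(b"12345678901234567890").decode()
print(totp(seed, digits=8, now=59))   # → 94287082 (RFC 6238 SHA-1 test vector)
```

Notice that anyone who learns the seed can generate the same codes, which is exactly why stealing the seed is equivalent to cloning the token.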
There are also counter-based tokens, which use a secret seed value along with a secret counter value
that's incremented every time a one-time password is generated on the device. The value is then
incremented on the server upon successful authentication. This is more secure than time-based tokens
for two reasons. First, the attacker would need to recover both the seed value and the counter value.
Second, the counter value keeps incrementing as the token is used, so a cloned token would only be
useful for a short period of time before the counter value drifts too far and the cloned token becomes
unsynchronized from the real token and the server. These token generators can either be physical,
dedicated devices, or they can be an app installed on a smartphone that performs the same
functionality.
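The counter-based variant is standardized as HOTP (RFC 4226). A minimal sketch, including the small look-ahead window servers use to resynchronize with a token that has generated a few unused codes, might look like this; the key and window size are illustrative.

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

class HotpServer:
    """Server side: accept a code within a small look-ahead window, then
    advance the counter past the value that matched (resynchronization)."""
    def __init__(self, key: bytes, window: int = 5):
        self.key, self.counter, self.window = key, 0, window

    def verify(self, code: str) -> bool:
        for c in range(self.counter, self.counter + self.window):
            if hmac.compare_digest(hotp(self.key, c), code):
                self.counter = c + 1     # a replay of this code will now fail
                return True
        return False

key = b"12345678901234567890"           # RFC 4226 test key, not a real secret
server = HotpServer(key)
print(server.verify(hotp(key, 0)))      # True
print(server.verify(hotp(key, 0)))      # False: counter already advanced
```

The second `verify` call shows why a cloned counter-based token goes stale: once the server's counter moves past a value, codes generated from the old counter state are rejected.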
Another very common method for handling multifactor authentication today is the delivery of one-time-
password tokens using SMS, but this has been subject to some criticism because of attacks observed
through this channel. The problem with relying on SMS to transmit an additional authentication factor
is that you're dependent on the security processes of the mobile carrier. SMS isn't encrypted, nor is it
private, and it's possible for SMS to be intercepted by a well-funded attacker. Even worse, there have
been accounts of SMS-based multifactor codes being stolen by calling the mobile provider: the attacker
impersonates the owner of the line of service to redirect phone calls and SMS to a phone the attacker
controls. If the attacker has already compromised the password and can get SMS redirected to them,
they now get full access to the account.

53
Of course, there's a convenience tradeoff when you use a physical token. You have to carry around
another device in order to authenticate. If the device is lost or damaged, the user won't be able to
authenticate until the device is replaced. This also requires support overhead, since devices will fail, be
lost, run out of batteries, and get out of sync with the server. Using an app on a smartphone addresses
some of these issues, but still requires some additional support and inconvenience. When prompted to
log in, the user must retrieve a device or phone from their pocket and manually transcribe the numbers
into the authentication page. These generated one-time passwords are also susceptible to man-in-the-
middle-style phishing attacks. A user can be tricked into going to a fake authentication page by a
phishing email, something along the lines of, "your account has been compromised, please log in and
change your password immediately." When the victim enters their credentials into the fake page,
including the one-time password, the attacker has all the information needed to take over the account.
The other category of multifactor authentication is biometrics, which has gained in popularity in recent
years, especially in mobile devices. Biometric authentication is the process of using unique
physiological characteristics of an individual to identify them. By confirming the biometric signature,
the individual is authenticated. A very common use of this in mobile devices is fingerprint scanners to
unlock phones. This works by registering your fingerprints first, using an optical sensor that captures
images of the unique pattern of your fingerprint. Much like how passwords should never be stored in
plain text, biometric data used for authentication should also never be stored directly. This is even more
important for biometric data. Unlike passwords, biometrics are an inherent part of who someone is, so
there are privacy implications to the theft or leak of biometric data. Biometric characteristics can also
be super difficult to change in the event that they're compromised, unlike passwords. So, instead of
storing the fingerprint data directly, the data is run through a hashing
algorithm and the resulting unique hash is stored. One advantage of biometric authentication over
knowledge or token-based systems, is that it's more reliable to identify an individual for authentication,
since biometric features aren't usually shareable. For example, you can't give your friend your
fingerprints so that they can log in as you. Well, you'd hope not anyway. But as schools start to
introduce fingerprint based attendance recording systems, students are finding ways to trick the system.
They're creating fake fingerprints using things like glue, allowing friends to mark each other as
present if they're late or if they skip school. This is harder to achieve than sharing a password, but it's
sort of ingenious of these kids to think up. They really go the extra mile to skip school these days. Not
that I'm condoning this behaviour, but you can read more about it at
http://www.planetbiometrics.com/article-details/i/5774/desc/indian-pupils-cheat-biometric-system-
with-glue/. Other biometric systems use features like iris scans, facial recognition, gait detection, and
even voice. Microsoft developed the biometric authentication system for Windows 10, called Windows
Hello, which supports fingerprint identification, iris identification and facial recognition. It uses two
cameras, one for color and one for infrared, which allows for depth detection. This way, it's not
possible to trick the system using a printout of an authorized user's face.
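The "store a hash, never the raw data" approach mentioned earlier can be sketched as follows. Note this is a simplification: real fingerprint scans vary between readings, so production systems match fuzzy feature templates rather than exact hashes. This sketch assumes a hypothetical, exactly reproducible template purely to illustrate the storage principle.

```python
import hashlib
import hmac
import os

def enroll(template: bytes) -> tuple[bytes, bytes]:
    """Store only a salted hash of the (hypothetical) stable biometric
    template, never the template itself."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + template).digest()

def verify(template: bytes, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash from a fresh reading and compare in constant time."""
    return hmac.compare_digest(hashlib.sha256(salt + template).digest(), stored)

salt, stored = enroll(b"minutiae-feature-vector")       # made-up template data
print(verify(b"minutiae-feature-vector", salt, stored)) # True
print(verify(b"someone-else", salt, stored))            # False
```

The payoff is the same as with password hashing: if the database leaks, the attacker gets hashes, not the biometric data itself.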
An evolution of physical tokens is the U2F or Universal Second Factor. It's a standard developed
jointly by Google, Yubico, and NXP Semiconductors. The finalized standards for U2F are hosted by
the FIDO Alliance. U2F incorporates a challenge-response mechanism, along with public key
cryptography to implement a more secure and more convenient second-factor authentication solution.
U2F tokens are referred to as security keys and are available from a range of manufacturers. Support
for U2F authentication is built into the Chrome browser and the Opera browser, with native Firefox
support coming soon. Security keys are essentially small embedded cryptoprocessors, that have secure
storage of asymmetric keys and additional slots to run embedded code. Let's do a quick rundown on
how exactly security keys work, and how they're an improvement over an OTP solution.

54
The first step is registration, since the security key must be registered with a site or service. At
registration time, the security key generates a private-public key pair unique to that site, and submits
the public key to the site for registration. It also binds the identity of the site with the key pair. The
reason for unique key pairs for each site is for privacy reasons. If a site is compromised, this prevents
cross-referencing registered public keys, and discovering commonalities between sites based on
registration data. Once registered with the site, the next time you're prompted to authenticate, you'll be
prompted for your username and password, as usual, but afterwards, you'll be prompted to tap your
security key. When you physically tap the security key, it's a small check for user presence to ensure
malware can’t authenticate on your behalf, without your knowledge. This tap will unlock the private
keys stored in the security key, which is used to authenticate.
The authentication happens as a challenge-response process, which protects against replay attacks. This
is because the authentication session can't be used again later by an eavesdropper, because the
challenge and resulting response will be different with every authentication session. What happens is
the site generates a challenge, essentially, some randomized data and sends this to the client that's
attempting to authenticate. The client will then select the private key matching the site, and use this key
to sign the challenge and send the signed data back. The site can now verify the signature using the
public key that was registered earlier. If the signature checks out, the user is authenticated. From a
security perspective, this is a much more secure design than OTPs. This is because the authentication
flow is protected from phishing attacks, given the interactive nature of the process.
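The challenge-response exchange can be sketched as below. A real U2F security key signs the challenge with a per-site asymmetric private key; since the Python standard library has no asymmetric signing, an HMAC key stands in here purely to illustrate why a replayed response fails against a fresh challenge.

```python
import hashlib
import hmac
import os

class SecurityKey:
    """Toy stand-in for a U2F key: one secret per registered site.
    (Real keys generate a key PAIR and hand back only the public half.)"""
    def __init__(self):
        self.site_keys = {}

    def register(self, site: str) -> bytes:
        self.site_keys[site] = os.urandom(32)
        return self.site_keys[site]   # real U2F would return a public key here

    def sign(self, site: str, challenge: bytes) -> bytes:
        return hmac.new(self.site_keys[site], challenge, hashlib.sha256).digest()

class Site:
    def __init__(self, name: str, registered_key: bytes):
        self.name, self.key = name, registered_key

    def new_challenge(self) -> bytes:
        return os.urandom(32)         # fresh randomness for every login attempt

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self.key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

key = SecurityKey()
site = Site("example.com", key.register("example.com"))

c1 = site.new_challenge()
r1 = key.sign("example.com", c1)
print(site.verify(c1, r1))                  # True: valid response to fresh challenge
print(site.verify(site.new_challenge(), r1))  # False: replayed response is rejected
```

Because the challenge is different every session, an eavesdropper who captures one signed response gains nothing for the next login.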
While U2F doesn't directly protect against man in the middle attacks, the authentication should take
place over a secure TLS connection, which would provide protection from this type of attack. Security
keys are also resistant to cloning or forgery, because they have unique, embedded secrets on them and
are protected from tampering. From the convenience perspective, this is a much nicer authentication
flow compared to OTPs since the user doesn't have to manually transcribe a string of numbers into the
authentication dialog. All they have to do is tap their security key. Nice and easy.
As an IT support specialist, you may come across multifactor authentication setups, that you'll be
responsible for supporting. You might even be tasked with helping to implement one. So, it's important
to understand how they provide enhanced account protection, along with the options that are available.

55
Certificates
In the last section, we mentioned client certificates as a form of authentication.
As we learned earlier, certificates are public keys that are signed by a certificate authority or CA as a
sign of trust.
We covered TLS server certificates, but there can also be client certificates. These operate very
similarly to server certificates but are presented by clients and allow servers to authenticate and verify
clients.
As an IT support specialist, it's important for you to understand client certificates and certificate-based
authentication, since you might encounter these in the course of your career. It's not uncommon for
VPN systems or enterprise Wi-Fi setups to use client certificates for authentication. So understanding
how they work will help you troubleshoot issues.
In order to issue client certificates, an organization must set up and maintain CA infrastructure to issue
and sign certificates. Part of certificate authentication also involves the client authenticating the server,
giving us mutual authentication. This is a positive since the client can verify that it's talking to the real
authentication server and not an impersonator. In this case, all clients that are using certificate
authentication would also need to have the CA certificate in their certificate trust store. This establishes
trust with the CA and allows the client to verify it's talking to the real server when trying to
authenticate.
Certificate authentication is like presenting identification at the airport. You show your ID or your
certificate to prove who you are. The ID is checked to see if it was issued by an authority that is trusted
by the verifier. Was it issued by a government entity or is it a novelty license from a gift shop?
Obviously, one of those IDs would be accepted at the airport, similar to a certificate being signed by a
trusted CA. When you're at the airport, the expiration date on your ID will also be checked to ensure it's
still valid. The same thing applies to certificate authentication, although the certificates have two dates
that need to be verified.
“Not valid before”, and “not valid after”. Not valid before is checking to see if the certificate is valid
yet since it's possible to have certificates issued for future use. Not valid after is a straightforward
expiration date, after which the certificate is no longer valid.
Airport authorities also have a list of specific IDs that are flagged. If your ID is on that list, then you'll
be rejected for air travel. Similarly, the certificate will be checked against a certificate revocation list
or a CRL. This is a signed list published by the CA which defines certificates that have been explicitly
revoked.
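The three checks in this analogy, a trusted issuer, the validity window, and the revocation list, can be sketched as follows. The certificate is represented as a simplified dictionary rather than a real X.509 structure, and all names and serial numbers are made up.

```python
from datetime import datetime, timezone

def certificate_is_acceptable(cert: dict, trusted_cas: set, crl: set) -> bool:
    """Sketch of certificate verification: trusted issuer, validity dates,
    and absence from the CA's certificate revocation list."""
    now = datetime.now(timezone.utc)
    if cert["issuer"] not in trusted_cas:
        return False          # like an ID from a novelty gift shop
    if not (cert["not_valid_before"] <= now <= cert["not_valid_after"]):
        return False          # not yet valid, or already expired
    if cert["serial"] in crl:
        return False          # explicitly revoked by the CA
    return True

cert = {                       # hypothetical certificate contents
    "issuer": "Example Corporate CA",
    "serial": 1042,
    "not_valid_before": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "not_valid_after": datetime(2034, 1, 1, tzinfo=timezone.utc),
}
print(certificate_is_acceptable(cert, {"Example Corporate CA"}, crl=set()))   # True
print(certificate_is_acceptable(cert, {"Example Corporate CA"}, crl={1042})) # False: on the CRL
```

A real implementation would also verify the CA's signature over the certificate, and check possession of the private key, which the following paragraph covers.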
One last step that's performed as part of the certificate verification process is to prove
possession of the corresponding private key, since the certificate is just a signed public key. If we don't
prove possession, there's nothing stopping an attacker from copying the certificate, since it's not
considered secret, and pretending to be the owner. To avoid this, possession of the private key is
verified through a challenge response mechanism. This is where the server requests a randomized bit of
data to be signed using the private key corresponding to the public key presented for authentication.
This is similar to how the airport checks the photo on your ID to make sure you look like the person in
the photo and aren't impersonating them.

56
LDAP
LDAP, or Lightweight Directory Access Protocol, is an open industry-standard protocol for
accessing and maintaining directory services.
When we say directory services, we're referring to something similar to a phone or email directory. It's
most commonly used as a backend for authentication of accounts. The LDAP specification describes
the data structure of the directory itself and defines functions for interacting with the service, like
performing look ups and modifying data.
You can think of a directory like a database, but with more details or attributes, describing entities
within the database.
The structure of an LDAP directory is a sort of tree layout and is optimized for retrieval of data more so
than writing. Think of it as being similar to a phone book being used for looking up data far more often
than making modifications to that data.
Directories can be hosted across lots of different LDAP servers to facilitate more rapid look ups, and
are kept in sync through replication of the directory. So what kind of data gets stored in a directory entry,
exactly? Similar to an address book, an entry for a particular user will contain information pertaining to
that user account, like their first and last name, phone number, desk location, email address, login shell,
and other such data. Along with object attributes, the location of an entry within the overall data
structure represents information about the object through its relationships to other objects.
Because LDAP uses a tree structure called a Data Information Tree, objects will have one parent and
can have one or more children that belong to the parent object. You can also think about this like a file
system, with a root file system and folders under that. The folder an object belongs to will provide
information about that object because of its relationship to the parent object. In LDAP language, we
call these folders Organizational Units, or OUs. They let us group related objects into units, like people
or groups, to distinguish between individual user accounts and groups that accounts can belong to. This
tree structure also allows for inheritance and nesting of objects, where attributes or properties of a
parent object can be inherited by children further down the tree.
Now, since it is possible for entries in the directory to share attributes, there must be a unique identifier
for each entry. We call this the Distinguished Name, or DN. Coming back to our file system analogy, you
can think of a DN as a full path to a file as opposed to a file name. This is because you can have
multiple files with the same file name across a file system but the fully qualified path to the file would
describe one unique file.
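Here's a small illustration of that analogy, using hypothetical example DNs: two entries can share the same `cn` attribute, but their full DNs are unique because they sit under different OUs.

```python
# Hypothetical directory entries; a DN reads leaf-to-root, like a file path in reverse.
directory = {
    "cn=jsmith,ou=People,dc=example,dc=com":  {"mail": "jsmith@example.com"},
    "cn=jsmith,ou=Service,dc=example,dc=com": {"mail": "svc-jsmith@example.com"},
}

def parent_ou(dn: str) -> str:
    """Everything after the first comma locates the entry within the tree."""
    return dn.split(",", 1)[1]

# Same cn, different locations in the Data Information Tree:
for dn in directory:
    print(dn, "->", parent_ou(dn))
```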
Some of the more common operations that can be called by a client to interact with an LDAP server
are: bind, which is how clients authenticate to the server; StartTLS, which permits a client to
communicate using LDAP v3 over TLS; search, for performing lookups and retrieval of records; add,
delete, and modify, which are various operations to write data to the directory; and unbind, which
closes the connection to the LDAP server. There are many implementations of LDAP servers, like
Microsoft's Active Directory, or OpenLDAP as an open-source implementation.

57
RADIUS
RADIUS, or Remote Authentication Dial-In User Service, is a protocol that provides AAA services
for users on a network.
It's a very common protocol used to manage access to internal networks, WiFi networks, email services
and VPN services. Originally designed to transport authentication information for remote dial-up users,
it has evolved to carry a wide variety of standard authentication protocols, like EAP, or Extensible
Authentication Protocol.
While it's unlikely that you'd be responsible for configuring a RADIUS server as an IT support specialist,
you might be supporting clients that authenticate against a RADIUS back end server. In those cases, it's
good to understand what role the RADIUS server plays in this authentication scenario so you're better
prepared to troubleshoot issues that may come up.

Clients who want to authenticate to a RADIUS server don't directly interact with it. Instead, when a
client wants to access a resource that's protected, the client will present authentication credentials to a
NAS or Network Access Server which will relay the credentials to the RADIUS server.
The RADIUS server will then verify the credentials using a configured authentication scheme.
RADIUS servers can verify user authentication information stored in a flat file or can plug into external
sources like SQL databases, LDAP, Kerberos, or Active Directory. Once the RADIUS server has
evaluated the user authentication request, it replies with one of three messages: Access-Reject,
Access-Challenge, or Access-Accept.

58
Kerberos
Kerberos is a network authentication protocol that uses tickets to allow entities to prove their identity
over potentially insecure channels to provide mutual authentication.
It also uses symmetric encryption to protect protocol messages from eavesdropping and replay attacks.
The name Kerberos is taken from the Greek mythical character of the same name: a three-headed guard
dog protecting the gates to Hades, the underworld. Seems like an appropriate choice for an
authentication protocol, don't you think? Kerberos was originally developed at the Massachusetts
Institute of Technology in the US, and was published in the 1980s as version four. Years later, in 1993,
version 5 was published.
Today Kerberos supports AES encryption, and implements checksums to ensure data integrity and
confidentiality. When joined to a Windows domain, Windows 2000 and newer versions will use
Kerberos as the default authentication protocol. Microsoft also implemented its own Kerberos
service with some modifications to the open protocol, like the addition of the RC4 stream cipher.
We mentioned tickets earlier, which are a sort of token that proves your identity. They can be used for
authenticating to services protected using Kerberos, or in other words, within the Kerberos realm. The
authentication tickets let users authenticate to services without requiring username and password
authentication for every service individually. A ticket will expire after some time, but it has provisions
for automatic transparent renewal of the ticket.
Let's run down the details of how the Kerberos protocol operates. First, a user that wants to
authenticate enters their username and password on their client machine. Their Kerberos client
software will then take the password and generate a symmetric encryption key from it. Next, the client
sends a plain-text message to the Kerberos AS, or Authentication Server, which includes the user ID of
the authenticating user. The password, or the secret key derived from the password, isn't transmitted.
The AS uses the user ID to check whether there is an account in the authentication database, like an
Active Directory server.

If so, the AS will generate the secret key using the hashed password stored in the key distribution
center server (illustrated as KDC). The AS will then use the secret key to encrypt and send a message
containing the client TGS session key. This is a secret key used for encrypting communications with
the Ticket Granting Service, or TGS, which is already known by the Authentication Server. The AS
also sends a second message with a Ticket Granting Ticket, or TGT, which is encrypted using the TGS
secret key.

59
The Ticket Granting Ticket contains information like the client ID, the ticket validity period, and the
client Ticket Granting Service session key. The first message from the AS can be decrypted using the
shared secret key derived from the user's password, which provides the client with the Ticket Granting
Service session key. The second message, the Ticket Granting Ticket itself, stays encrypted under the
TGS secret key, but holding it means the client now has enough information to authenticate with the
Ticket Granting Server.
Since the client has authenticated and received a valid Ticket Granting Ticket, it can use the Ticket
Granting Ticket to request access to services from within the Kerberos realm. This is done by sending a
message to the Ticket Granting Service with the encrypted Ticket Granting Ticket received from the
AS earlier, along with the name or ID of the service the client is requesting access to.

The client also sends a message containing an authenticator, which has the client ID and a timestamp
encrypted with the client Ticket Granting Service session key from the AS. The Ticket Granting
Service decrypts the Ticket Granting Ticket using the Ticket Granting Service secret key, which
provides the Ticket Granting Service with the client Ticket Granting Service session key. It then uses
this key to decrypt the authenticator message.
Next, it checks the client ID of these two messages to ensure they match. If they do, it sends two
messages back to the client.

The first one contains the client-to-server ticket, which is comprised of the client ID, client address,
validity period, and the client-server session key, encrypted using the service's secret key. The second
message contains the client-server session key itself, and is encrypted using the client Ticket Granting
Service session key. Finally, the client has enough information to authenticate itself to the service
server, or SS.

The client sends two messages to the SS. The first message is the encrypted client-to-server ticket
received from the Ticket Granting Service.

60
The second is a new authenticator with the client ID and time stamp encrypted using the client-server
session key. The SS decrypts the first message using its secret key which provides it with the client-
server session key. The key is then used to decrypt the second message, and it compares the client ID in
the authenticator to the one included in the client to server ticket.

If these IDs match, then the SS sends a message containing the timestamp from the client-supplied
authenticator, encrypted using the client-server session key. The client then decrypts this message and
checks that the timestamp is correct, authenticating the server.

If this all succeeds, then the server grants access to the requested service on the client.
Wow, okay, are you with me? I know that was a lot.
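To make the sequence concrete, here's a condensed, runnable sketch of the whole exchange. A toy XOR-keystream cipher stands in for the AES encryption real Kerberos uses, tickets are plain dictionaries, and validity periods and realms are omitted; the point is only to trace which key decrypts which message.

```python
import hashlib
import json
import os
import time

# --- toy symmetric cipher (XOR with a SHA-256 keystream); real Kerberos uses AES ---
def _ks(key: bytes, n: int) -> bytes:
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def enc(key: bytes, obj: dict) -> bytes:
    data = json.dumps(obj, sort_keys=True).encode()
    return bytes(a ^ b for a, b in zip(data, _ks(key, len(data))))

def dec(key: bytes, blob: bytes) -> dict:
    return json.loads(bytes(a ^ b for a, b in zip(blob, _ks(key, len(blob)))))

# --- long-term secrets held by the KDC ---
k_user = hashlib.sha256(b"alice-password").digest()  # derived from the user's password
k_tgs = os.urandom(32)                               # TGS secret key
k_svc = os.urandom(32)                               # service server secret key

# 1. AS exchange: client sends only its user ID; the AS replies with two messages.
tgs_session = os.urandom(32).hex()
msg_a = enc(k_user, {"tgs_session": tgs_session})                    # for the client
tgt   = enc(k_tgs,  {"client": "alice", "tgs_session": tgs_session}) # the TGT

# 2. Client decrypts message A with the password-derived key -> TGS session key.
session = bytes.fromhex(dec(k_user, msg_a)["tgs_session"])

# 3. TGS exchange: client forwards the (still-encrypted) TGT plus an authenticator.
authenticator = enc(session, {"client": "alice", "ts": time.time()})
tgt_contents = dec(k_tgs, tgt)
auth = dec(bytes.fromhex(tgt_contents["tgs_session"]), authenticator)
assert auth["client"] == tgt_contents["client"]      # the IDs must match

svc_session = os.urandom(32).hex()
ticket = enc(k_svc, {"client": "alice", "svc_session": svc_session})
msg_for_client = enc(session, {"svc_session": svc_session})

# 4. Client -> service server: ticket plus a fresh authenticator.
svc_key = bytes.fromhex(dec(session, msg_for_client)["svc_session"])
auth2 = enc(svc_key, {"client": "alice", "ts": time.time()})
ticket_contents = dec(k_svc, ticket)
assert dec(bytes.fromhex(ticket_contents["svc_session"]), auth2)["client"] == ticket_contents["client"]

# 5. Mutual auth: the server echoes the timestamp under the session key.
reply = enc(svc_key, {"ts": dec(svc_key, auth2)["ts"]})
print("mutual authentication:", dec(svc_key, reply)["ts"] == dec(svc_key, auth2)["ts"])
```

Notice that the user's password never crosses the network, and the client never decrypts the TGT or the service ticket; it only carries them.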
Kerberos has received some criticism because it's a single monolithic service. This creates a single
point of failure danger. If the Kerberos service goes down, new users won't be able to authenticate and
log in. Aside from the availability issues, if the central Kerberos server is compromised, the attacker
would be able to impersonate any user by generating valid Kerberos tickets for their user account.
Kerberos enforces strict time requirements requiring the client and server clocks to be relatively closely
synchronized, otherwise, authentication will fail. This is usually accomplished by using NTP to keep
both parties synchronized using an NTP server.
The trust model of Kerberos is also problematic, since it requires clients and services to have an
established trust in the Kerberos server in order to authenticate using Kerberos. This means, it's not
possible for users to authenticate using Kerberos from unknown or untrusted clients. So things like
BYOD or Bring Your Own Device, and cloud computing are incompatible, or at least very challenging
to implement securely with Kerberos authentication.
Now as an IT support specialist, you're likely to encounter Kerberos authentication especially, in
environments running Microsoft Active Directory. Understanding how the underlying protocol
functions will help when troubleshooting issues that may come with it.

61
TACACS+
TACACS+ (pronounced "tack-axe plus") stands for Terminal Access Controller Access-Control System Plus.
It's a Cisco-developed AAA protocol that was released as an open standard in 1993. It replaced the
older TACACS protocol, developed in 1984 for MILNET, the unclassified network for DARPA, which
later evolved into NIPRNet. TACACS+ also took the place of XTACACS, or Extended TACACS, which
was a Cisco proprietary extension on top of TACACS.
TACACS+ is primarily used for device administration, authentication, authorization, and accounting,
as opposed to RADIUS, which is mostly used for network access AAA.
It's important to call out these differences in the characteristics of what these services provide, though
the differences are primarily related to the authorization and accounting portions, more so than
authentication. While you might not encounter a TACACS+ implementation in the course of your
support career, it's something that you should be aware of.
TACACS+ is mainly used as an authentication system for network infrastructure devices, which tend to
be high value targets for attackers. This may be something to consider implementing as your
organization grows.

Single Sign-On
Single Sign-On or SSO is an authentication concept that allows users to authenticate once to be
granted access to a lot of different services and applications.
Since re-authentication for each service isn't needed, users don't need multiple sets of usernames and
passwords across a mix of applications and services. SSO is accomplished by authenticating to a
central authentication server, like an LDAP server. This then provides a cookie, or token that can be
used to get access to applications configured to use SSO. Kerberos is actually a good example of an
SSO authentication service.

The user would authenticate against the Kerberos service once, which would then grant them a ticket
granting ticket. This can then be presented to the ticket granting service in place of traditional
credentials. So, the user can enter credentials once and gain access to a variety of services.

62
SSO is really convenient. It allows users to have one set of credentials that grant access to lots of
services, making it less likely that passwords will be written down or stored insecurely. This should
also reduce the overhead for password assistance support and removes time spent re-authenticating
throughout the workday. So, what's the downside?
Well, an attacker that manages to compromise an account has a lot more access under an SSO scheme.
User credentials will grant access to all applications and services that that account is permitted to
access. So, a big plug here for using multifactor authentication in conjunction with an SSO scheme. But
this opens a new channel of attack: theft of SSO session cookies or tokens. Instead of targeting
credentials directly, attackers can try to steal the SSO tokens themselves, which will permit wide
access, even if only for a short amount of time. Stealing these tokens also lets an attacker dodge
multifactor authentication protections, since the session token permits access without requiring full
authentication until the token expires.
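A signed, expiring session token of the kind discussed above might look like the following simplified, JWT-like sketch; the token format is invented for illustration, and a real deployment would use an established standard rather than hand-rolled signing.

```python
import base64
import hashlib
import hmac
import json
import os
import time

SSO_KEY = os.urandom(32)   # held by the central authentication server

def issue_token(user: str, lifetime_s: int = 3600) -> str:
    """Sign {user, expiry} so participating services can verify it
    without re-prompting for credentials."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"user": user, "exp": time.time() + lifetime_s}).encode()).decode()
    sig = hmac.new(SSO_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str):
    """Return the username if the token is authentic and unexpired, else None."""
    payload_b64, sig = token.split(".")
    expected = hmac.new(SSO_KEY, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None   # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["user"] if time.time() < claims["exp"] else None

token = issue_token("alice")
print(verify_token(token))                      # alice
print(verify_token(issue_token("alice", -1)))   # None: already expired
```

Note that any service holding a valid, unexpired token accepts it as proof of identity, which is exactly the token-theft risk described above.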
An example of an SSO system is the OpenID decentralized authentication system. This is an open
standard that allows participating sites, known as Relying Parties, to authenticate users using a third-
party authentication service. This allows sites to permit authentication without requiring the site itself
to have authentication infrastructure, which can be tricky to implement and maintain. It also lets users
access a site without requiring them to create a new account, simplifying access management across a
wide variety of sites. Instead, a user just needs to already have an account with an identity provider.
To ask for authentication, first the relying party looks up the OpenID provider, then establishes a
shared secret with the provider, if one doesn't already exist. This shared secret will be used to validate
the OpenID provider's messages. Then, the user will be redirected, or asked to authenticate in a new
window, through the identity provider's login flow. Once authenticated, the user will be prompted to
confirm whether they trust the relying party or not. Once confirmed, credentials are relayed to the
relying party, typically in the form of a token rather than actual user credentials, which indicates the
user is now authenticated to the service.

63
Authentication Quiz Questions and Answers
Question 1: How is authentication different from authorization?
Authentication is verifying an identity; authorization is verifying access to a resource.
Correct Right on! Authentication is proving that an entity is who they claim to be, while authorization is determining
whether or not that entity is permitted to access resources.

Question 2: What are some characteristics of a strong password? Check all that apply.
Is at least eight characters long & Includes numbers and special characters
Correct You got it! A strong password should contain a mix of character types and cases, and should be relatively long
-- at least eight characters, but preferably more.

Question 3: In a multi-factor authentication scheme, a password can be thought of as:
something you know.
Correct. Woohoo! Since a password is something you memorize, it's something you know when talking about multi-
factor authentication schemes.

Question 4: What are some drawbacks to using biometrics for authentication? Check all that apply.
Biometric authentication is difficult or impossible to change if compromised. There are potential privacy
concerns.
Correct. That's exactly right! If a biometric characteristic, like your fingerprints, is compromised, your option for
changing your "password" is to use a different finger. This makes "password" changes limited. Other biometrics, like
iris scans, can't be changed if compromised. If biometric authentication material isn't handled securely, then
identifying information about the individual can leak or be stolen.

Question 5: In what way are U2F tokens more secure than OTP generators?
They're resistant to phishing attacks.
Correct. Great job! With one-time-password generators, the one-time password along with the username and password
can be stolen through phishing. On the flip side, U2F authentication is impossible to phish, given the public key
cryptography design of the authentication protocol.

Question 6: What elements of a certificate are inspected when a certificate is verified? Check all that apply.
Trust of the signatory CA. “Not valid after” date. “Not valid before” date.
Correct. Yep! To verify a certificate, the period of validity must be checked, along with the signature of the signing
certificate authority, to ensure that it's a trusted one.

64
Question 7: What is a CRL?
Certificate Revocation List
Correct. Good job! CRL stands for "Certificate Revocation List." It's a list published by a CA, which contains
certificates issued by the CA that are explicitly revoked, or made invalid.

Question 8: What does a directory server use to group similar entities together?
Organizational Units
Correct. Awesome! Directory servers have organizational units, or OUs, that are used to group similar entities.

Question 9: True or false: The Network Access Server handles the actual authentication in a RADIUS scheme.
False
Correct. Nice work! The Network Access Server only relays the authentication messages between the RADIUS server
and the client; it doesn't make an authentication evaluation itself.

Question 10: True or false: Clients authenticate directly against the RADIUS server.
False
Correct! Clients don't actually interact directly with the RADIUS server; the authentication is relayed via the Network
Access Server.

Question 11: What does a Kerberos authentication server issue to a client that successfully authenticates?
A ticket-granting ticket
Exactly! Once authenticated, a Kerberos client receives a ticket-granting ticket from the authentication server. This
TGT can then be presented to the ticket-granting service in order to be granted access to a resource.

Question 12: What advantages does single sign-on offer? Check all that apply.
It reduces time spent authenticating. It reduces the total number of credentials.
You nailed it! SSO allows one set of credentials to be used to access various services across sites. This reduces the
total number of credentials that might be otherwise needed. SSO authentication also issues an authentication token
after a user authenticates using username and password. This token then automatically authenticates the user until the
token expires. So, users don't need to reauthenticate multiple times throughout a work day.

Question 13: What does OpenID provide?


Authentication delegation
Yep! OpenID allows authentication to be delegated to a third-party authentication service.

65
Authorization
Authorization and Access Control Methods
Earlier, we covered authentication, the first component of the three A's of security. Next up, we'll cover
authorization, which is usually tightly coupled with authentication.
Now, while authentication is related to verifying the identity of a user, authorization pertains to
describing what the user account has access to, or doesn't have access to. These are separate and distinct
components of AAA that serve different purposes.
A user may successfully authenticate to a system by presenting valid credentials, but if the username
they authenticated as isn't also authorized to access the system in question, they'll be denied access.
When we talked about Kerberos earlier, the user authenticated and received a ticket-granting ticket.
This can then be used to request access to a specific service by sending a request to the ticket-granting
service. This is when authorization comes into play, since the ticket-granting service will decide
whether or not the user in question is permitted to access the service being requested.
If they're not permitted or authorized to access the service, the request will be denied by the
ticket-granting service. If the user is authorized, the ticket-granting service returns a ticket, which
authorizes the user to access the service.
One very popular open standard for authorization and access delegation is OAuth, used by companies
like Google, Facebook, and Microsoft. We'll go into OAuth in the next video.

Access Control
OAuth is an open standard that allows users to grant third-party websites and applications access to
their information without sharing account credentials.
This can be thought of as a form of access delegation because access to the user's account is being
delegated to the third party.
This is accomplished by prompting the user to confirm that they agree to permit the third party access
to certain information about their account. Typically, this prompt will specifically list which pieces of
information or access are being requested. Once confirmed, the identity provider will supply the third
party with a token that gives them access to the user's information. This token can then be used by the
third party to access data or services offered by the identity provider directly on behalf of the user.
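As a rough sketch of the flow just described, the OAuth 2.0 authorization-code grant boils down to building two requests. The endpoint URLs, client IDs, and scope names below are made up for illustration; real identity providers publish their own.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical identity-provider endpoints (real providers document their own).
AUTHORIZE_URL = "https://identity.example.com/oauth/authorize"

def build_authorization_url(client_id, redirect_uri, scope, state):
    """Step 1: send the user to the identity provider to approve access."""
    params = {
        "response_type": "code",   # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,            # limits what the issued token may access
        "state": state,            # protects against request forgery
    }
    return AUTHORIZE_URL + "?" + urlencode(params)

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Step 2: exchange the one-time code for an access token (POST body)."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })

url = build_authorization_url("meme-site", "https://memes.example/cb",
                              "email.send", "xyz123")
params = parse_qs(urlparse(url).query)   # the provider sees exactly these fields
```

The key point for the example in this section: the `scope` field is how the token ends up limited to, say, sending email but not reading cloud storage.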

66
OAuth is commonly used to grant third-party applications access to APIs offered by large internet
companies like Google, Microsoft, and Facebook. Let's say you want to use a third-party meme
creation website. This website lets you create memes using templates, and gives you the option to save
your creations and email them to your friends. Instead of the site sending the emails directly, which
would appear to be coming from an address your friends wouldn't recognize, the site uses OAuth to get
permission to send the memes using your email account directly.
This is done by making an OAuth request to your email provider. Once you approve this request, the
email provider issues an access token to the site, which grants the site access to your email account.
The access token would have a scope, which says that it can only be used to access email, not other
services associated with the account. So it can access email, but not your cloud storage files or
calendar, for example.
It's important that users pay attention to which third party is requesting access and what exactly they're
granting access to. OAuth permissions can be used in phishing-style attacks to gain access to accounts
without requiring credentials to be compromised. This works by sending phishing emails to potential
victims that look like legitimate OAuth authorization requests, asking the user to grant access to
some aspects of their account through OAuth. Once the user grants access, the attacker has access to
the account through the OAuth authorization token.
This was used in an OAuth-based worm attack in early 2017. There was a rash of phishing emails that
appeared to be from a friend or colleague who wanted to share a Google Doc. When the sharing link was
followed, the victim was prompted to log in and authorize access to email, documents, and contacts for
some third-party service, which identified itself only as "Google Apps". But it was actually a
malicious service that would then email contacts from the victim's account, perpetuating the attack. If
you'd like to read more about it, see https://www.theverge.com/2017/5/3/15534768/google-docs-
phishing-attack-share-this-document-with-you-spam
It's important to distinguish between OAuth and OpenID. OAuth is specifically an authorization
system, and OpenID is an authentication system. Though they're usually used together, OpenID
Connect is an authentication layer built on top of OAuth 2.0, designed to improve upon OpenID and
build better integration with OAuth authorizations.

Since TACACS+ is a full AAA system, it also handles authorization along with authentication. Once a
user is authenticated, TACACS+ allows or disallows the user account to run certain commands or
access certain devices. This lets you grant admin access to the users who administer devices, while
still allowing less-privileged access to other users when necessary.

67
Here's an example: since your networking teams are responsible for configuring and maintaining your
network switches, routers, and other infrastructure, you'd give them admin access to your networking
equipment. Meanwhile, you could give limited, read-only access to your support team, since they don't
need to be able to make changes to switch configurations in their jobs; read-only access is enough for
them to troubleshoot problems. The rest of the user accounts would have no access at all and wouldn't
be permitted to connect to the networking infrastructure. More sophisticated or configurable AAA
systems may even allow further refinement of authorization, down to the command level. This gives
you much more flexibility in how access is granted to specific users or groups in your organization.
RADIUS also allows you to authorize network access. For example, you may want to permit some
users to have Wi-Fi and VPN access while others may not need this. When they authenticate to the
RADIUS server, if the authentication succeeds, the RADIUS server returns configuration information
to the Network Access Server. This includes authorizations, which specify what network services the
user is permitted to access.

Access Control List


An access control list, or ACL, is a way of defining permissions or authorizations for objects.
The most common case you may encounter deals with file system permissions. A file system would
have an ACL, which is a table or database with a list of entries specifying access rights for individuals
or groups for various objects on the file system, like folders, files, or programs. These individual
access permissions per object are called Access Control Entries, and they make up the ACL. Individual
entries can define permissions controlling whether or not a user or group can read, write, or execute
objects.
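To make the structure concrete, here's a minimal Python model of an ACL made up of Access Control Entries. This is a simplification for illustration, not how any real file system implements permissions.

```python
from collections import namedtuple

# An Access Control Entry: who, which object, and which rights they hold.
ACE = namedtuple("ACE", ["principal", "obj", "rights"])

class ACL:
    """A toy ACL: a list of ACEs consulted on each access request."""
    def __init__(self):
        self.entries = []

    def grant(self, principal, obj, rights):
        self.entries.append(ACE(principal, obj, set(rights)))

    def is_allowed(self, principal, obj, right):
        # No matching entry means the access is refused (implicit deny).
        return any(e.principal == principal and e.obj == obj and right in e.rights
                   for e in self.entries)

acl = ACL()
acl.grant("alice", "/reports/q3.txt", {"read", "write"})
acl.grant("support", "/reports/q3.txt", {"read"})   # read-only group entry

acl.is_allowed("alice", "/reports/q3.txt", "write")    # allowed by her ACE
acl.is_allowed("support", "/reports/q3.txt", "write")  # denied: only "read" granted
```

The same shape carries over to network ACLs, where the "object" is a network, port, or direction of traffic rather than a file.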
ACLs are also used extensively in network security, applying access controls to routers, switches, and
firewalls. Network ACLs are used for restricting and controlling access to hosted services running on
hosts within your network. Network ACLs can be defined for incoming and outgoing traffic. They can
also be used to restrict external access to systems and limit outgoing traffic, to enforce policies or to
prevent unauthorized outbound data transfers. We'll dive deeper into network ACLs in future lessons,
when we cover network security in more detail.

68
Accounting
Tracking Usage and Access
Last but not least, the final A of the triple A's of security is accounting. This means keeping records
of what resources and services your users access, and what they did when they were using your systems.
A critical component of this is auditing, which involves reviewing these records to ensure that nothing
is out of the ordinary. If we're watching and recording usage of our systems, but never actually
checking the usage data, that's not super useful.
So what exactly do accounting systems keep track of? Well, that depends on the purpose and intent of
the system.
For example, a TACACS+ server would be more concerned with keeping track of user authentication,
what systems they authenticated to, and what commands they ran during their session. This is because
TACACS+ is a device-access AAA system that manages who has access to your network devices and
what they do on them.
Cisco's AAA system supports accounting of individual commands executed, connections to and from
network devices, commands executed in privileged mode, and network services and system details like
configuration reloads or reboots.
RADIUS will track details like session duration, client location, and bandwidth or other resources used
during the session. This is because RADIUS is a network access AAA system, so it tracks details about
network access and usage.
RADIUS accounting kicks off with the Network Access Server sending an accounting request packet
to the accounting server, which contains an event record to be logged. This starts the accounting
session on the server. The server replies with an accounting response, indicating that the message was
received. The NAS will continue sending periodic accounting messages with statistics of the session
until an accounting stop packet is received.
RADIUS accounting can be used for billing purposes by ISPs because it records the length of a session
and the amount of data sent and received by the user. This data can also be used to enforce data or time
quotas, limiting the duration of sessions or restricting the amount of data that can be sent or received.
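Here's a simplified sketch of what can be derived from RADIUS-style accounting records. The record fields, session IDs, and quota value are invented for illustration; real RADIUS accounting attributes are defined in the protocol specification.

```python
# Toy accounting log: Start, Interim-Update, and Stop records for one session.
records = [
    {"session": "s1", "type": "Start",          "time": 0,    "octets": 0},
    {"session": "s1", "type": "Interim-Update", "time": 600,  "octets": 40_000_000},
    {"session": "s1", "type": "Stop",           "time": 1800, "octets": 120_000_000},
]

def summarize(session_id, records):
    """Derive billable data from the records: duration and total octets moved."""
    mine = [r for r in records if r["session"] == session_id]
    start = next(r for r in mine if r["type"] == "Start")
    stop = next(r for r in mine if r["type"] == "Stop")
    return {"duration_s": stop["time"] - start["time"],
            "octets": stop["octets"]}

summary = summarize("s1", records)
# Enforcing a hypothetical 100 MB data quota from the same accounting data:
over_quota = summary["octets"] > 100_000_000
```

Note what's absent: nothing in these records says which websites were visited or which protocols were used, which is exactly why other logging tools are needed for that use case.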
But this accounting information isn't detailed, and won't contain specifics of what exactly the user did
during the session. Information like websites visited or what protocols were used isn't recorded. Other
logging utilities that we'll cover later meet that use case. You can head to the next video, where we'll
dive into network security, monitoring, and logging.

69
Chapter 3 Questions and Answers
Question 1: Authn is short for ________.
Authentication
Yep! Authentication is sometimes referred to as "authn" for short.

Question 2: Authz is short for ________.


Authorization
You got it! Authorization is sometimes referred to as "authz" for short.

Question 3: Authentication is concerned with determining _______.


Identity
Correct. Authentication is concerned with confirming the identities of individuals.

Question 4: Authorization is concerned with determining ______ to resources.


Access
Correct! Authorization deals with determining access to resources.

Question 5: Which of the following are valid multi-factor authentication factors? Check all that apply.
Something you know. Something you are. Something you have.
Correct. The three factors of authentication that can be combined for multi-factor authentication are: (1)
something you know, like a password; (2) something you have, like a physical token; and (3)
something you are, which would be a biometric factor.

Question 6: The two types of one-time-password tokens are ______ and ______. Check all that apply.
Counter-based. Time-based.
Correct! An OTP generator token can be counter-based, where a counter is incremented on the token
and the server upon successful authentication, or time-based, where the code is derived from the
current time and a shared secret.

70
Question 7: Security Keys are more ideal than OTP generators because they're resistant to _______
attacks.
Phishing
Yep! Where the OTP code can be phished, security keys rely on a challenge response system which
prevents phishing attacks.

Question 8: Security Keys utilize a secure challenge-and-response authentication system, which is


based on ________.
Public key cryptography
Correct! Security keys use public key cryptography to perform a secure challenge response for
authentication.

Question 9: In addition to the client being authenticated by the server, certificate authentication also
provides ______.
Server authentication
Exactly! The client will validate the server's certificate, thereby providing server authentication and
client authentication.

Question 10: Kerberos uses _____ as authentication tokens.


Tickets
Great work! Kerberos issues tickets, which represent authentication and authorization tokens.

Question 11: The authentication server is to authentication as the ticket granting service is to _______.
Authorization
Correct! Where the authentication server handles authentication of the user, the ticket granting service
determines authorization for a given service.

71
Chapter 4: Securing Your Networks
In the fourth chapter of this course, we'll learn about secure network architecture. It's important to
know how to implement security measures on a network environment, so we'll show you some of the
best practices to protect an organization's network. We'll learn about some of the risks of wireless
networks and how to mitigate them. We'll also cover ways to monitor network traffic and read packet
captures.
By the end of this module, you'll understand how VPNs, proxies, and reverse proxies work; why
802.1X is super important for network protection; why WPA/WPA2 is better than WEP; and how to
use tcpdump to capture and analyze packets on a network. That's a lot of information, but well worth it
for an IT Support Specialist to understand!
Learning Objectives:
• Implement security measures on a network environment.
• Understand the risks of wireless networks and how to mitigate them.
• Understand how to monitor network traffic and read packet captures.

72
Secure Network Architecture
Network Hardening Best Practices
Congrats on getting this far, you're over halfway through the course, and so close to completing the
program.
In this section, we'll cover a few ways to harden your networks. Network hardening is the process of
securing a network by reducing its potential vulnerabilities through configuration changes and specific
steps.
In the next few lessons, we'll do a deep dive on the best practices that an IT support specialist should
know for implementing network hardening. We'll also discuss network security protection along with
network monitoring and analysis.
There's a general security principle that can be applied to most areas of security: the concept of
disabling unnecessary extra services, or restricting access to them. Since any service that's enabled and
accessible can be attacked, this principle should be applied to network security too. Networks would be
much safer if you disable access to network services that aren't needed and enforce access restrictions.
Implicit deny is a network security concept where anything not explicitly permitted or allowed should
be denied. This is different from blocking all traffic, since an implicit deny configuration will still let
traffic pass that you've defined as allowed; you can do this through ACL configurations. This can
usually be configured on a firewall, which makes it easier to build secure firewall rules. Instead of
requiring you to specifically block all traffic you don't want, you can just create rules for traffic that
you need to go through. You can think of this as whitelisting, as opposed to blacklisting. While this is
slightly less convenient, it's a much more secure configuration: before a new service will work, a new
rule must be defined for it. If you want to learn more about how to configure firewall rules in Linux
and other implementations, take a look at the references in the supplementary reading.
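A minimal sketch of implicit deny, with made-up rules: traffic must match an allow rule, and anything that matches nothing falls through to a deny. Real firewalls express this with ordered rule sets, but the logic is the same.

```python
import ipaddress

# Whitelist-style rules: traffic matching any rule is allowed.
ALLOW_RULES = [
    {"proto": "tcp", "port": 443, "src": "0.0.0.0/0"},   # HTTPS from anywhere
    {"proto": "tcp", "port": 22,  "src": "10.0.0.0/8"},  # SSH from internal only
]

def is_allowed(proto, port, src_ip):
    src = ipaddress.ip_address(src_ip)
    for rule in ALLOW_RULES:
        if (rule["proto"] == proto and rule["port"] == port
                and src in ipaddress.ip_network(rule["src"])):
            return True
    return False  # implicit deny: nothing matched, so the traffic is dropped

is_allowed("tcp", 443, "203.0.113.5")  # allowed: matches the HTTPS rule
is_allowed("tcp", 22, "203.0.113.5")   # denied: SSH only allowed from 10.0.0.0/8
```

Notice that a brand-new service on, say, port 8080 is blocked until someone deliberately adds a rule for it; that's the convenience trade-off the text describes.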
Another very important component of network security is monitoring and analyzing traffic on your
network. There are a couple of reasons why monitoring your network is so important. The first is that it
lets you establish a baseline of what your typical network traffic looks like. This is key, because in
order to know what unusual or potential attack traffic looks like, you need to know what normal traffic
looks like. You can do this through network traffic monitoring and log analysis. We'll dive deeper into
what network traffic monitoring is a bit later, but let's quickly summarize how logs can be helpful in
this context. Log analysis is the practice of collecting logs from different network devices, and
sometimes client devices on your network, then performing an automated analysis on them. This will
highlight potential intrusions, signs of malware infection, or atypical behavior. You'd want to analyze
things like firewall logs, authentication server logs, and application logs. As an IT support specialist,
you should pay close attention to any external-facing devices or services. They're subject to a lot more
potentially malicious traffic, which increases the risk of compromise. Analysis of logs would involve
looking for specific log messages of interest. With firewall logs, attempted connections to an internal
service from an untrusted source address may be worth investigating, and connections from the
internal network to known address ranges of botnet command-and-control servers could mean there's a
compromised machine on the network. As you learned in earlier courses of this program, log analysis
systems are a best practice for IT support specialists to utilize and implement. This is true for network
hardening, too.

73
Log analysis systems are configured using user-defined rules to match interesting or atypical log
entries. These can then be surfaced through an alerting system to let security engineers investigate the
alert. Part of this alerting process would also involve categorizing the alert based on the rule matched.
You'd also need to assign a priority, to facilitate the investigation and to permit better searching or
filtering. Alerts could take the form of sending an email or an SMS with information and a link to the
event that was detected. You could even wake someone up in the middle of the night if the event was
severe enough.
Normalizing log data is an important step, since logs from different devices and systems may not be
formatted in a common way. You might need to convert log components into a common format to
make analysis easier for analysts and rule-based detection systems; this also makes correlation analysis
easier. Correlation analysis is the process of taking log data from different systems and matching
events across those systems. So, if we see a suspicious connection in the firewall logs, coming from a
suspect source address to our authentication server, we might want to correlate that logged connection
with the log data of the authentication server. That would show us any authentication attempts made
by the suspicious client. This type of log analysis is also super important in investigating and
recreating the events that happened once a compromise is detected. This is usually called a post-fail
analysis, since it's investigating how a compromise happened after the breach is detected. Detailed
logging and analysis of logs would allow for a detailed reconstruction of the events that led to the
compromise. Hopefully, this will let the security team make appropriate changes to security systems to
prevent further attacks. It could also help determine the extent and severity of the compromise.
Detailed logging would also be able to show if further systems were compromised after the initial
breach. It would also tell us whether or not any data was stolen, and if it was, what that data was. One
popular and powerful log analysis system is Splunk, a very flexible and extensible log aggregation and
search system. Splunk can grab log data from a wide variety of systems and in a large number of
formats. It can also be configured to generate alerts, and allows for powerful visualization of activity
based on logged data. You can read more about Splunk in the supplementary readings noted at the end
of this lesson.
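As a toy illustration of correlation analysis, here are hypothetical, already-normalized log entries with simplified fields. We match firewall hits from a suspect address against authentication attempts from the same address within a time window:

```python
# Normalized firewall and authentication logs (invented sample data).
firewall_log = [
    {"time": 100, "src": "198.51.100.7", "action": "ACCEPT", "dst_port": 22},
    {"time": 130, "src": "10.0.0.12",    "action": "ACCEPT", "dst_port": 443},
]
auth_log = [
    {"time": 101, "src": "198.51.100.7", "user": "root",  "result": "FAIL"},
    {"time": 103, "src": "198.51.100.7", "user": "admin", "result": "FAIL"},
    {"time": 400, "src": "10.0.0.12",    "user": "carol", "result": "OK"},
]

def correlate(suspect_src, firewall_log, auth_log, window=60):
    """Return auth events from the suspect address near its firewall hits."""
    hits = [f for f in firewall_log if f["src"] == suspect_src]
    return [a for a in auth_log
            if a["src"] == suspect_src
            and any(abs(a["time"] - f["time"]) <= window for f in hits)]

events = correlate("198.51.100.7", firewall_log, auth_log)
# events now holds the failed root/admin login attempts tied to that connection
```

Normalizing the fields first (same key names, same timestamp format) is what makes a join like this possible across devices from different vendors.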
Flood guards provide protection against DoS, or denial-of-service, attacks. Think back to the CIA triad
we covered earlier: availability is an important tenet of security, and it's exactly what flood guard
protections are designed to help ensure. This works by identifying common flood attack types, like
SYN floods or UDP floods. It then triggers alerts once a configurable threshold of traffic is reached.
There's another threshold called the activation threshold. When this one is reached, it triggers a
pre-configured action. This will typically block the identified attack traffic for a specific amount of
time. This is usually a feature on enterprise-grade routers or firewalls, though it's a general security
concept. A common open source flood guard protection tool is fail2ban. It watches for signs of an
attack on a system, and blocks further attempts from a suspected attack address. fail2ban is a popular
tool for smaller-scale organizations. So, if you're the sole IT support specialist in your company or
have a small fleet of machines, this can be a helpful tool to use. This flood guard protection can also be
described as a form of intrusion prevention system, which we'll cover in more detail in another video.
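The two-threshold idea can be sketched like this. The threshold values and traffic counts are arbitrary examples, not defaults of any real product:

```python
from collections import Counter

ALERT_THRESHOLD = 100       # first threshold: raise an alert
ACTIVATION_THRESHOLD = 500  # activation threshold: take the pre-configured action

def evaluate(syn_counts_per_src):
    """Classify each source address by how many SYNs it sent this interval."""
    actions = {}
    for src, count in syn_counts_per_src.items():
        if count >= ACTIVATION_THRESHOLD:
            actions[src] = "block"   # e.g. drop its traffic for a set time
        elif count >= ALERT_THRESHOLD:
            actions[src] = "alert"   # notify, but don't intervene yet
        else:
            actions[src] = "ok"
    return actions

traffic = Counter({"198.51.100.7": 800, "203.0.113.9": 150, "10.0.0.5": 3})
result = evaluate(traffic)
# the heavy sender gets blocked, the moderate one only raises an alert
```

fail2ban works on the same principle, except its "counts" come from matching failure patterns in log files rather than counting packets.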

74
Network separation, or network segmentation, is a good security principle for an IT support specialist
to implement. It permits more flexible management of the network, and provides some security
benefits. This is the concept of using VLANs to create virtual networks for different device classes or
types. Think of it as creating dedicated virtual networks for your employees to use, but also having
separate networks for your printers to connect to. The idea here is that the printers won't need access to
the same network resources that employees do, so it probably doesn't make sense to have the printers
on the employee network. You might be wondering how employees are supposed to print if the
printers are on a different network. That's actually one of the benefits of network separation: we can
control and monitor the flow of traffic between networks more easily. To give employees access to
printers, we'd configure routing between the two networks on our routers. We'd also implement
network ACLs that permit the appropriate traffic.
Cisco IOS firewall rules @
https://www.cisco.com/c/en/us/td/docs/security/security_management/cisco_security_manager/security
_manager/4-1/user/guide/CSMUserGuide_wrapper/fwaccess.html
Juniper firewall rules @ https://www.juniper.net/documentation/en_US/junos/topics/usage-
guidelines/services-configuring-stateful-firewall-rules.html
Iptables firewall rules @ https://www.digitalocean.com/community/tutorials/iptables-essentials-
common-firewall-rules-and-commands
UFW firewall rules @ https://www.digitalocean.com/community/tutorials/ufw-essentials-common-
firewall-rules-and-commands
Configuring Mac OS X firewall @ https://support.apple.com/en-us/HT201642
Microsoft firewall rules @ https://docs.microsoft.com/en-us/previous-versions/windows/it-
pro/windows-server-2008-R2-and-2008/cc754274(v=ws.11)

Network Hardware Hardening


In this video, we'll cover some ways that an IT Support Specialist can implement network hardware
hardening. We talked about general network hardening, and now we're going to dive deeper into more
specific tools and techniques for hardening a network. We'll pay close attention to features and options
available on networking infrastructure hardware.

75
In an earlier lesson on networking, we explored DHCP. It's the protocol where devices on a network
are assigned critical configuration information for communicating on the network. You also learned
about configuring DHCP in another course of this program. So, you can see how DHCP is a target of
attackers because of the important nature of the service it provides. If an attacker can manage to deploy
a rogue DHCP server on your network, they could hand out DHCP leases with whatever information
they want. This includes setting a gateway address or DNS server that's actually a machine within their
control. This gives them access to your traffic and opens the door for future attacks. We call this type
of attack a rogue DHCP server attack. To protect against it, enterprise switches offer a feature called
DHCP snooping.
A switch that has DHCP snooping enabled will monitor DHCP traffic being sent across it. It will also
track IP assignments and map them to the hosts connected to its switch ports, essentially building a
map of assigned IP addresses to physical switch ports. This information can also be used to protect
against IP spoofing and ARP poisoning attacks. DHCP snooping also has you designate a trusted
DHCP server IP if the switch is operating as a DHCP helper and forwarding DHCP requests to the
server, or enable DHCP snooping trust on the uplink port, where legitimate DHCP responses would
come from. Any DHCP response coming from either an untrusted IP address or from a downlink
switch port would then be detected as untrusted and discarded by the switch.
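To make the idea concrete, here's a toy sketch, with made-up port names, of how a switch with DHCP snooping might treat server responses and build its binding table. Real switches implement this in hardware with far more detail:

```python
# Ports where legitimate DHCP server responses may arrive (the uplink).
TRUSTED_PORTS = {"uplink0"}
binding_table = {}   # access port -> (client MAC, leased IP), learned from ACKs

def on_dhcp_server_response(ingress_port, client_port, client_mac, leased_ip):
    """Decide whether to forward a DHCP server response seen by the switch."""
    if ingress_port not in TRUSTED_PORTS:
        # A "server" answering from an access port is a rogue DHCP server.
        return "discard"
    # Legitimate lease: record which MAC/IP lives behind which physical port.
    binding_table[client_port] = (client_mac, leased_ip)
    return "forward"

on_dhcp_server_response("uplink0", "port3", "aa:bb:cc:dd:ee:ff", "10.0.0.15")
# -> forwarded, and port3 is now bound to that MAC/IP
on_dhcp_server_response("port7", "port3", "aa:bb:cc:dd:ee:ff", "10.0.0.66")
# -> discarded: response came from an untrusted downlink port
```

The binding table this builds is exactly what Dynamic ARP Inspection and IP Source Guard, described next, consult.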
Let's talk about another form of network hardware hardening: Dynamic ARP Inspection. We covered
ARP earlier from the how-does-it-function standpoint. ARP allows for a layer 2 man-in-the-middle
attack because of the unauthenticated nature of ARP. It allows an attacker to forge an ARP response,
advertising its MAC address as the physical address matching a victim's IP address. This type of ARP
response is called a gratuitous ARP response, since it's effectively answering a query that no one made.
When this happens, all of the clients on the local network segment would cache this ARP entry.
Because of the forged ARP entry, they send frames intended for the victim's IP address to the
attacker's machine instead. The attacker could enable IP forwarding, which would let them
transparently monitor traffic intended for the victim. They could also manipulate or modify data.
Dynamic ARP Inspection, or DAI, is another feature on enterprise switches that prevents this type of
attack. It requires the use of DHCP snooping to establish a trusted binding of IP addresses to switch
ports. DAI will detect these forged gratuitous ARP packets and drop them. It can do this because it has
a table from

76
DHCP snooping that has the authoritative IP address assignments per port. DAI also enforces rate
limiting of ARP packets per port to prevent ARP scanning; an attacker is likely to ARP scan before
attempting the ARP attack.
To prevent IP spoofing attacks, IP source guard or IPSG can be enabled on enterprise switches along
with DHCP snooping. If you're an IT Support Specialist at a small company that uses enterprise-class
switch hardware, you'll probably utilize IPSG. It works by using the DHCP snooping table to
dynamically create ACLs for each switch port. This drops packets that don't match the IP address for
the port based on the DHCP snooping table.
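As an illustration (not any vendor's actual implementation), here's how checks like DAI and IPSG could consult a DHCP-snooping-style binding table. Port names and addresses are made up:

```python
# Authoritative bindings learned via DHCP snooping: port -> (MAC, leased IP).
binding_table = {"port3": ("aa:bb:cc:dd:ee:ff", "10.0.0.15")}

def inspect_arp(port, sender_mac, sender_ip):
    """DAI-style check: accept an ARP reply only if it matches the binding."""
    return binding_table.get(port) == (sender_mac, sender_ip)

def source_guard(port, src_ip):
    """IPSG-style check: drop packets whose source IP wasn't leased to this port."""
    binding = binding_table.get(port)
    return binding is not None and binding[1] == src_ip

inspect_arp("port3", "aa:bb:cc:dd:ee:ff", "10.0.0.15")  # matches the binding
inspect_arp("port9", "11:22:33:44:55:66", "10.0.0.15")  # forged gratuitous ARP
source_guard("port3", "10.0.0.99")                      # spoofed source address
```

In a real switch these checks translate into per-port ACLs applied in hardware, but the decision logic is this simple lookup.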
Now, if you really want to lock down your network, you can implement 802.1X. We've added details
about how to configure this in the supplementary reading but for now, let's discuss this at a high level.
It's important for an IT Support Specialist to be aware of 802.1X. This is the IEEE standard for
encapsulating EAP, or Extensible Authentication Protocol, traffic over 802 networks. This is also
called EAP over LAN, or EAPOL. It was originally designed for Ethernet, but support was added for
other network types like Wi-Fi and fiber networks. We won't go into the details of all the EAP
authentication types supported; there are about 100 compatible types, so it would take way too long.
But we'll take a closer look at EAP-TLS, since it's one of the more common and secure EAP methods.

When a client wants to authenticate to a network using 802.1X, there are three parties involved. The
client device is what we call the supplicant. It's sometimes also used to refer to the software running on
the client machine that handles the authentication process for the user. The open source Linux utility
wpa_supplicant is one of those. The supplicant communicates with the authenticator, which acts as a
sort of gatekeeper for the network. It requires clients to successfully authenticate to the network before
they're allowed to communicate with the network. This is usually an enterprise switch or an access
point in the case of wireless networks. It's important to call out that while the supplicant communicates
with the authenticator, it's not actually the authenticator that makes the authentication decision. The
authenticator acts like a go-between and forwards the authentication request to the authentication
server. That's where the actual credential verification and authentication occurs. The authentication
server is usually a RADIUS server. EAP-TLS is an authentication type supported by EAP that uses
TLS to provide mutual authentication of both the client and the authenticating server. This is
considered one of the more secure configurations for wireless security, so it's definitely possible that
you'll encounter this authentication type in your IT career. Like with many of these protocols,
understanding how it works can help you if you need to troubleshoot.

77
You might remember from Course 4 that HTTPS is a
combination of the Hypertext Transfer Protocol, HTTP, with
the SSL/TLS cryptographic protocols. When TLS is
implemented for HTTPS traffic, it specifies a client's
certificate as an optional factor of authentication.
Similarly, most EAP-TLS implementations require client-
side certificates. Authentication can be certificate-based,
which requires a client to present a valid certificate that's
signed by the authenticating CA, or a client can use a
certificate in conjunction with a username, password, and
even a second factor of authentication, like a one-time password.
The security of EAP-TLS stems from the inherent security that the TLS protocol and PKI provide. That
also means that the pitfalls are the same when it comes to properly managing PKI elements. You have
to safeguard private keys appropriately and ensure distribution of the CA certificate to client devices to
allow verification of the server side. An even more secure configuration for EAP-TLS would be to bind
the client-side certificates to the client platforms using TPMs. This would prevent theft of the
certificates from client machines. When you combine this with FDE, even theft of a computer would
prevent compromise of the network. We're covering a lot of complex processes right now, so feel free
to watch this video again so that the material really sinks in. If you're really interested in implementing
these processes yourself or want to dive into even more details about how it all works, check out the
supplementary readings for this lesson. Keep in mind, as an IT Support Specialist, you don't need to
know every single step-by-step detail here. Knowing what these processes are and how they work can
be very beneficial while troubleshooting and evaluating infrastructure security.
Link to supplementary reading for IEEE 802.1X: https://en.wikipedia.org/wiki/IEEE_802.1X
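While EAP-TLS itself is configured on the supplicant and RADIUS server, the same mutual-authentication idea shows up in ordinary TLS. The sketch below, using Python's standard `ssl` module, shows a server-side TLS context that demands a client certificate, the same way EAP-TLS requires client-side certificates. The certificate file names are hypothetical placeholders, which is why those lines are commented out.

```python
import ssl

# Server-side TLS context that, like EAP-TLS, demands a certificate from the client
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # connections without a valid client certificate are rejected

# Hypothetical file paths -- the CA certificate must be distributed so each side
# can verify the other, just as the text describes for EAP-TLS:
# ctx.load_cert_chain("server.pem", "server.key")  # the server's own certificate and key
# ctx.load_verify_locations("ca.pem")              # the CA that signed the client certificates
```

Setting `verify_mode` to `CERT_REQUIRED` is what turns one-way TLS into mutual authentication: the handshake fails unless the client presents a certificate the loaded CA can validate.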

Network Software Hardening


In the last lesson, we covered network hardware hardening security measures, which you should be aware of as an IT support specialist. Now, we're going to shift to network software hardening techniques. Just like with network hardware hardening, it's important for you to know how to implement network software hardening, which includes things like firewalls, proxies, and VPNs. These security software solutions will play an important role in securing networks and their traffic for your organization. Like we mentioned before, firewalls are critical to securing a network. They can be deployed as dedicated network infrastructure devices, which regulate the flow of traffic for a whole network. They can also be host-based, as software that runs on a client system, providing protection for that one host only. It's generally recommended to deploy both solutions.
A host-based firewall provides protection for mobile devices, such as a laptop that could be used in an untrusted, potentially malicious environment like an airport Wi-Fi hotspot. Host-based firewalls are also useful for protecting other hosts from being compromised by a corrupted device on the internal network. That's something a network-based firewall may not be able to help defend against.

78
You will almost definitely encounter host-based firewalls since all major operating systems have built
in ones today. It's also very likely that your company will have some kind of network-based firewall.
Your router at home even has a network-based firewall built in.
VPNs are also recommended to provide secure access to internal resources for mobile or roaming
users. We went over the details of VPNs and how they work in securing network traffic. If you need a
refresher, feel free to revisit that again. We won't go back over all the details, but here's a quick
rundown. VPNs are commonly used to provide secure remote access, and link two networks securely.
Let's say we have two offices located in buildings on opposite sides of town. We want to create one unified network that would let users in each location seamlessly connect to devices and services in either location. We could use a site-to-site VPN to link these two offices. To the people in the offices, everything would just work; they'd be able to connect to a service hosted in the other office without any specific configuration. Using a VPN tunnel, all traffic between the two offices can be secured using
encryption. This lets the two remote networks join each other seamlessly. This way, clients on one
network can access devices on the other without requiring them to individually connect to a VPN
service. Usually, the same infrastructure can be used to allow remote access VPN services for
individual clients that require access to internal resources while out of the office.
Proxies can be really useful to protect client devices and their traffic. They can also provide secure remote access without using a VPN. A standard web proxy can be configured for client devices, allowing web traffic to be proxied through a proxy server that we control for lots of purposes. This configuration can be used for logging the web requests of client devices. These logs can then be used for traffic analysis and forensic investigation. The proxy server can also be configured to block content that might be malicious, dangerous, or just against company policy.
A reverse proxy can be configured to allow secure remote access to web-based services without requiring a VPN. As an IT support specialist, you may need to configure or maintain a reverse proxy service as an alternative to a VPN. By configuring a reverse proxy at the edge of your network, connection requests coming from outside to services inside the network are intercepted by the reverse proxy. They're then forwarded on to the internal service, with the reverse proxy acting as a relay. This bridges communications between the remote client outside the network and the internal service. This proxy setup can be secured even more by requiring the use of client TLS certificates along with username and password authentication. Specific ACLs can also be configured on the reverse proxy to restrict access even more. Lots of popular proxy solutions support a reverse proxy configuration, like HAProxy, Nginx, and even the Apache web server. You can read more about these popular proxy solutions in the supplemental readings. Next up, let's take a practice quiz to secure the network architecture terms we've just discussed.
Supplementary reading: http://www.haproxy.org/#docs and http://cbonte.github.io/haproxy-dconv/1.8/intro.html#3.3.1

79
Secure Network Architecture
Question 1: Why is normalizing log data important in a centralized logging setup?
Uniformly formatted logs are easier to store and analyze.
Nice work! Logs from various systems may be formatted differently. Normalizing logs is the practice of reformatting
the logs into a common format, allowing for easier storage and lookups in a centralized logging system.

Question 2: What type of attacks does a flood guard protect against? Check all that apply.
DDoS attacks & SYN floods
You got it! A flood guard protects against attacks that overwhelm networking resources, like DoS attacks and SYN
floods.

Question 3: What does DHCP Snooping protect against?


Rogue DHCP server attacks
Good job! DHCP snooping is designed to guard against rogue DHCP attacks. The switch can be configured to
transmit DHCP responses only when they come from the DHCP server's port.

Question 4: What does Dynamic ARP Inspection protect against?


ARP poisoning attacks
That's exactly right! Dynamic ARP inspection protects against ARP poisoning attacks by watching for ARP packets. If
an ARP packet doesn't match the table of MAC address and IP address mappings generated by DHCP snooping, the
packet will be dropped as invalid or malicious.

Question 5: What does IP Source Guard protect against?


IP spoofing attacks
Correct, right on! IP Source Guard prevents an attacker from spoofing an IP address on the network. It does this by
matching assigned IP addresses to switch ports, and dropping unauthorized traffic.

Question 6: What does EAP-TLS use for mutual authentication of both the server and the client?
Digital certificates
Correct! The client and server both present digital certificates, which allows both sides to authenticate the other,
providing mutual authentication.

Question 7: Why is it recommended to use both network-based and host-based firewalls? Check all that apply.
For protection against compromised hosts on the same network & for protection for mobile devices, like
laptops.
Correct, nice job! Using both network- and host-based firewalls provides protection from external and internal threats.
This also protects hosts that move between trusted and untrusted networks, like mobile devices and laptops.

80
Wireless Security
WEP Encryption and Why You Shouldn’t Use it
In this lesson, we'll cover the best practices for implementing wireless security. As an IT support
specialist, you'll be responsible for WiFi configuration and infrastructure. So understanding the security
options available for wireless networks is super important to making sure that the best solution is
chosen.
We already covered the nuts and bolts of the wireless 802.11 protocol and explained how wireless
networks work, so we won't rehash that but we'll take a closer look at the security implementations
available to protect wireless networks. Before we jump into the nitty-gritty details of wireless security,
take a second and ask yourself this question, what do you think the best security option is for
securing a WiFi network?
It's okay if you're not sure, just keep this question in mind as we go over all the options available, along
with their benefits and drawbacks. Spoiler alert, there's some pretty technical security stuff coming
your way, so put your thinking caps on.
The first security protocol introduced for Wi-Fi networks was WEP or Wired Equivalent Privacy. It
was part of the original 802.11 standard introduced back in 1997. WEP was intended to provide privacy
on par with the wired network, that means the information passed over the network should be protected
from third parties eavesdropping. This was an important consideration when designing the wireless
specification.
Unlike wired networks, packets could be intercepted by anyone with physical proximity to the access
point or client station. Without some form of encryption to protect the packets, wireless traffic would
be readable by anyone nearby who wants to listen. WEP was proven to be seriously bad at providing
confidentiality or security for wireless networks. It was quickly deprecated in 2004 in favor of more
secure systems. Even so, we'll cover it here for historical purposes. I want to drive home the point that
no one should be using WEP anymore! You never know, you may see seriously outdated systems
when working as an IT support specialist. So it's important that you fully understand why WEP is
outdated and what you can do instead.
WEP used the RC4 symmetric stream cipher for encryption. It used either a 40-bit or 104-bit shared key, from which the encryption key for individual packets was derived. The actual encryption key for each packet was computed by taking the user-supplied shared key and joining it with a 24-bit initialization vector, or IV for short: a randomized bit of data meant to avoid reusing the same encryption key between packets. Since these bits of data are concatenated, or joined, a 40-bit shared key scheme uses a 64-bit key for encryption, and the 104-bit scheme uses a 128-bit key. Originally, WEP encryption was limited to 64-bit only because of US export restrictions placed on encryption technologies. Once those laws were changed, 128-bit encryption became available for use.

81
The shared key was entered as either 10 hexadecimal characters for 40-bit WEP, or 26 hex characters for 104-bit WEP, each hex character representing 4 bits. The key could also be specified by supplying 5 or 13 ASCII characters, each ASCII character representing 8 bits. But this actually reduces the available keyspace to only valid ASCII characters instead of all possible hex values. Since the shared key is a component of the actual encryption key, it must be exactly as many characters as the encryption scheme requires.
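To make the per-packet key construction concrete, here's a short Python sketch of how WEP built its encryption key by simply prepending the IV to the shared key. It uses a textbook RC4 implementation, since RC4 has long been removed from standard crypto libraries; this is for illustration only, never for protecting real traffic.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: run the key schedule, then XOR the keystream with the data."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation (PRGA), XORed in
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(shared_key: bytes, iv: bytes, payload: bytes) -> bytes:
    # WEP's per-packet key is just the 24-bit IV concatenated with the shared key
    return rc4(iv + shared_key, payload)

# Hypothetical 40-bit shared key plus a 3-byte IV makes a 64-bit RC4 key
ciphertext = wep_encrypt(b"\x01\x02\x03\x04\x05", b"\xaa\xbb\xcc", b"hello")
# RC4 is an XOR stream cipher, so applying it again with the same key decrypts
assert wep_encrypt(b"\x01\x02\x03\x04\x05", b"\xaa\xbb\xcc", ciphertext) == b"hello"
```

Notice how thin the "key derivation" is: the only per-packet variation comes from those 3 IV bytes, which is exactly the weakness discussed later in this lesson.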
WEP authentication originally supported two different modes, Open System authentication and
Shared Key authentication. The open system mode didn't require clients to supply credentials.
Instead, they were allowed to authenticate and associate with the access point but the access point
would begin communicating with the client encrypting data frames with the pre-shared WEP key. If the
client didn't have the key or had an incorrect key, it wouldn't be able to decrypt the frames coming from
the access point or AP. It also wouldn't be able to communicate back to the AP.
Shared key authentication worked by requiring clients to authenticate through a four-step challenge-response process. This basically has the AP asking the client to prove that it has the correct key. Here's how it works: the client sends an authentication request to the AP. The AP replies with a clear-text challenge, a bit of randomized data that the client is supposed to encrypt using the shared WEP key. The client replies to the AP with the resulting ciphertext from encrypting this challenge text. The AP verifies this by decrypting the response and checking it against the plain-text challenge text. If they match, a positive response is sent back.
Does anything jump out at you as potentially insecure in the scheme? We're transmitting both the plain
text and the ciphertext in a way that exposes both of these messages to potential eavesdroppers. This
opens the possibility for the encryption key to be recovered by the attacker.
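This eavesdropping problem can be demonstrated in a few lines of Python. Because WEP's stream cipher just XORs a keystream with the data, anyone who captures both the plain-text challenge and the encrypted response can XOR them together to recover the keystream, then answer future challenges without ever knowing the key. The random keystream below is a simplified stand-in for RC4's output.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(128)   # stand-in for the RC4 keystream derived from the WEP key

challenge = os.urandom(128)                   # AP's clear-text challenge, visible on the air
response = xor_bytes(challenge, keystream)    # client's encrypted reply, also visible

# The eavesdropper XORs the two captured messages to recover the keystream...
recovered = xor_bytes(challenge, response)

# ...and can now produce a valid response to any future challenge of this length
new_challenge = os.urandom(128)
forged = xor_bytes(new_challenge, recovered)
assert forged == xor_bytes(new_challenge, keystream)
```

This is why plain text and its ciphertext should never be exposed together: with an XOR stream cipher, the pair hands the attacker the keystream directly.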
A general concept in security and encryption is to never send the plain text and ciphertext together, so
that attackers can't work out the key used for encryption. But WEP's true weakness wasn't related to the
authentication schemes, its use of the RC4 stream cipher and how the IVs were used to generate
encryption keys led to WEP's ultimate downfall. The primary purpose of an IV is to introduce more
random elements into the encryption key to avoid reusing the same one. When using a stream cipher
like RC4, it's super important that an encryption key doesn't get reused. This would allow an attacker to
compare two messages encrypted using the same key and recover information. But the encryption key
in WEP is just made up of the shared key, which doesn't change frequently, with the 24 bits of the randomized IV tacked on to the end of it. This results in only a 24-bit pool from which unique encryption keys can be pulled. Since the IV is made up of 24 bits of data, the total number of possible values isn't very big by modern computing standards. That's only about 17 million possible unique IVs, which means that after roughly 5,000 packets, an IV is likely to be reused. When an
IV is reused, the encryption key is also reused. It's also important to call out that the IV is transmitted in
plain text. If it were encrypted, the receiver would not be able to decrypt it. This means an attacker just
has to keep track of IVs and watch for repeated ones. The actual attack that lets an attacker recover the
WEP key relies on weaknesses in some IVs and how the RC4 cipher generates a keystream used for
encrypting the data payloads. This lets the attacker reconstruct this keystream using packets encrypted
using the weak IVs. The full paper detailing the attack is available in the supplementary readings if you want to check it out.
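The "roughly 5,000 packets" figure comes from the birthday paradox: with N equally likely IVs, a repeat becomes more likely than not after about sqrt(2 * N * ln 2) samples. A quick calculation confirms the numbers in the text:

```python
import math

iv_pool = 2 ** 24   # 24-bit IV: 16,777,216 possible values, the "about 17 million"
# Birthday bound: ~50% chance of a repeated IV after sqrt(2 * N * ln 2) packets
expected_packets = math.sqrt(2 * iv_pool * math.log(2))
print(iv_pool)                   # 16777216
print(round(expected_packets))   # just under 5,000 packets
```

On a busy wireless network, a few thousand packets go by in seconds, so IV (and therefore key) reuse is effectively guaranteed almost immediately.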

82
You could also take a look at open source tools that demonstrate this attack in action, like Aircrack-ng
or AirSnort, they can recover a WEP key in a matter of minutes, it's kind of terrifying to think about. So
now you've heard the technical reasons why WEP is inherently vulnerable to attacks. In the next video,
we'll talk about the solution that replaced WEP. But before we get there, you might be asking yourself
why it's important to know WEP, since it's not recommended for use anymore. Well, as an IT support
specialist, you might encounter some cases where legacy hardware is still running WEP. It's important
to understand the security implications of using this broken security protocol so you can prioritize
upgrading away from WEP. All right, now let's dive into the replacement for WEP in the next video.

Let’s Get Rid of WEP! WPA/WPA2


The replacement for WEP from the Wi-Fi Alliance was WPA or Wi-Fi Protected Access. It was
introduced in 2003 as a temporary measure while the alliance finalized their specification for what
would become WPA2 introduced in 2004. WPA was designed as a short-term replacement that would
be compatible with older WEP-enabled hardware with a simple firmware update. This helped with user
adoption because it didn't require the purchase of new Wi-Fi hardware.
To address the shortcomings of WEP security, a new security protocol was introduced called TKIP or
the Temporal Key Integrity Protocol. TKIP implemented three new features that made it more secure
than WEP.
First, a more secure key derivation method was used to more securely incorporate the IV into the per-packet encryption key. Second, a sequence counter was implemented to prevent replay attacks by
rejecting out of order packets. Third, a 64-bit MIC or Message Integrity Check was introduced to
prevent forging, tampering, or corruption of packets.
TKIP still used the RC4 cipher as the underlying encryption algorithm, but it addressed the key generation weaknesses of WEP by using a key mixing function to generate unique encryption keys per packet. It also utilizes 256-bit-long keys. This key mixing function incorporates the long-lived Wi-Fi passphrase with the IV. This is different from the simplistic concatenation of the shared key and IV. Under WPA, the pre-shared key is the Wi-Fi password you share with people when they come over and want to use your wireless network. This is not directly used to encrypt traffic; it's used as a factor to derive the encryption key. The passphrase is fed into PBKDF2, or Password-Based Key Derivation Function 2, along with the Wi-Fi network's SSID as a salt. This is then run through the HMAC-SHA1 function 4,096 times to generate a unique encryption key. The SSID salt is incorporated to help defend against rainbow table attacks, and the 4,096 rounds of HMAC-SHA1 increase the computational power required for a brute-force attack.
I should call out that the pre-shared key can be entered using two different methods. A 64-character hexadecimal value can be entered and used directly as the key: 64 hexadecimal characters times 4 bits each is 256 bits.

83
The other option is to use the PBKDF2 function, but only when entering ASCII characters as a passphrase. In that case, the passphrase can be anywhere from 8 to 63 characters long.
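The derivation described above maps directly onto Python's standard library: PBKDF2 with HMAC-SHA1, the SSID as the salt, 4,096 iterations, and a 256-bit (32-byte) output. A short sketch, with made-up passphrase and SSID values:

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA/WPA2 pre-shared key from a passphrase and SSID."""
    # 4,096 rounds of HMAC-SHA1, with the SSID acting as the salt
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

psk = wpa_psk("correct horse battery staple", "ExampleSSID")  # hypothetical values
assert len(psk) == 32  # 256 bits

# The SSID salt means the same passphrase on a different network yields a
# completely different key, which is what frustrates generic rainbow tables
assert psk != wpa_psk("correct horse battery staple", "OtherSSID")
```

The iteration count is the deliberate speed bump: every brute-force guess has to pay for all 4,096 HMAC-SHA1 rounds before it can even be tested.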
WPA2 improved WPA security even more by implementing CCMP or Counter Mode CBC-MAC
Protocol. WPA2 is the best security for wireless networks currently available, so it's really important to
know as an IT support specialist. It's based on the AES cipher, finally getting away from the insecure
RC4 cipher. The key derivation process didn't change from WPA, and the pre-shared key requirements
are the same. Counter with CBC-MAC is a particular mode of operation for block ciphers. It allows for
authenticated encryption, meaning data is kept confidential, and is authenticated. This is accomplished
using an authenticate, then encrypt mechanism.
The CBC-MAC digest is computed first. Then, the resulting authentication code is encrypted along with the message using a block cipher, AES in this case, operating in counter mode. Counter mode turns a block cipher into a stream cipher by using a random seed value along with an incrementing counter to create a keystream to encrypt data with.
Now, let's walk through the Four-Way Handshake process
that authenticates clients to the network. I should call out,
that while you might not encounter this in your day to day work, it's good to have a grasp on how the
authentication process works. It will help you understand how WPA2 can be broken. This process also
generates the temporary encryption key that will be used to encrypt data for this client. This process is
called the Four-Way Handshake, since it's made up of four exchanges of data between the client and
AP. It's designed to allow an AP to confirm that the client has the correct pairwise master key, or pre-shared key in a WPA-PSK setup, without disclosing the PMK. The PMK is a long-lived key and might not change for a long time. So an encryption key is derived from the PMK that's used for the actual encryption and decryption of traffic between a client and AP. This key is called the Pairwise Transient Key, or PTK. The PTK is generated using the PMK, AP nonce, client nonce, AP MAC address, and client
MAC address. They're all concatenated together, and run through a function. The AP and Client
nonces are just random bits of data generated by each party and exchanged. The MAC addresses of
each party would be known through the packet headers already, and both parties should already have
the correct PMK. With this information, the PTK can be generated. This is different for every client to
allow for confidentiality between clients. The PTK is actually made up of five individual keys, each
with their own purpose. Two keys are used for encryption and confirmation of EAPoL packets, and the
encapsulating protocol carries these messages. Two keys are used for sending and receiving message
integrity codes. And finally, there's a temporal key, which is actually used to encrypt data. The AP will
also transmit the GTK or Groupwise Transient Key. It's encrypted using the EAPoL encryption key
contained in the PTK, which is used to encrypt multicast or broadcast traffic. Since this type of traffic
must be readable by all clients connected to an AP, this GTK is shared between all clients. It's updated
and re-transmitted periodically, and when a client disassociates the AP.
That's a lot to take in, so let's recap. The four messages exchanged, in order, are: the AP sends a nonce to the client; the client sends its nonce to the AP; the AP sends the GTK; and the client replies with an ACK confirming successful negotiation.
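The "concatenated together and run through a function" step can be sketched with the 802.11i pseudo-random function, which repeatedly applies HMAC-SHA1 to the sorted MAC addresses and nonces. This is a simplified illustration with dummy inputs, not a drop-in implementation:

```python
import hashlib
import hmac

def derive_ptk(pmk: bytes, anonce: bytes, snonce: bytes,
               ap_mac: bytes, client_mac: bytes) -> bytes:
    """Sketch of the 802.11i PRF-512: expand the PMK into a 64-byte PTK."""
    # Both sides sort the inputs the same way, so they derive an identical PTK
    data = (min(ap_mac, client_mac) + max(ap_mac, client_mac) +
            min(anonce, snonce) + max(anonce, snonce))
    out, counter = b"", 0
    while len(out) < 64:
        out += hmac.new(pmk, b"Pairwise key expansion" + b"\x00" + data +
                        bytes([counter]), hashlib.sha1).digest()
        counter += 1
    return out[:64]

ptk = derive_ptk(b"\x00" * 32, b"\x01" * 32, b"\x02" * 32,
                 b"\xaa" * 6, b"\xbb" * 6)  # dummy example inputs

# The five keys mentioned above are just slices of the PTK (TKIP layout):
kck, kek, tk = ptk[0:16], ptk[16:32], ptk[32:48]  # EAPoL confirm/encrypt keys, temporal key
mic_tx, mic_rx = ptk[48:56], ptk[56:64]           # message integrity code keys
```

Because the nonces change every handshake, each client session gets a fresh PTK even though the PMK stays the same.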

84
The WPA and WPA2 standards also introduced 802.1X authentication to Wi-Fi networks, usually called WPA2-Enterprise. The non-802.1X configurations are called either WPA2-Personal or WPA2-PSK, since they use a pre-shared key to authenticate clients. We won't rehash 802.1X here, since it operates similarly to 802.1X on wired networks, which we covered earlier. The only difference is that the AP acts as the authenticator in this case. The back-end RADIUS server is still the authentication server, and the PMK is generated using components of the EAP method chosen.
While not a security feature directly, WPS or Wi-Fi protected setup is a convenience feature designed
to make it easier for clients to join a WPA-PSK protected network. You might encounter WPS in a
small IT shop that uses commercial SOHO routers. It can be useful in these smaller environments to
make it easier to join wireless clients to wireless networks securely, but there are security implications to having it enabled that you should be aware of. The Wi-Fi Alliance introduced WPS in 2006. It provides several different methods that allow a wireless client to securely join a wireless network without having to directly enter the pre-shared key. This facilitates the use of very long and
secure passphrases without making it unnecessarily complicated. Can you imagine having to have your
less technically inclined friends and family enter a 63-character passphrase to use your Wi-Fi when
they come over? That probably wouldn't go so well. WPS simplifies this by allowing for secure
exchange of the SSID and pre-shared key. This is done after authenticating or exchanging data using
one of the four supported methods. WPS supports PIN entry authentication, NFC or USB for out of
band exchange of the network details, or push-button authentication.
You've probably seen the push-button mechanism. It's typically a small button somewhere on the home
router with two arrows pointing counter-clockwise. The push-button mechanism works by requiring a
button to be pressed on both the AP side and the client side. This requires physical proximity and a
short window of time that the client can authenticate with a button press of its own.
The NFC and USB methods just provide a different channel to transmit the details to join the network.
The PIN methods are really interesting, and also where a critical flaw was introduced. The PIN authentication mechanism supports two modes. In one mode, the client generates a PIN which is then entered into the AP; in the other mode, the AP has a PIN, typically hard-coded into the firmware, which is entered into the client. It's the second mode that is vulnerable to an online brute-force attack. Feel
free to dive deep into this by reading more about it in the supplementary readings. The PIN
authentication method uses PINs that are eight digits long, but the last digit is a checksum that's computed from the first seven digits. This makes the total number of possible PINs 10 to the seventh power, or 10 million possibilities. But the PIN is authenticated by the AP in halves. This means
the client will send the first four digits to the AP, wait for a positive or negative response, and then send
the second half of the PIN if the first half was correct. Did you see anything wrong with this scenario?
We're actually reducing the total possible valid PINs even more and making it even easier to guess
what the correct PIN is. The first half of the PIN being four digits has about 10,000 possibilities. The
second half, only three digits because of the checksum value, has a maximum of only 1,000
possibilities. This means the correct PIN can be guessed in a maximum of 11,000 tries. It sounds like a
lot, but it really isn't. Without any rate limiting, an attacker could recover the PIN and the pre-shared
key in less than four hours. In response to this, the Wi-Fi Alliance revised the requirements for the WPS
specification, introducing a lockout period of one minute after three incorrect PIN attempts. This
increases the maximum time to guess the PIN from four hours to less than three days. That's easily in
the realm of possibility for a determined and patient attacker, but it gets worse. If your network is
compromised using this attack because the PIN is an unchanging element that's part of the AP
configuration, the attacker could just reuse the already recovered WPS PIN to get the new password.

85
This would happen even if you detected unauthorized wireless clients on your network and changed
your Wi-Fi password.
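The arithmetic behind the 11,000-guess figure, along with the checksum digit, can be sketched in a few lines of Python. The checksum routine follows the digit-weighting scheme from the WPS specification:

```python
def wps_checksum(pin7: int) -> int:
    """Compute the 8th (checksum) digit from the first seven PIN digits."""
    accum, t = 0, pin7
    while t:
        accum += 3 * (t % 10)  # alternate digits weighted by 3
        t //= 10
        accum += t % 10        # remaining digits weighted by 1
        t //= 10
    return (10 - accum % 10) % 10

def full_pin(pin7: int) -> str:
    return f"{pin7:07d}{wps_checksum(pin7)}"

# Because the AP validates the PIN in two halves, the search space collapses:
first_half = 10 ** 4    # digits 1-4, confirmed independently by the AP
second_half = 10 ** 3   # digits 5-7 (digit 8 is the checksum, so it's free)
print(first_half + second_half)  # 11000 guesses at most
print(full_pin(1234567))         # 12345670
```

Compare 11,000 guesses against the 10 million an attacker would face if the AP only accepted or rejected the whole PIN at once: validating in halves throws away almost all of the PIN's strength.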
WPA2 is a really robust security protocol. It's built using best in class mechanisms to prevent attacks
and ensure the confidentiality of the data it's protecting. Even so, it's susceptible to some forms of
attack.
The four-way authentication handshake that we covered earlier is actually susceptible to an offline brute-force attack. If an attacker can manage to capture the four-way handshake process, just four packets, they can begin guessing the pre-shared key or PMK. They can take the nonces and MAC addresses from the four-way handshake packets and compute PTKs. Since the message authentication code secret keys are included as part of the PTK, a correct PMK guess would yield a PTK that successfully validates a MIC.
This is a brute force or dictionary-based attack, so it's dependent on the
quality of the password guesses. It does require a fair amount of computational power to calculate the
PMK from the passphrase guesses and SSID values. But the bulk of the computational requirements lie
in the PMK computation. This requires 4096 iterations of a hashing function, which can be massively
accelerated through the use of GPU-accelerated computation and cloud computing resources. Because the bulk of the computation lies in deriving the PMK from the password guess and the SSID, it's possible to pre-compute PMKs in bulk for common SSID and password combinations. This reduces the computational requirements to deriving the PTK from the unique session elements. These pre-computed sets are referred to as rainbow tables, and exactly this has been
done. Rainbow tables are available for download for the top 1000 most commonly seen SSIDs and 1
million passwords.

86
Wireless Hardening
Now that we've covered the security options available for protecting wireless networks, what do you
think the most secure option would be?
In an ideal world, we'd all be protecting our wireless networks using 802.1X with EAP-TLS. It offers
arguably the best security available, assuming proper and secure handling of the PKI aspects of it. But,
this option also requires a ton of added complexity and overhead. This is because it requires the use of
a RADIUS server and an additional authentication back-end at a minimum. If EAP-TLS is implemented,
then all the public key infrastructure components will also be necessary. This adds even more
complexity and management overhead. Not only do you have to securely deploy PKI on the back-end
for certificate management, but a system must be in place to sign the client's certificates. You also have
to distribute them to each client that would be authenticating to the network. This is usually more
overhead than many companies are willing to take on, because of the security versus convenience
trade-off involved.
If 802.1X is too complicated for a company, the next best alternative would be WPA2 with
AES/CCMP mode. But to protect against brute force or rainbow table attacks, we should take some
steps to raise the computational bar. A long and complex passphrase that wouldn't be found in a
dictionary would increase the amount of time and resources an attacker would need to break the
passphrase. Changing the SSID to something uncommon and unique, would also make rainbow tables
attack less likely. It would require an attacker to do the computations themselves, increasing the time
and resources required to pull off an attack. When using a long and complex Wi-Fi password, you
might be tempted to use WPS to join clients to the network. But we saw earlier that this might not be a
good idea from a security perspective. In practice, you won't see WPS enabled in an enterprise
environment, because it's a consumer-oriented technology. If your company values security over
convenience, you should make sure that WPS isn't enabled on your APs. Make sure this feature is disabled in your AP's management console. You might also want to verify the feature is actually disabled using a tool like Wash, which scans and enumerates APs that have WPS enabled. This independent verification is recommended, since some router manufacturers don't allow you to disable it. In some cases, disabling the feature through the management console doesn't actually disable the
feature. Ready for another quiz? Don't worry, it's just a practice one, to help make sure you're getting
all this wireless info wired into your brain.

87
Quiz: Wireless Security
Question 1: What are some of the weaknesses of the WEP scheme? Check all that apply.
Its poor key generation methods, its small IV pool size & its use of the RC4 stream cipher
Correct, you nailed it! The RC4 stream cipher had a number of design flaws and weaknesses. WEP also
used a small IV value, causing frequent IV reuse. Lastly, the way that the encryption keys were
generated was insecure.

Question 2: What symmetric encryption algorithm does WPA2 use?


AES (Advanced Encryption Standard)
Correct, great work! WPA2 uses CCMP. This utilizes AES in counter mode, which turns a block cipher
into a stream cipher.

Question 3: How can you reduce the likelihood of WPS brute-force attacks? Check all that apply.
Implement lockout periods for incorrect attempts & disable WPS.
Correct! Ideally, you should disable WPS entirely if you can. If you need to use it, then you should use
a lockout period to block connection attempts after a number of incorrect ones.

Question 4: Select the most secure WiFi security configuration from below:
WPA2 enterprise
Correct, exactly right! WPA2 Enterprise would offer the highest level of security for a WiFi network. It
offers the best encryption options for protecting data from eavesdropping third parties, and does not
suffer from the manageability or authentication issues that WPA2 Personal has with a shared key
mechanism. WPA2 Enterprise used with TLS certificates for authentication is one of the best solutions
available.

88
Network Monitoring
Sniffing the Network
Now, in order to monitor what type of traffic is on your network, you need a mechanism to capture
packets from network traffic for analysis and potential logging. Packet sniffing, or packet capture, is
a process of intercepting network packets in their entirety for analysis. It's an invaluable tool for IT
support specialists to troubleshoot issues. There are lots of tools that make this really easy to do. Before
we dive into the details of how to use them, let's cover some basic concepts of Packet Sniffing.
By default, network interfaces and the networking software stack on an OS behave like well-mannered
citizens: an interface only accepts and processes packets that are addressed to its specific interface
address, usually identified by a MAC address. If a packet with a different destination address is
encountered, the interface will just drop it. But if we wanted to capture all packets that an
interface is able to see, like when we're monitoring all network traffic on a network segment, this
behavior would be a pain for us. To override it, we can place the interface into what's called
Promiscuous Mode (a computer networking operational mode in which a network adapter accepts and
passes up all packets it sees, regardless of destination address). This is a special
mode for Ethernet network interfaces that basically says, "Give me all the packets." Instead of only
accepting and handling packets destined for its address, it will now accept and process any packet that
it sees. This is much more useful for network analysis or monitoring purposes. I should also call out
that admin or root privileges are needed to place an interface into promiscuous mode and to begin to
capture packets. Details for various platforms on how to get into promiscuous mode can be found in the
supplemental reading section. Many packet capture tools will handle this for you too. Another super
important thing to consider when you perform packet captures is whether you have access to the traffic
you'd like to capture and monitor.
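As a sketch of what this looks like in practice, on a modern Linux system you can toggle promiscuous mode with the ip tool. The interface name eth0 here is just an example; check your own interface names first with ip link:

```shell
# Enable promiscuous mode on eth0 (requires root privileges)
sudo ip link set dev eth0 promisc on

# Verify: the PROMISC flag should now appear in the interface flags
ip link show eth0

# Turn it back off when you're done capturing
sudo ip link set dev eth0 promisc off
```

Capture tools like tcpdump will usually do this for you automatically when they start listening on an interface.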
Let's say you wanted to analyze all traffic between hosts connected to a switch, and your machine is
also connected to a port on the switch. What traffic would you be able to see in this case? Because this
is a switch, the only traffic you'd be able to capture would be traffic from your host or destined for
your host. That's not very useful in letting you analyze other hosts' traffic.
If the packets aren't going to be sent to your interface in the first place, promiscuous mode won't help
you see them. But if your machine was inserted between the uplink port of the switch and the uplink
device further upstream, you'd now have access to all packets in and out of that local network segment.
Enterprise managed switches usually have a feature called Port Mirroring, which helps with this type
of scenario.
Port Mirroring allows the switch to take all packets from a specified port, port range, or an entire
VLAN and mirror them to a specified switch port. This lets you gain access to all packets passing
through the switch in a more convenient and secure way.
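On a managed switch, setting this up is a short configuration change. As a hypothetical example, a SPAN (port mirroring) session on a Cisco IOS switch might look like the following; the session number and interface names are illustrative, not from the course:

```
! Mirror all traffic on Gi0/1 to Gi0/24, where the capture host is plugged in
monitor session 1 source interface GigabitEthernet0/1
monitor session 1 destination interface GigabitEthernet0/24
```

Other vendors have equivalent features under names like "port mirroring" or "SPAN"; check your switch's documentation for the exact syntax.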

89
There's another handy, though less advanced, way to get access to packets in a switched network
environment. You can insert a hub into the topology, with the device or devices you'd like to monitor,
along with your monitoring machine, connected to the hub. Hubs are a quick and dirty way
of getting packets mirrored to your capture interface. They obviously have drawbacks, though, like
reduced throughput and the potential for introducing collisions.
If you're capturing packets from a wireless network, the process is slightly different. Promiscuous
mode applied to a wireless device would allow the wireless client to receive and process packets from
the network it's associated with that are destined for other clients. But if we wanted to capture and
analyze all wireless traffic that we're able to receive in the immediate area, we can place our wireless
interface into a mode called monitor mode.
Monitor mode allows us to scan across channels to see all wireless traffic being sent by APs and
clients. It doesn't matter what networks the traffic is intended for, and it doesn't require the client
device to be associated or connected to any wireless network. To capture wireless traffic, all you need is an
interface placed into monitor mode. Just like enabling promiscuous mode, this can be done with a
simple command, but usually the tools used for wireless packet captures can handle the enabling and
disabling of the mode for you. You need to be near enough to the AP and client to receive a signal, and
then you can begin capturing traffic right out of the air. There are a number of open source wireless
capture and monitoring utilities, like Aircrack-ng and Kismet.
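As a sketch, on Linux you might put a card into monitor mode with the iw tool, or let a helper from the aircrack-ng suite do it for you. The interface name wlan0 is an assumption, and not every wireless chipset or driver supports monitor mode:

```shell
# Manually, using iw (requires root)
sudo ip link set wlan0 down
sudo iw dev wlan0 set type monitor
sudo ip link set wlan0 up

# Or with aircrack-ng's helper, which typically creates a monitor
# interface such as wlan0mon for you
sudo airmon-ng start wlan0
```

Remember to switch the interface back to managed mode (or run airmon-ng stop) when you're done, or it won't associate with networks normally.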
It's important to call out that if a wireless network is encrypted, you can still capture the packets, but
you won't be able to decode the traffic payloads without knowing the password for the wireless
network.
So, now we're able to get access to some traffic we'd like to monitor. What do we do next? We need
tools to help us actually do the capture and the analysis. We'll learn more about those in the next lesson.

Wireshark and TCPDump


Tcpdump is a super popular, lightweight command-line based utility that you can use to capture and
analyze packets. Tcpdump uses the open source libpcap library. That's a very popular packet capture
library that's used in a lot of packet capture and analysis tools.
Tcpdump also supports writing packet captures to a file for later analysis, sharing, or replaying traffic.
It also supports reading packet captures back from a file. Tcpdump's default operating mode is to
provide a brief packet analysis. It converts key information from layers three and up into human
readable formats, then it prints information about each packet to standard out, or directly into your
terminal. It does things like converting the source and destination IP addresses into the dotted quad
format we're most used to and it shows the port numbers being used by the communications. Let's
quickly walk through the output of a sample tcpdump.

90
The first bit of information is fairly straightforward. It's a timestamp that represents when the packet on
this line was processed by the kernel, in local time. Next the layer three protocol is identified, in this
case, it's IPv4. After this, the connection quad is shown. This is the source address, source port,
destination address, and destination port. Next come the TCP flags and the TCP sequence number set
on the packet, if there are any. This is followed by the ack number, TCP window size, then TCP
options, if any are set. Finally, we have the payload size in bytes. Remember these from a few lessons ago, when
we covered networking? Tcpdump allows us to actually inspect these values from packets directly.
I want to call out that tcpdump, by default, will attempt to resolve host addresses to hostnames. It'll also
replace port numbers with the names of commonly associated services that use those ports. You can
override this behavior with the -n flag.
It's also possible to view the actual raw data that makes up the packet, represented as hexadecimal
digits, by using the -x flag, or the capital -X flag if you want the hex along with an ASCII
interpretation of the data.
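Putting those flags together, a capture might look like the following. The interface name eth0 and the port are example values, and running tcpdump requires root:

```shell
# Numeric addresses and ports only (-n), hex dump of each packet (-x)
sudo tcpdump -i eth0 -n -x port 80

# Same, but with hex plus an ASCII interpretation alongside (-X)
sudo tcpdump -i eth0 -n -X port 80
```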
Remember that packets are just collections of data, or groupings of ones and zeros. They represent
information depending on the values of this data, and where they appear in the data stream. Think back
to packet headers, and how those are structured and formatted. The view tcpdump gives us lets us see
the data that fits into the various fields that make up the headers for layers in a packet.
Wireshark is another packet capture and analysis tool that you can use, but it's way more powerful
when it comes to application and packet analysis, compared to tcpdump. It's a graphical utility that also
uses the libpcap library for capture and interpretation of packets, but it's way more extensible when it
comes to protocol and application analysis. While tcpdump can do basic analysis of some types of
traffic, like DNS queries and answers, Wireshark can do way more. Wireshark can decode encrypted
payloads if the encryption key is known. It can identify and extract data payloads from file transfers
through protocols like SMB or HTTP. Wireshark's understanding of application-level protocols even
extends to its filter strings. This allows filter rules like finding HTTP requests with specific strings in
the URL, which would look like: http.request.uri matches "q=wireshark". That filter string would
locate packets in our capture that contain a URL request with the specified string in it; in this
case, it would match a query parameter from a URL searching for "wireshark". While this could be
done using tcpdump, it's much easier using Wireshark.
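To give a feel for the display filter language, here are a few more filter strings in the same flavor. The parenthetical notes are descriptions, not part of the filter, and the addresses and names are made-up examples:

```
http.request.uri matches "q=wireshark"       (HTTP requests with that string in the URL)
dns.qry.name == "example.com"                (DNS queries for a specific name)
tcp.analysis.retransmission                  (TCP segments Wireshark flags as retransmissions)
ip.addr == 192.168.1.10 && tcp.port == 80    (traffic for one host on one port)
```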
Let's take a quick look at the Wireshark interface, which is divided into thirds.
The list of packets is up top, followed by a layered representation of the packet selected from the list.
Lastly, the hex and ASCII representations of the selected packet are at the bottom. The packet list view
is color-coded to distinguish between different types of traffic in the capture. The color coding is user-
configurable; the defaults are green for TCP packets, light blue for UDP traffic, and dark blue for DNS
traffic. Black highlights problematic TCP packets, like out-of-order or repeated packets. Above the
packet list pane is a display filter box, which allows complex filtering of the packets to be shown. This
is different from capture filters, which follow the libpcap standard, along with tcpdump. Wireshark's
deep understanding of protocols allows filtering by protocols, along with their specific fields. Since
there are over 2,000 protocols supported by Wireshark, we won't cover them in detail. You may want to
take a look at the supplementary readings, which show the broad range of protocols understood by
Wireshark. Not only does Wireshark have very handy protocol handling and filtering, it also
understands and can follow TCP streams or sessions. This lets you quickly reassemble and view both
sides of a TCP session, so you can easily view the full two-way exchange of information between parties.

91
Some other neat features of Wireshark are its ability to decode WPA- and WEP-encrypted wireless
packets if the passphrase is known, and its ability to view Bluetooth traffic (with the right hardware),
along with USB traffic and other protocols like Zigbee. It also supports file carving, or extracting data
payloads from files transferred over unencrypted protocols, like HTTP or FTP file transfers. And it's
able to extract audio streams from unencrypted VoIP traffic. So basically, Wireshark is awesome.
You might be wondering how packet capture and analysis fit into security at this point. Like log
analysis, traffic analysis is an important part of network security. Traffic analysis is done using
packet captures and packet analysis. Traffic on a network is basically a flow of packets. Being able
to capture and inspect those packets is important to understanding what type of traffic is flowing on the
networks we'd like to protect.
Supplemental Reading for Promiscuous Mode:
Promiscuous Mode on Linux @ https://it.awroblew.biz/linux-how-to-checkenable-promiscuous-mode/
Enabling promiscuous mode on Mac OS X @ https://danielmiessler.com/blog/entering-promiscuous-mode-os-x/
Enabling promiscuous mode on Windows @ http://lifeofageekadmin.com/how-to-manually-change-your-nic-to-promiscuous-mode-on-windows-72008-r2/

92
Intrusion Detection/Prevention Systems
We covered Packet Capture and Analysis, which is related to our next topic, Intrusion Detection and
Prevention Systems or IDS/IPS. IDS or IPS systems operate by monitoring network traffic and
analyzing it.
As an IT support specialist, you may need to support the underlying platform that the IDS/IPS runs on.
You might also need to maintain the system itself, ensuring that rules are updated, and you may even
need to respond to alerts. So, what exactly do IDS and IPS systems do?
They look for matching behavior or characteristics that would indicate malicious traffic. The difference
between an IDS and an IPS system, is that IDS is only a detection system. It won't take action to block
or prevent an attack, when one is detected, it will only log an alert. But an IPS system can adjust
firewall rules on the fly, to block or drop the malicious traffic when it's detected.
IDS and IPS systems can either be host based or network based.
In the case of a Network Intrusion Detection System or NIDS, the detection system would be
deployed somewhere on a network where it can monitor traffic for a network segment or subnet.
A host-based intrusion detection system would be software deployed on a host that monitors traffic
to and from that host only. It may also monitor system files for unauthorized changes.
NIDS systems resemble firewalls in a lot of ways, but a firewall is designed to prevent intrusions by
blocking potentially malicious traffic coming from outside and enforcing ACLs between networks.
NIDS systems are meant to detect and alert on potential malicious activity coming from within the
network. Plus, firewalls only have visibility of traffic flowing between the networks they've been set up
to protect. They generally wouldn't have visibility of traffic between hosts inside the network. So, the
location of the NIDS must be considered carefully when you deploy a system. It needs to be located in
the network topology in a way that gives it access to the traffic we'd like to monitor.
A good way to get access to network traffic is to use the port mirroring functionality found
in many enterprise switches. This allows all packets on a port, port range, or entire VLAN to be
mirrored to another port, where the NIDS host would be connected. With this configuration, our NIDS
machine would be able to see all packets flowing in and out of hosts on the switch segment. This lets us
monitor host-to-host communications and traffic from hosts to external networks, like the internet. The
NIDS host would analyze this traffic by enabling promiscuous mode on the analysis port. This is the
network interface that's connected to the mirror port on our switch, so it can see all packets being
passed and perform an analysis on the traffic. Since this interface is used for receiving mirrored
packets from the network we'd like to monitor, a NIDS host must have at least two network interfaces:
one for monitoring and analysis, and a separate one for connecting to our network for
management and administrative purposes. Some popular NIDS or NIPS systems are Snort, Suricata, and
Bro NIDS, which you can read more about in the supplementary readings.

93
The placement of a NIPS, or Network Intrusion Prevention System, differs from that of a NIDS.
This is because a prevention system must be able to take action against suspected malicious
traffic. In order for a NIPS device to block or drop traffic from a detected threat, it must be placed in
line with the traffic being monitored. This means that the traffic being monitored must pass
through the NIPS device. If this weren't the case, the NIPS host wouldn't be able to take action on
suspected traffic.
Think of it this way, a NIDS device is a passive observer that only watches the traffic, and sends an
alert if it sees something. This is unlike a NIPS device, which not only monitors traffic, but can take
action on the traffic it's monitoring, usually by blocking or dropping the traffic.
The detection of threats or malicious traffic is usually handled through signature based detection,
similar to how antivirus software detects malware. As an IT Support Specialist, you might be in charge
of maintaining the IDS or IPS setup, which would include ensuring that rules and signatures are up to
date.
Signatures are unique characteristics of known malicious traffic. They might be specific sequences of
packets, or packets with certain values encoded in a specific header field. This allows intrusion
detection and prevention systems to easily and quickly recognize known bad traffic from sources like
botnets, worms, and other common attack vectors on the internet.
But, similar to antivirus, less common or targeted attacks might not be detected by a signature-based
system, since there might not be signatures developed for those cases. So, it's also possible to create
custom rules to match traffic that might be considered suspicious, but not necessarily malicious. This
would allow investigators to look into the traffic in more detail to determine how bad it is. If the traffic
is found to be malicious, a signature can be developed from the traffic and incorporated into the system.
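As an illustration of what such a rule could look like in Snort (one of the systems mentioned above), the rule below alerts on inbound TCP traffic to port 4444, a port often associated with reverse shells. The message text, port, and sid are hypothetical example values, not from the course:

```
alert tcp any any -> $HOME_NET 4444 (msg:"Suspicious traffic to port 4444"; sid:1000001; rev:1;)
```

Custom rules like this usually start out as alert-only so investigators can review the matches before anything is blocked.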
What actually happens when a NIDS system detects something malicious? This is configurable, but
usually the NIDS system would log the detection event along with a full packet capture of the
malicious traffic. An alert would also usually be triggered to notify the investigating team to look into
that detected traffic. Depending on the severity of the event, the alert may just email a group, or create
a ticket to follow up on, or it might page someone in the middle of the night if it's determined to be a
really high severity and urgent. These alerts would usually also include reference information linking to
a known vulnerability, or some more information about the nature of the alert to help the investigator
look into the event.
Well, we covered a lot of ground on securing your networks. I hope you feel secure enough to move on.
If not, you can review any of these concepts that we've talked about. Once you've done that, it's time
for a peer review assessment, to give you some hands on experience with packet sniffing and analysis.
When you're finished, I'll see you in the next video, where we'll cover defence in depth.

94
Quiz: Network Monitoring
Question 1: What does tcpdump do? Select all that apply.
Analyzes packets and provides a textual analysis. Captures packets
Correct! Tcpdump is a popular, lightweight command line tool for capturing packets and analyzing
network traffic.

Question 2: What does wireshark do differently from tcpdump? Check all that apply.
It has a graphical interface. Also, it understands more application-level protocols.
Awesome job! tcpdump is a command line utility, while wireshark has a powerful graphical interface.
While tcpdump understands some application-layer protocols, wireshark expands on this with a much
larger complement of protocols understood.

Question 3: What factors should you consider when designing an IDS installation? Check all that apply.
Storage capacity. Also, traffic bandwidth.
Wohoo! It's important to understand the amount of traffic the IDS would be analyzing. This ensures
that the IDS system is capable of keeping up with the volume of traffic. Storage capacity is important
to consider for logs and packet capture retention reasons.

Question 4: What is the difference between an Intrusion Detection System and an Intrusion Prevention
System?
An IDS can alert on detected attack traffic, but an IPS can actively block attack traffic.
Correct, that's exactly right! An IDS only detects intrusions or attacks, while an IPS can make changes
to firewall rules to actively drop or block detected attack traffic.

Question 5: What factors would limit your ability to capture packets? Check all that apply.
Network interface not being in promiscuous or monitor mode. Also, access to the traffic in
question
Correct, you got it! If your NIC isn't in monitor or promiscuous mode, it'll only capture packets sent by
and sent to your host. In order to capture traffic, you need to be able to access the packets. So, being
connected to a switch wouldn't allow you to capture other clients' traffic.

Chapter 4 Questions and Answers


Question 1: What traffic would an implicit deny firewall rule block?
Everything not allowed
You got it! Implicit deny means that everything is blocked, unless it's explicitly allowed.

95
Question 2: The process of converting log entry fields into a standard format is called _______.
Log normalization
That's correct! Normalizing logs is the process of ensuring that all log fields are in a standardized format for analysis
and search purposes.

Question 3: A ______ can protect your network from DoS attacks.


Flood Guard
Correct! Flood guards provide protection from DoS attacks by blocking common flood attack traffic when it's
detected.

Question 4: Using different VLANs for different network devices is an example of _______.
Network Separation
Correct! Using VLANs to keep different types of devices on different networks is an example of network separation.

Question 5: How do you protect against rogue DHCP server attacks?


DHCP Snooping
Nice job! DHCP snooping prevents rogue DHCP server attacks. It does this by creating a mapping of IP addresses to
switch ports and keeping track of authoritative DHCP servers.

Question 6: What does Dynamic ARP Inspection protect against?


ARP Man-in-the-middle attacks
Great work! Dynamic ARP Inspection will watch for forged gratuitous ARP packets that don't correspond to the
known mappings of IP addresses and MAC address, and drop the fake packets.

Question 7: What kind of attack does IP Source Guard protect against?


IP Spoofing attacks
Correct, you nailed it! IP Source Guard protects against IP spoofing. It does this by dynamically generating ACLs for
each switch port, only permitting traffic for the mapped IP address for that port.
Question 8: A reverse proxy is different from a proxy because a reverse proxy provides ______.
Remote Access
Correct! A reverse proxy can be used to allow remote access into a network.

Question 9: What underlying symmetric encryption cipher does WEP use?


RC4
Correct, awesome! WEP uses the RC4 stream cipher.

96
Question 10: What key lengths does WEP encryption support? Check all that apply.
64-bit & 128-bit
Nice! WEP supports 64-bit and 128-bit encryption keys.

Question 11: What's the recommended way to protect a WPA2 network? Check all that apply.
Use a unique SSID. Use a long, complex passphrase
That's exactly right! Because the SSID is used as a salt, it should be something unique to protect against rainbow table
attacks. A long, complex password will protect against brute-force attacks.

Question 12: If you're connected to a switch and your NIC is in promiscuous mode, what traffic would you be able to
capture? Check all that apply.
Broadcast traffic. Traffic to and from your machine
Great job! Since you're connected to a switch, you'd only see packets that are sent to your switch port, meaning traffic
to or from your machine or broadcast packets.

Question 13: What could you use to sniff traffic on a switch?


Port Mirroring
Correct, yes! Port mirroring allows you to capture traffic on a switch port transparently, by sending a copy of traffic
on the port to another port of your choosing.

Question 14: What does tcpdump do?


Performs packet capture and analysis
Right on! tcpdump captures and analyzes packets for you, interpreting the binary information contained in the packets
and converting it into a human-readable format.

Question 15: Compared to tcpdump, wireshark has a much wider range of supported _______.
Protocols
Yep! Wireshark supports a very wide range of various networking protocols.

Question 16: A Network Intrusion Detection System watches for potentially malicious traffic and __ when it detects
an attack.
Triggers alerts
Correct! A NIDS only alerts when it detects a potential attack.

Question 17: What does a Network Intrusion Prevention System do when it detects an attack?
It blocks the traffic.
Exactly! An NIPS would make adjustments to firewall rules on the fly, and drop any malicious traffic detected.

97
Introducing TCPdump (Lab)
In this lab, you’ll be introduced to tcpdump and some of its features. Tcpdump is the premier network
analysis tool for information security and networking professionals. As an IT Support Specialist,
having a solid grasp of this application is essential if you want to understand TCP/IP. Tcpdump will
help you display network traffic in a way that’s easier to analyze and troubleshoot.
What you’ll do:
• Command basics -You'll learn how to use tcpdump and what some of its flags do, as well as
interpret the output.
• Packet captures -You'll practice saving packet captures to files, and reading them back.
Using tcpdump
Now, you'll perform some tasks using tcpdump, starting with basic usage and working up to slightly
more advanced topics.
Basic Usage
We'll kick things off by introducing tcpdump and running it without any options. Heads up that
tcpdump does require root or administrator privileges in order to capture traffic, so every command
must begin with sudo. At a minimum, you must specify an interface to listen on with the -i flag. You
may want to check what the primary network interface name is using ip link. In this case, we'll be using
the interface eth0 for all the examples; this is not necessarily the interface you'd use on your own
machine, though.
To use tcpdump to start listening for any packets on the interface, enter the command below.
Heads up: This command will fill your terminal with a constant stream of text as new packets are read.
It won't stop until you press ctrl+C.
sudo tcpdump -i eth0
This will output some basic information about the packets it sees directly to standard out. It'll continue
to do this until we tell it to stop. Press ctrl+C to stop the stream at any time. You can see that once
tcpdump exits, it prints a summary of the capture performed, showing the number of packets captured,
filtered, or dropped:

98
By default, tcpdump will perform some basic protocol analysis. To enable more detailed analysis, use
the -v flag to enable more verbose output. By default, tcpdump will also attempt to perform reverse
DNS lookups to resolve IP addresses to hostnames, as well as replace port numbers with commonly
associated service names. You can disable this behavior using the -n flag. It's recommended that you
use this flag to avoid generating additional traffic from the DNS lookups, and to speed up the analysis.
To try this out, enter this command:
sudo tcpdump -i eth0 -vn
You can see how the output now provides more details for each packet.
Without the verbose flag, tcpdump only gives us:
• the layer 3 protocol, source and destination addresses, and ports
• TCP details, like flags, sequence and ack numbers, window size, and options
With the verbose flag, you also get all the details of the IP header, like time-to-live, IP ID number, IP
options, and IP flags.

Filtering
Let's explore tcpdump's filter language a bit next, along with the protocol analysis. Tcpdump supports a
powerful language for filtering packets, so you can capture only traffic that you care about or want to
analyze. The filter rules go at the very end of the command, after all other flags have been specified.
We'll use filtering to only capture DNS traffic to a specific DNS server. Then, we'll generate some DNS
traffic, so we can demonstrate tcpdump's ability to interpret DNS queries and responses.
Go ahead and enter the command now. It'll run until you stop it using ctrl+C like the previous
command, but you shouldn't see any output yet.
sudo tcpdump -i eth0 -vn host 8.8.8.8 and port 53
Let's analyze how this filter is constructed, and what exactly it's doing. Host 8.8.8.8 specifies that we
only want packets where the source or destination IP address matches what we specify (in this case
8.8.8.8). If we only want traffic in one direction, we could also add a direction qualifier, like dst or src
(for the destination and source IP addresses, respectively). However, leaving out the direction qualifier
will match traffic in either direction.

99
Next, the port 53 portion means we only want to see packets where the source or destination port
matches what we specify (in this case, DNS). These two filter statements are joined together with the
logical operator "and". This means that both halves of the filter statement must be true for a packet to
be captured by our filter.
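The same building blocks combine into all sorts of filters. A few more examples, using made-up addresses; eth0 is an assumed interface name, and each command requires root:

```shell
# Only traffic sent from one host on one port
sudo tcpdump -i eth0 -n 'src host 10.0.0.5 and tcp port 443'

# Traffic on either of two ports
sudo tcpdump -i eth0 -n 'port 80 or port 443'

# Everything involving a host except its DNS traffic
sudo tcpdump -i eth0 -n 'host 8.8.8.8 and not port 53'
```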
Now, use SSH to open a second terminal in a new window, and run this command:
dig @8.8.8.8 A example.com
You should see this output to the screen:
This uses the dig utility to query a specific DNS server (in this case 8.8.8.8), asking it for the A record
for the specified domain (in this case "example.com").

Back in the original terminal, you should now see two captured packets, as our filter rules should
filter out any other traffic:

The first one is the DNS query, which is our question (from the second terminal) going to the server.
Note that, in this case, the traffic is UDP. Tcpdump's analysis of the DNS query begins right after the
UDP checksum field. It starts with the DNS ID number, followed by some DNS flags, then the query
type (in this case A?, which means we're asking for an A record). Next is the domain name we're
interested in (example.com).

The second packet is the response from the server, which includes the same DNS ID from the original
query, followed by the original query. After this is the answer to the query, which contains the IP
address associated with the domain name.

You can stop the tcpdump session in the original terminal now by pressing ctrl+C. Make sure to leave
your second terminal window open; you'll need it again soon.
Up next, we'll explore tcpdump's ability to write packet captures to a file, then read them back from a
file.
Saving Captured Packets
In one of your terminals, run this command:
sudo tcpdump -i eth0 port 80 -w http.pcap

100
This starts a capture on our eth0 interface that filters for only HTTP traffic by specifying port 80. The
-w flag indicates that we want to write the captured packets to a file named http.pcap. Like the other
captures, this will run until you force it to stop with ctrl+C.
Once that's running, switch back to your second terminal, where you'll generate some http traffic that'll
be captured in the original terminal. Don't stop the capture you started with the previous command just
yet. (If you have, you can restart it now.)
In the second terminal window, execute this command to generate some traffic:
curl example.com
This command fetches the HTML from example.com and prints it to your screen. It should look like
the below. (Heads up that only the first part of the output is shown here.)

Once that's done, close the second terminal window and return to the original terminal where the
capture is running. Stop the capture with ctrl+C. It should return a summary of the number of packets
captured:

A binary file containing the packets we just captured, called http.pcap, will also have been created.
Don't try to print the contents of this file to the screen; since it's a binary file, it'll display as a bunch of
garbled text that you won't be able to read.

Somewhere in that file, there's information about the packets created when you pulled down the html
from example.com. We can read from this file using tcpdump now, using this command:
tcpdump -r http.pcap -nv

101
Note that we don't need to use sudo to read packets from a file. Also note that tcpdump writes full
packets to the file, not just the text-based analysis that it prints to the screen when it's operating
normally. For example, somewhere in the output you should see the HTML that was returned as the
body of the original query in the other terminal:
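One more handy trick: the same filter language works when reading from a file, so you can narrow down a saved capture after the fact. Here http.pcap is the file created earlier, and the port is just an example value:

```shell
# Show only packets involving port 80 from the saved capture
tcpdump -r http.pcap -n 'port 80'
```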

Conclusion

Congrats! You've successfully used tcpdump for basic network monitoring, including filtering for
specific traffic. You've also learned how to interpret the information that tcpdump outputs about a
packet, along with how to save packet captures to a file and read them back.

102
Chapter 5: Defence in Depth
In the fifth chapter of this course, we're going to go more in-depth into security defence. We'll cover
methods for system hardening and application hardening, and determine the policies to use for OS
security.
By the end of this module, you'll know why it's important to disable unnecessary components of a
system, learn about host-based firewalls, setup anti-malware protection, implement disk encryption,
and configure software patch management and application policies.
Learning Objectives
• Implement the appropriate methods for system hardening.
• Implement the appropriate methods for application hardening.
• Determine the appropriate policies to use for operating system security.

System Hardening
Intro to Defence in Depth
Defence in depth is the concept of having multiple overlapping systems of defence to protect IT
systems. This ensures some amount of redundancy for defensive measures. It also helps avoid a
catastrophic compromise in the event that a single system fails, or a vulnerability is discovered in one
system.
Think of this as having multiple lines of defence. If an attacker manages to bypass your firewall, you're
still protected by strong authentication systems within the network. This would require an attacker to
find more vulnerabilities in more systems before real damage can occur.
These next lessons will focus on bringing together the different security systems and measures we've
discussed so far into a comprehensive security design. It'll offer defence in depth from a variety of
known and unknown threats.
By the end of this course, you'll be able to implement the appropriate methods for system hardening
and application hardening. You'll also be able to determine the policies to use for operating system
security.

Disabling Unnecessary Components


Think back to the beginning of this course when we talked about attacks and vulnerabilities. The
special class of vulnerabilities we discussed called 0-day vulnerabilities are unique since they're
unknown until they're exploited in the wild. The potential for these unknown flaws is something you
should think about when looking to secure your company's systems and networks. Even though it's an
unknown risk, it can still be handled by taking measures to restrict and control access to systems. Our
end goal overall is risk reduction.
Two important terms to know when talking about security risks are attack vectors and attack
surfaces.
An attack vector is a method or mechanism by which an attacker or malware gains access to a network
or system. Some attack vectors are email attachments, network protocols or services, network
interfaces, and user input. These are different approaches or paths that an attacker could use to
compromise a system if they're able to exploit it.
An attack surface is the sum of all the different attack vectors in a given system. Think of this as the combination of all possible ways an attacker could interact with our system, regardless of known vulnerabilities. It's not possible to know all of the vulnerabilities in a system, so make sure to think of every avenue through which an outside actor could interact with our systems as part of the attack surface. The main takeaway here is to keep our attack surfaces as small as possible. This reduces the chances of an attacker discovering an unknown flaw and compromising our systems. There are lots of approaches you can use as an IT support specialist to reduce attack surfaces. All of them boil down to simplifying systems and services.

The less complex something is, the less likely there will be undetected flaws. So, make sure to disable any extra services or protocols; if they're not totally necessary, get them out of there. Every additional service that's operating represents additional attack surface that could have an undiscovered vulnerability. That vulnerability could be exploited and lead to compromise. This concept also applies to access and ACLs. Only allow access when totally necessary.
So, for example, it's probably not necessary for employees to be able to access printers directly from
outside of the local network. You can just adjust firewall rules to prevent that type of access.
Another way to keep things simple is to reduce your software deployments. Instead of having five
different software solutions to accomplish five separate tasks, replace them with one unified solution, if
you can. That one solution should require less complex code, which reduces the number of potential
vulnerabilities.
You should also make sure to disable unnecessary or unused components of software and systems
deployed. By disabling features that aren't in use, you're reducing the attack surface even more. You're
not only reducing the number of ways an attacker can get in, but you're also minimizing the amount of
code that's active. It's important to take this approach at every level of systems and networks under
your administration. It might seem obvious to take these measures on critical networking infrastructure
and servers, but it's just as important to do this for desktop and laptop platforms that your employees
use. Lots of consumer operating systems ship a bunch of default services and software-enabled right
out of the box, that you probably won't be using in an enterprise network or environment. For
example, Telnet access for a managed switch has no business being enabled in a real-world
environment. You should disable it immediately if you find it on the device. Any vendor-specific API
access should also be disabled if you don't plan on using those services or tools. They might be harmless, especially if you set up strong firewall rules and network ACLs. One such service might represent a fairly low risk, but why take any unnecessary risk at all? Remember, the defense in depth concept is all about risk mitigation and implementing layers of security.
Now, let's think about the layered approach to security. What if our access control measures are
bypassed or fail, in some unforeseen way? As an IT support specialist, this is exactly what you want to
think about. How do we keep this component secure if the security systems above it have failed?

Host-Based Firewalls
We briefly mentioned host-based firewalls when we talked about network monitoring and intrusion
detection systems. Host-based firewalls are important to creating multiple layers of security. They
protect individual hosts from being compromised when they're used in untrusted and potentially
malicious environments. They also protect individual hosts from potentially compromised peers inside
a trusted network. Our network-based firewall protects our internal network by filtering traffic in and out of it, while the host-based firewall on each individual host protects that one machine. Like our network-based firewall, we'd still want to start with an implicit deny rule, then selectively enable the specific services and ports that will be used. This lets us start with a secure default that only permits traffic we know and trust. You can think of this as starting with a perfectly
secure firewall configuration and then poking holes in it for the specific traffic we require. This may
look very different from your network firewall configuration since it's unlikely that your employees
would need remote SSH access to their laptops, for example.
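The implicit-deny-then-poke-holes approach above can be sketched in a few lines. This is an illustrative sketch, not a real firewall: the allowed protocol/port pairs are example choices, and real host-based firewalls (iptables, Windows Defender Firewall) evaluate much richer rules.

```python
# Illustrative sketch of an implicit-deny policy with a small allowlist,
# the way a host-based firewall conceptually evaluates inbound traffic.
# The services and ports below are just example choices.

ALLOWED_INBOUND = {
    ("tcp", 443),   # HTTPS service this host actually runs
    ("tcp", 22),    # SSH, only if remote administration is required
}

def evaluate(protocol, port):
    """Return 'allow' only for explicitly permitted (protocol, port) pairs.

    We never enumerate what to block, only what to permit; anything not
    on the allowlist falls through to the implicit deny.
    """
    if (protocol, port) in ALLOWED_INBOUND:
        return "allow"
    return "deny"  # implicit deny: the default outcome

print(evaluate("tcp", 443))  # allow
print(evaluate("tcp", 23))   # deny -- telnet was never explicitly allowed
```

Note that the deny branch does no inspection at all: security comes from the short, explicit allowlist, not from trying to recognize bad traffic.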

Remember that to secure systems you need to minimize attack surfaces or exposure.
A host-based firewall plays a big part in reducing what's accessible to an outside attacker. It provides
flexibility while only permitting connections to selective services on a given host from specific
networks or IP ranges. This ability to restrict connections from certain origins is usually used to implement highly secure hosts or networks. From there, access to critical or sensitive systems or infrastructure is permitted. These are called bastion hosts or networks, and they're specifically hardened and minimized to reduce what's permitted to run on them.
Bastion hosts are usually exposed to the internet, so you should pay special attention to hardening and locking them down to reduce the chances of compromise. But they can also be used as a sort of gateway or access portal into more sensitive services like core authentication servers or domain controllers. This would let you implement more secure authentication mechanisms and ACLs on the bastion hosts without making it inconvenient for your entire company. Monitoring and logging can be prioritized for these hosts more easily.
Typically, these hosts or networks would also have severely limited network connectivity. It's usually
just to the secure zone that they're designed to protect and not much else. Applications that are allowed
to be installed and run on these hosts would also be restricted to those that are strictly necessary, since
these machines have one specific purpose.
Part of the host-based firewall rules will likely also provide ACLs that allow access from the VPN subnet. It's good practice to keep the network that VPN clients connect to separate, using both subnetting and VLANs. This gives you more flexibility to enforce security on these VPN clients. It also lets you build additional layers of defenses. While a VPN host should be protected using other means, it's still a host that's operating in a potentially malicious environment, and it's initiating a remote connection into your trusted internal network. These hosts represent another potential vector of attack and compromise, so your ability to separately monitor traffic coming and going from them is super useful.
There's one more important thing to consider when it comes to host-based firewalls, especially for client systems like laptops. If the users of the system have administrator rights, then they have the ability to change firewall rules and configurations. This is something you should keep in mind and make sure to monitor with logging. If management tools allow it, you should also prevent the disabling of the host-based firewall. This can be done on Microsoft Windows machines administered using Active Directory, for example.

Logging and Auditing
A critical part of any security architecture is logging and alerting. It wouldn't do much good to have all
these defenses in place, if we have no idea if they're working or not. We need visibility into the security
systems in place to see what kind of traffic they're seeing. We also need visibility into the logs of all of our infrastructure devices and equipment that we manage. But it's not enough to just have logs; we also need ways to safeguard logs and make them easy to analyze and review. If there is a dedicated security team at your company, they would be performing this analysis. But at a smaller company, this responsibility would likely fall to the IT team. So, let's make sure you're prepared with the skills you might need for incident investigation.

Many investigative techniques can also be applied to troubleshooting. All systems and services running on hosts will create logs of some kind, with different levels of detail depending on what's being logged and what events they're configured to log.
So, an authentication server would log every authentication attempt, whether it's successful or not.
A firewall would log traffic that matches rules with details like source and destination addresses, and
ports being used.
All this logged information gives us details about the traffic and activity that's happening on our
network and systems. This can be used to detect compromise or attempts to attack the system.
When there are a large number of systems located around your network, each with their own log
format, it can be challenging to make meaningful sense of all this data. This is where security information and event management systems, or SIEMs, come in.
A SIEM can be thought of as a centralized log server with some extra analysis features. In the system administration and IT infrastructure course of this program, you learned ways that centralized logging can help you administer multiple machines at once. You can think of a SIEM as a form of centralized logging for security administration purposes. A SIEM system gets logs from a bunch of other systems and consolidates them from all those different places into one centralized location. This makes handling logs a lot easier.
As an IT support specialist, an important step you'll take in logs analysis is normalization. This is the
process of taking log data in different formats and converting it into a standardized format that's
consistent with a defined log structure. As an IT support specialist, you might configure normalization
for your log sources. For example, log entries from our firewall may have a timestamp using a year,
month, and day format, while logs from our client machines may use day, month, year format. To
normalize this data, you choose one standard date format, then you define what the fields are for the log
types that need to be converted. When logs are received from these machines, the log entries are
converted into the standard that we defined, and stored by the logging server. This lets you analyze and
compare log data between different log types and systems in a much easier fashion.
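The normalization step described above can be sketched like this. It's a minimal illustration: the two date formats and source names are invented for the example, and a real SIEM would normalize many more fields than the timestamp.

```python
# Illustrative sketch of log timestamp normalization: two sources use
# different date formats, and we convert both into one standard (ISO 8601)
# before storing them.
from datetime import datetime

# Per-source format definitions, as you might configure on a log server.
# These particular formats are example assumptions.
SOURCE_FORMATS = {
    "firewall": "%Y/%m/%d %H:%M:%S",   # year, month, day
    "client":   "%d-%m-%Y %H:%M:%S",   # day, month, year
}

def normalize(source, raw_timestamp):
    """Parse a source-specific timestamp and re-emit it in ISO 8601."""
    parsed = datetime.strptime(raw_timestamp, SOURCE_FORMATS[source])
    return parsed.isoformat()

print(normalize("firewall", "2024/03/09 14:05:01"))  # 2024-03-09T14:05:01
print(normalize("client", "09-03-2024 14:05:01"))    # 2024-03-09T14:05:01
```

Once both entries share one format, they can be sorted, compared, and correlated across systems, which is exactly what makes the later analysis possible.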

So what type of information should you be logging? Well, that's a great question. If you log too much info, it's difficult to analyze the data and find useful information. Plus, storage requirements for saving
logs become expensive very quickly. But if you log too little, then the information won't provide any
useful insights into your systems and network. Finding that middle ground can be difficult. It will vary
depending on the unique characteristics of the systems being monitored, and the type of activity on the
network. No matter what events are logged, all of them should have information that will help
understand what happened and reconstruct the events.
There are lots of important fields to capture in log entries like timestamp, the event or error code, the
service or application being logged, the user or system account associated with the event, and the
devices involved in the event.
Timestamps are super important to understanding when an event occurred.
Fields like source and destination addresses will tell us who was talking to whom.
For application logs, you can grab useful information from the logged in user associated with the event,
and from what client they used.
On top of the analysis assistance it provides, a centralized log server also has security benefits. By
maintaining logs on a dedicated system, it's easier to secure the system from attack. Logs are usually
targeted by attackers after a breach, so that they can cover their tracks. By having critical systems send logs to a remote logging server that's locked down, the details of a breach should still be logged. A
forensics team will be able to reconstruct the events that led to the compromise.
Once we have logging configured and the relevant events recorded on a centralized log server, what do
we do with all the data? Well, analyzing log details depends on what you're trying to achieve. Typically,
when you look at aggregated logs as an IT support specialist, you should pay attention to patterns and
connections between traffic. So, if you're seeing a large percentage of Windows hosts all connecting to a specific address outside your network, that might be worth investigating. It could signal a malware
infection.
Once logs are centralized and standardized, you can write automated alerting based on rules. Maybe
you'll want to define an alert rule for repeated unsuccessful attempts to authenticate to a critical
authentication server. Lots of SIEM solutions also offer handy dashboards to help analysts visualize
this data. Having data in a visual format can potentially provide more insight. You can also write some
of your own monitoring and alert systems.
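A simple version of such an alert rule might look like the sketch below. The log entries and the threshold are made up for illustration; a real SIEM would evaluate rules like this over a time window against live, normalized logs.

```python
# Illustrative alert rule: flag any account with too many failed
# authentication attempts in the aggregated logs.
from collections import Counter

# Made-up, already-normalized authentication log entries.
auth_events = [
    {"user": "admin", "result": "failure"},
    {"user": "admin", "result": "failure"},
    {"user": "admin", "result": "failure"},
    {"user": "kim",   "result": "success"},
    {"user": "kim",   "result": "failure"},
]

THRESHOLD = 3  # hypothetical cutoff for raising an alert

# Count failures per user, then alert on anyone at or over the threshold.
failures = Counter(e["user"] for e in auth_events if e["result"] == "failure")
alerts = [user for user, count in failures.items() if count >= THRESHOLD]
print(alerts)  # ['admin']
```

The same counting pattern extends naturally to other breakdowns mentioned here, like commonly used protocols or top talkers on the network.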
Now, it doesn't matter if you're using a SIEM solution or writing your own, it can be useful to break
down things like commonly used protocols in the network, quickly see the top talkers in the network,
and review reported errors over time to reveal patterns.
Another important component of logging to keep in mind as an IT support specialist is retention. Your log storage needs will vary based on the number of systems being logged, the amount of detail in the logs, and the rate at which logs are created. How long you want or need to keep logs around will also really influence the storage requirements for a log server. Some examples of logging servers and SIEM solutions are the open-source rsyslog, Splunk Enterprise Security, IBM Security QRadar, and RSA Security Analytics. You can learn more about these solutions in the supplementary readings of this lesson.
https://github.com/rsyslog/rsyslog
https://www.splunk.com/
https://www.ibm.com/security/security-intelligence/qradar
https://community.rsa.com/docs/DOC-41639

Anti-malware Protection
Anti-malware defences are a core part of any company's security model in this day and age, so it's important as an IT support specialist to know what's out there.
Today, the internet is full of bots, viruses, worms, and other automated attacks. Lots of unprotected systems would be compromised in a matter of minutes if connected directly to the internet without any safeguards or protections in place, or without critical system updates installed.
While modern operating systems have reduced this threat vector by having basic firewalls enabled by default, there's still a huge amount of attack traffic on the internet. Anti-malware measures play a super important role in keeping this type of attack off your systems and helping to protect your users.
Antivirus software has been around for a really long time but some security experts question the value
it can provide to a company especially since more sophisticated malware and attacks have been spun up
in recent years. Antivirus software is signature-based. This means that it has a database of signatures that identify known malware, like the unique file hash of a malicious binary, a file associated with an infection, or the network traffic characteristics that malware uses to communicate with a command and control server.
created or being modified on the system in order to watch for any behavior that matches a known
malware signature. If it detects activity that matches the signature, depending on the signature type, it
will attempt to block the malware from harming the system. But some signatures might only be able to
detect the malware after the infection has occurred. In that case, it may attempt to quarantine the
infected files. If that's not possible, it will just log and alert the detection event. At a high level, this is
how all antivirus products work.
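The hash-signature matching described above can be sketched like this. It's a deliberately simplified illustration: real antivirus engines use many signature types and scanning techniques, and the "known bad" hash here is just computed from an example byte string.

```python
# Simplified sketch of signature-based detection: compare a file's hash
# against a database of known-bad hashes.
import hashlib

# "Signature database": hashes of known malicious content. In practice
# this would be distributed and updated by the antivirus vendor.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def scan(file_contents):
    """Flag contents whose SHA-256 matches a known-bad signature."""
    digest = hashlib.sha256(file_contents).hexdigest()
    return "quarantine" if digest in KNOWN_BAD_HASHES else "clean"

print(scan(b"malicious payload"))  # quarantine
print(scan(b"harmless document"))  # clean
```

This also makes the weakness concrete: any malware whose hash isn't yet in the database scans as "clean", which is exactly the signature-lag problem discussed next.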
There are two issues with antivirus software though. The first is that they depend on antivirus
signatures distributed by the antivirus software vendor. The second is that they depend on the antivirus
vendor discovering new malware and writing new signatures for newly discovered threats. Until the
vendor is able to write new signatures and publish and disseminate them, your antivirus software can't
protect you from these emerging threats. On top of that, antivirus, which is designed to protect systems, actually
represents an additional attack surface that attackers can exploit. You might be thinking, wait, our own
antivirus tools can be another threat to our system? What's the deal with that? Well, this is because of
the very nature of what an antivirus engine must do. It takes arbitrary and potentially malicious binaries as input and performs various operations on them. Because of this, there's a lot of complex code where
very serious bugs could exist. Exactly this kind of vulnerability was found in the Sophos Antivirus
engine back in 2012. You can read more about this event in the supplementary readings.
So, it sounds like antivirus software isn't ideal and has some pretty large drawbacks. Then why are we
still recommending it as a core piece of security design? The short answer is this. It protects against the
most common attacks out there on the internet. The really obvious stuff that still poses a threat to your
systems still needs to be defended against. Antivirus is an easy solution to provide that protection. It
doesn't matter how much user education you instill in your employees, there will still be some
folks who will click on an e-mail that has an infected attachment. A good way to think about antivirus
in today's very noisy external threat environment is like a filter for the attack noise on the internet
today. It lets you remove the background noise and focus on the more important targeted or specific
threats. Remember, our defense in depth concept involves multiple layers of protection. Antivirus
software is just one piece of our anti malware defenses. If antivirus can't protect us from the threats we
don't know about, how do we protect against the unknown threats out there?

While antivirus operates on a blacklist model, checking against a list of known bad things and blocking what gets matched, there's a class of anti-malware software that does the opposite. Binary whitelisting software operates off a whitelist: a list of known good and trusted software, and only things that are on the list are permitted to run. Everything else is blocked. You can think of this as applying the implicit deny ACL rule to software execution. By default, everything is blocked; only things explicitly allowed to execute are able to. I should call out that this typically only applies to executable binaries, not arbitrary files like PDF documents or text files. This would naturally defend against any unknown
threats but at the cost of convenience. Think about how frequently you download and install new
software on your machine. Now imagine if you had to get approval before you could download and
install any new software. That would be really annoying, don't you think? Now, imagine that every
system update had to be whitelisted before it could be applied. Obviously, trusting nothing by default wouldn't be very sustainable. It's for this reason that binary whitelisting software can trust software
using a couple of different mechanisms.
The first is using the unique cryptographic hash of a binary, which identifies that exact binary. This is used to whitelist individual executables. The other trust mechanism is a software-signing
certificate. Remember back when we discussed public key cryptography and signatures using public
and private key pairs? Software signing or code signing is the same idea but applied to software.

A software vendor can cryptographically sign binaries they distribute using a private key. The signature
can be verified at execution time by checking the signature using the public key embedded in the
certificate and verifying the trust chain of the public key. If the hash matches and the public key is trusted, then it's verified that the software came from someone with the software vendor's code-signing private key. Binary whitelisting systems can be configured to trust specific vendors' code-signing certificates, permitting all binaries signed with those certificates to run.
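Putting both trust mechanisms together, a binary whitelisting check might conceptually look like the sketch below. This is illustrative only: the vendor name is hypothetical, and the signer check is a stand-in for real cryptographic verification of a code-signing certificate chain.

```python
# Sketch of the whitelisting (default-deny) model: a binary may run only
# if its hash is on the allowlist, or it carries a trusted vendor's
# signature. The signer string stands in for certificate verification.
import hashlib

TRUSTED_HASHES = {hashlib.sha256(b"approved app v1.0").hexdigest()}
TRUSTED_SIGNERS = {"Example Software Vendor, Inc."}  # hypothetical vendor

def may_execute(binary, signer=None):
    """Default deny: allow only whitelisted hashes or trusted signers."""
    if hashlib.sha256(binary).hexdigest() in TRUSTED_HASHES:
        return True   # individually whitelisted executable
    if signer in TRUSTED_SIGNERS:
        return True   # signed by a trusted code-signing certificate
    return False      # everything else is blocked

print(may_execute(b"approved app v1.0"))                               # True
print(may_execute(b"unknown binary"))                                  # False
print(may_execute(b"vendor update", "Example Software Vendor, Inc."))  # True
```

Notice how trusting a signer is broader than trusting a hash: one entry in TRUSTED_SIGNERS admits every binary that vendor ever signs, which is why a stolen signing key is so damaging.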

This is helpful for automatically trusting content like system updates along with software in common
use that comes from reputable and trusted vendors. But can you guess the downside here? Each new
code signing certificate that's trusted represents an increase in attack surface. An attacker can
compromise the code signing certificate of a software vendor that your company trusts and use that to
sign malware that targets your company. That would bypass any binary whitelisting defenses in place.
Not good. This exact scenario happened back in 2013 to Bit9, a binary whitelisting software company.
Hackers managed to breach their internal network and found an unsecured virtual machine that had a copy of the code-signing certificate's private key. They stole that key and used it to sign malware that
would have been trusted by all Bit9 software installations by default.
Supplemental Reading:
Learn how long it would take for an unprotected system to be compromised by bots, viruses, and worms in the link @ https://itknowledgeexchange.techtarget.com/security-corner/whats-your-systems-survival-time/
If you're interested in why security experts question the value of antivirus software, check out the link
@ https://robert.ocallahan.org/2017/01/disable-your-antivirus-software-except.html
If you want to read about how the Sophos antivirus system was maliciously compromised, see the link
@ http://lock.cmpxchg8b.com/Sophail.pdf
If you want to learn how hackers bypassed the binary whitelisting defenses that were in place for a software vendor, check out the link @ https://www.crn.com/news/security/240148192/bit9-admits-systems-breach-stolen-code-signing-certificates.htm

Disk Encryption
We briefly discussed disk encryption earlier when we talked about encryption at a high level. Now, it's time to dive deeper. Full-disk encryption, or FDE, works by automatically converting data on a hard drive into a form that can't be understood by anyone who doesn't have the key to "undo" the conversion. It's an important factor in a defense in depth security model, and it provides protection from some physical forms of attack.
As an IT support specialist, you'll likely assist with implementing an FDE solution if one doesn't exist already, help with migrating between FDE solutions, and troubleshoot issues with FDE systems, like helping users with forgotten passwords.
So FDE is key. Systems with their entire hard drives encrypted are resilient against data theft. They'll
prevent an attacker from stealing potentially confidential information from a hard drive that's been
stolen or lost. Without also knowing the encryption password or having access to the encryption key,
the data on the hard drive is just meaningless gibberish (image also on page 37).
This is a very important security mechanism to deploy for more mobile devices like laptops, cell
phones, and tablets. But it's also recommended for desktops and servers, since disk encryption provides not only confidentiality but also integrity.
This means that an attacker with physical access to a system can't replace system files with malicious
ones or install malware. Having the disk fully encrypted protects from data theft and unauthorized
tampering even if an attacker has physical access to the disk.

But in order for a system to boot if it has an FDE setup, there are some critical files that must be
accessible. They need to be available before the primary disk can be unlocked and the boot process can
continue. Because of this, all FDE setups have an unencrypted partition on the disk, which holds these
critical boot files. Examples include things like the kernel and bootloader, which are critical to booting the operating system.
These files are actually vulnerable to being replaced with modified potentially malicious files by an
attacker with physical access. While it's possible to compromise a machine this way, it would take a
sophisticated and determined attacker to do it. There's also protection against this attack in the form of
the secure boot protocol, which is part of the UEFI specification. Secure boot uses public key
cryptography to secure these unencrypted elements of the boot process. It does this through integrated code signing and verification of the boot files. Initially, secure boot is configured with what's called a
platform key, which is the public key corresponding to the private key used to sign the boot files. This
platform key is written to firmware and is used at boot-time to verify the signature of the boot files.
Only files that are correctly signed and trusted will be allowed to execute. This way, secure boot protects against physical tampering with the unencrypted boot partition.
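The boot-time check can be sketched structurally like this. Note the heavy simplification: real secure boot verifies cryptographic (e.g. RSA) signatures against the platform key stored in firmware, while this sketch fakes a "signature" as a digest tagged with a key name, purely to show the control flow of verify-then-boot.

```python
# Structural sketch of secure boot's check. The signature scheme here is
# NOT real public key cryptography -- it only illustrates the flow:
# files signed with the platform key verify; anything else is rejected.
import hashlib

PLATFORM_KEY = "platform-public-key"  # hypothetical key identifier

def sign(data, key):
    # Pretend signature: the signing key's name plus a digest of the data.
    return (key, hashlib.sha256(data).hexdigest())

def verify(data, signature, trusted_key):
    # Boot firmware's check: was this signed by the trusted platform key,
    # and does the digest still match the file contents?
    key, digest = signature
    return key == trusted_key and digest == hashlib.sha256(data).hexdigest()

bootloader = b"bootloader image"
good_sig = sign(bootloader, PLATFORM_KEY)

print(verify(bootloader, good_sig, PLATFORM_KEY))         # True: boot continues
print(verify(b"tampered image", good_sig, PLATFORM_KEY))  # False: halt the boot
```

The second check is the interesting one: an attacker who swaps in a modified bootloader can't produce a signature that verifies, so the tampering is caught before the OS ever runs.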
There are first-party full-disk encryption solutions from Microsoft and Apple, called BitLocker and FileVault 2 respectively. There are also a bunch of third-party and open-source solutions. On Linux, the
dm-crypt package is super popular. There are also solutions from PGP, TrueCrypt, VeraCrypt, and lots
of others. Check out the supplementary readings for a detailed list of FDE tools. Just pick your poison
or antidote, I should say.
Full-disk encryption schemes rely on a secret key for the actual encryption and decryption operations, and they typically password-protect access to this key. In some cases, the user's passphrase is used to derive a user key, which is then used to encrypt the master encryption key.

If the password needs to be changed, the user key can be swapped out without requiring a full decryption and re-encryption of the data being protected; a full re-encryption would only be necessary if the master encryption key itself needs to be changed. Password-protecting the key works by requiring the user to enter a passphrase to unlock the encryption key, which can then be used to access the protected contents on the disk. In many cases, this might be the same as the user account password, to keep things simple and to reduce the number of passwords to memorize.
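The key-wrapping idea, where a password change only re-wraps the master key instead of re-encrypting the whole disk, can be illustrated with a toy sketch. This is NOT real cryptography: XOR stands in for a proper key-wrapping cipher, and a single SHA-256 stands in for a slow key derivation function like PBKDF2.

```python
# Toy illustration of master-key wrapping in FDE. Do not use XOR or a
# bare hash like this in real systems; they only keep the sketch short.
import hashlib
import secrets

def derive_user_key(passphrase):
    # Stand-in for a real KDF (PBKDF2, scrypt, etc.).
    return hashlib.sha256(passphrase.encode()).digest()

def xor(a, b):
    # Stand-in for a real key-wrapping cipher.
    return bytes(x ^ y for x, y in zip(a, b))

master_key = secrets.token_bytes(32)      # the key that encrypts the disk
wrapped = xor(master_key, derive_user_key("old password"))

# Password change: unwrap with the old user key, rewrap with the new one.
# The master key -- and all disk data encrypted under it -- never changes.
recovered = xor(wrapped, derive_user_key("old password"))
rewrapped = xor(recovered, derive_user_key("new password"))

print(recovered == master_key)                                  # True
print(xor(rewrapped, derive_user_key("new password")) == master_key)  # True
```

Key escrow fits the same structure: a second wrapped copy of the master key, protected by a recovery passphrase held by the administrators, can unlock the disk independently of the user's passphrase.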
When you implement a full-disk encryption solution at scale, it's super important to think about how to
handle cases where passwords are forgotten. This is another convenience tradeoff when using FDE. If
the passphrase is forgotten, then the contents of the disk aren't recoverable. Yikes! This is why lots of
enterprise disk encryption solutions have key escrow functionality. Key escrow allows the encryption key to be securely stored for later retrieval by an authorized party. So, if someone forgets the passphrase to unlock the encrypted disk on their laptop, the systems administrators are able to retrieve the escrowed key or recovery passphrase to unlock the disk. It's usually a separate passphrase that can unlock the disk in addition to the user-defined one. This allows for recovery if a password is forgotten; the recovery key is used to unlock the disk and boot the system fully.

You should compare full-disk encryption against file-based encryption. That's where only some files or
folders are encrypted and not the entire disk. This is usually implemented as home directory
encryption. It serves a slightly different purpose compared to FDE. Home directory or file-based
encryption only guarantees confidentiality and integrity of files protected by encryption. These setups
usually don't encrypt system files because there are often compromises between security and usability.
When the whole disk isn't encrypted, it's possible to remotely reboot a machine without being locked
out. If you reboot a full-disk encrypted machine, the disk unlock password must be entered before the
machine finishes booting and is reachable over the network again. So while file-based encryption is a
little more convenient, it's less protected against physical attacks. An attacker can modify or replace
core system files and compromise the machine to gain access to the encrypted data.
This is a good example of why understanding threats and the risks these threats represent is an
important part in designing a security architecture and choosing the right defenses. In our next lesson,
we'll cover application hardening. I'll see you there.
Supplemental Reading:
There are first-party full-disk encryption (FDE) solutions from Microsoft and Apple, called BitLocker
and FileVault 2, respectively. https://docs.microsoft.com/en-us/windows/security/information-protection/bitlocker/bitlocker-overview https://support.apple.com/en-us/HT204837
There are also a bunch of third-party and open-source solutions. On Linux, the dm-crypt package is
very popular. https://wiki.archlinux.org/index.php/dm-crypt
There are also offerings from PGP, TrueCrypt, VeraCrypt and a host of others.
https://www.symantec.com/products/encryption http://truecrypt.sourceforge.net/
https://www.veracrypt.fr/en/Home.html

Application Hardening
Software Patch Management
While some attacks target exposed software features, a lot of attacks depend on exploiting bugs in software. This triggers obscure and unintended behavior, which can lead to a compromise of the system running the vulnerable software. These types of vulnerabilities can be fixed through software patches
and updates which correct the bugs that the attackers exploit.
As an IT support specialist, it's critical that you make sure that you install software updates and security
patches in a timely way in order to defend your company's systems and networks.
Software updates don't just improve software products by adding new features and improving
performance and stability; they also address security vulnerabilities. Some software bugs are
present in the core functionality of the software in question. This means that the vulnerability can't
be mitigated by disabling the vulnerable service; not good.
An example of this was the Heartbleed vulnerability, a bug in the open-source TLS library OpenSSL
that was discovered and widely publicized in April of 2014. The bug showed up in how the library
handled TLS heartbeat messages. These are special messages that allow one party in a TLS session to
signal to the other party that they'd like the session to be kept alive. This works by sending a TLS
heartbeat request message, a packet that has a text string and the length of that string. The receiving end
is supposed to reply with the same text string in response. So, if the heartbeat request message
contains the text "I am still alive" and the length of 15, the receiving end would reply back with the
same text, "I am still alive." But the bug in the OpenSSL library was that the replying side would
allocate memory space according to the length value in the received packet, as it was specified in the
packet, not based on the actual length of the string. The value
was not verified. This meant that an attacker could send a malformed heartbeat request message with a
much larger length specified than what was allowed. The reply would contain the original text message,
but would also include bits of memory from the replying system. So, an attacker could send a
malformed heartbeat request message containing the text "I am still alive", but with a length of 500.
Because the length value wasn't verified, the response back would be "I am still alive",
followed by the next 485 characters in memory. So it was possible for an attacker to read up to 64
kilobytes of the target's memory. This memory was likely used before by the OpenSSL library, so it might
contain sensitive information regarding other TLS sessions. This bug meant that it was feasible for an
attacker to recover the private keys used to protect TLS sessions. This would allow them to decrypt
TLS-protected sessions and recover details like login credentials.
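The over-read at the heart of the bug can be illustrated with a toy Python model. The "adjacent memory" contents and function names below are invented for illustration; this is not OpenSSL's actual code:

```python
# Toy model of Heartbleed's length-handling bug. The "memory" contents,
# names, and layout are invented; this is not OpenSSL's actual code.

ADJACENT_MEMORY = b"secret_key=hunter2; session=abc123" + b"\x00" * 64

def heartbeat_vulnerable(payload: bytes, claimed_length: int) -> bytes:
    """Buggy responder: trusts the claimed length instead of measuring
    the payload, so it can read past the end of the request."""
    memory = payload + ADJACENT_MEMORY  # request sits next to other data
    return memory[:claimed_length]      # over-read when claimed > len(payload)

def heartbeat_fixed(payload: bytes, claimed_length: int) -> bytes:
    """Patched responder: drops requests whose claimed length doesn't
    match the actual payload length."""
    if claimed_length != len(payload):
        raise ValueError("length mismatch: request dropped")
    return payload

# A well-formed request just echoes the payload back.
print(heartbeat_vulnerable(b"I am still alive", 16))

# A malformed request leaks adjacent memory along with the payload.
print(heartbeat_vulnerable(b"I am still alive", 50))
```

Running the second call shows the echoed text followed by bytes of the "secret" adjacent memory, which is exactly the class of leak that made it possible to recover TLS private keys.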
This is a great example of a mistake in the code leading to a very high-profile software vulnerability. It
could only be fixed through a software update or by switching to a different TLS library entirely. While
the heartbeat functionality was enabled by default, it was possible to disable it in the OpenSSL library,
but not with a simple argument passed to an application. Disabling this functionality required compiling the
library with a flag specified to disable heartbeats. Then, you had to replace the installed
version with the custom-compiled one. That's not something most people would do. This was also a
library widely used by both server applications and client applications. This meant that it might not be
possible to replace the OpenSSL library with a customized version or a different library. The only way
to address the vulnerability in client software that used OpenSSL was to wait for a patch from
the software vendor. What a mess!

114
Here's the bad news, with software continuing to grow more complex over time, these types of bugs are
likely to become more commonplace. Attackers will be looking for exactly this type of vulnerability.
The best protection is to have a good system and policy in place for your company. The system should
be checking for, distributing and verifying software updates for software deployment. This is a complex
problem when considering a large organization with many machines to manage that run a variety of
software products. This is where management tools can help make this task more approachable for you.
Solutions like Microsoft's SCCM or Puppet Labs' Puppet allow administrators to get an
overview of what software is installed across their fleet of many systems. This lets a security team
analyze what specific software and versions are installed, to better understand the risk of vulnerable
software in the fleet. When updates are released and pushed to the fleet, these reporting tools can help
make sure that the updates have been applied. SCCM even has the ability to force install updates after a
specified deadline has passed. Patching isn't just necessary for software, but also operating systems and
firmware that run on infrastructure devices. Every device has code running on it that might have
software bugs that could lead to security vulnerabilities, from routers and switches to phones and even printers.
Operating system vendors usually push security related patches pretty quickly when an issue is
discovered. They'll usually release security fixes out of cycle from typical OS upgrades to ensure a
timely fix because of the security implications. But, for embedded devices like networking equipment
or printers, this might not be typical.
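As a rough sketch of what such reporting tools do under the hood, the following hypothetical snippet compares a fleet inventory against minimum patched versions and flags hosts that need updates. All hosts, package names, and version numbers below are invented:

```python
# Hypothetical sketch of the reporting idea behind tools like SCCM or
# Puppet: collect installed software versions per host, compare them
# against the minimum patched version, and flag hosts needing updates.
# All hosts, package names, and version numbers here are invented.

minimum_patched = {
    "openssl": (1, 0, 2),
    "browser": (120, 0, 0),
}

fleet_inventory = {
    "host-a": {"openssl": (1, 0, 1), "browser": (121, 0, 0)},
    "host-b": {"openssl": (1, 0, 2), "browser": (119, 5, 0)},
}

def outdated_software(inventory, baseline):
    """Return {host: [package, ...]} for packages below the baseline."""
    report = {}
    for host, packages in inventory.items():
        stale = [pkg for pkg, version in sorted(packages.items())
                 if pkg in baseline and version < baseline[pkg]]
        if stale:
            report[host] = stale
    return report

print(outdated_software(fleet_inventory, minimum_patched))
```

A real deployment tool would also push the missing updates and re-run the report to verify they were applied.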
Critical infrastructure devices should be approached carefully when you apply updates. There's always
the risk that a software update will introduce a new bug that may affect the functionality of a device,
or that the update process itself could go wrong and cause an outage.
I hope you can see the importance of applying software patches and firmware updates in a timely
fashion. It would be pretty embarrassing if you wound up being compromised by a vulnerability that
could have been easily fixed with a software update.

Application Policies
As you can see, application software can represent a pretty large attack surface. This is especially true
when it comes to a large fleet of systems used throughout an organization. So, it's important to have
some kind of application policies in place. These policies serve two purposes: not only do they define
boundaries around what applications are permitted or not, but they also help educate folks on how to use
software more securely.
We've seen the risks that software can pose because of security vulnerabilities. It makes sense to have a
policy around applying software updates in a timely way. A common recommendation or even a
requirement is to only support or require the latest version of a piece of software. From the IT support
perspective, this is important because software updates will often fix issues that someone may be
encountering. But from the security side of things, making sure the latest version of the software is in
use will ensure that all security patches have been applied. This should be
clearly called out in a policy. People tend to be pretty lazy about applying updates to software that they
use a lot. Lots of times, applying an update requires restarting the application, which can feel
inconvenient and disruptive to users.
It's generally a good idea to disallow risky classes of software by policy. Things like file sharing
software and piracy-related software tend to be closely associated with malware infections.

115
They usually don't have a business use either. Let's not even talk about the legal implications of this
type of software.
Understanding what your users need to do their jobs will help shape your approach to software
policies and guidelines. If there's a common use case for a certain type of software, it would be helpful
to select a specific software implementation and require the use of that solution. This lets you evaluate
the most secure solution and benefit from a more uniform software installation. Remember, the name of
the game is to minimize attack surfaces. Each piece of software that accomplishes the same thing
represents a different set of potential attack surfaces that could have a vulnerability lurking inside.
Helping your users accomplish tasks by recommending or supporting specific software makes for a
more secure environment. It also helps users by giving them clear solutions to accomplish tasks. If
you want to employ a binary whitelisting solution, it's also important to define a policy around what
type of software can be whitelisted. So it's probably unnecessary to have video games whitelisted,
unless your company is a video game studio, of course. These policies usually require some kind of
business use case or justification to avoid a lot of one-off personal software requests.
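The core check behind binary whitelisting can be sketched in a few lines; real products typically compare cryptographic hashes or signatures of executables, and the file contents and approval process in this sketch are hypothetical:

```python
# Sketch of the core check a binary whitelisting solution performs:
# hash the executable and allow it only if the hash is on the approved
# list (default-deny). File contents here are fabricated for illustration.

import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

approved_binary = b"#!/bin/sh\necho 'approved business app'\n"
unknown_binary = b"#!/bin/sh\necho 'unvetted video game'\n"

# In practice this list is populated through a business-justification process.
whitelist = {sha256(approved_binary)}

def may_execute(binary: bytes) -> bool:
    """Default-deny: run only binaries whose hash is whitelisted."""
    return sha256(binary) in whitelist

print(may_execute(approved_binary))  # the vetted app is allowed
print(may_execute(unknown_binary))   # everything else is blocked
```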
Another class of software that you might want to have policies defined for are browser extensions or
add-ons. Since a lot of workflows live exclusively within the web browser now, they represent a
potential vector for malware that often gets overlooked. Extensions that require full access to web sites
visited can be risky since the extension developer has the power to modify pages visited. Some
extensions may even send user input to a remote server. This could potentially leak confidential
information. Clearly defining classifications of risky extensions and add-ons will help protect your
systems and provide guidance to your users. But, policies are usually not enough to arm users with the
information they need to make informed choices. Their decisions can impact the security of your
organization. That's where education and training come into play, which we'll discuss in the next
module.
We went over a lot of really dense information on security in these lessons. Take time to review some
of the videos so that it really sinks in.
Okay, awesome work. Now, it's time for a project that will test what you learned about the system
hardening, then if you can believe it, you'll move on to the last lesson of the last course of this program.
Woohoo!

Application Hardening Quiz


Question 1: Why is it important to keep software up-to-date?
To address any security vulnerabilities discovered
Correct, nice work! As vulnerabilities are discovered and fixed by the software vendor, applying these
updates is super important to protect yourself against attackers.
Question 2: What are some types of software that you'd want to have an explicit application policy for?
Video games and Filesharing software
Correct! Video games and filesharing software typically don't have a use in business (though it does
depend on the nature of the business). So, it might make sense to have explicit policies dictating
whether or not this type of software is permitted on systems.

116
Defence in Depth Test
Question 1: What's the key characteristic of a defense-in-depth strategy to IT security?
Multiple overlapping layers of defense
Correct! Defense in depth involves having multiple layers of security in place, with overlapping
defenses that provide multiple points of protection.
Question 2: How are attack vectors and attack surfaces related?
An attack surface is the sum of all attack vectors.
Correct! An attack surface is the sum of all attack vectors in a system or environment.
Question 3: What does a host-based firewall protect against that a network-based one doesn't? Check
all that apply.
Protection from compromised peers and protection in untrusted networks
Correct! A host-based firewall can provide protection to systems that are mobile and may operate in
untrusted networks. It can also provide protection from compromised peers on the same network.
Question 4: Having detailed logging serves which of the following purposes? Check all that apply.
Auditing and event reconstruction
Correct! Having logs allows us to review events and audit actions taken. If an incident occurs, detailed
logs allow us to recreate the events that caused it.
Question 5: While antivirus software operates using a ______, binary whitelisting software uses a
whitelist instead.
Blacklist
Correct! Antivirus software operates using a blacklist, which blocks anything that's detected as
matching on the list. Binary whitelisting software operates using a whitelist, blocking everything by
default, unless it's on the whitelist.
Question 6: What does full-disk encryption protect against? Check all that apply.
Data tampering and Data theft.
Correct! Encrypting the entire disk prevents unauthorized access to the data in case it's lost or stolen. It
also protects against malicious tampering of the files contained on the disk.
Question 7: Securely storing a recovery or backup encryption key is referred to as _______.
Key escrow
Correct! Key escrow is the act of securely storing a backup or recovery encryption key for a full-disk-
encrypted setup.
Question 8: What does applying software patches protect against? Check all that apply.
Undiscovered vulnerabilities and newly found vulnerabilities
Correct! Software updates or patches can fix recently discovered vulnerabilities or close ones that you
weren't aware of.

117
Case Study: The Security Model of Chrome OS
The Security Principles of Chrome OS
Let's look at the Chrome OS security model as an example of a system that applies the principles of
defence in depth that we've been studying.
This OS has lots of different security layers designed to protect against attacks, so that if one layer is
compromised, the others are still in effect. As we called out in the first course, Chrome OS runs Linux
under the hood and features a simple user interface. Most of the system interaction happens through the
Chrome browser.
The goal of the OS is to be a secure and simple way for the user to interact with the web. The user
doesn't have administrative privileges or permission to modify the system, since the Chrome browser is
running with user permissions managed by the administrator. This means that even if the browser is
exploited, the system can't be tampered with.
Chrome OS defaults to securing the operating system. This means that the system settings are already
configured for security and the user doesn't need to take any additional action. Chrome OS turns on
hardening options that are available to all Linux operating systems, although not all Linux
distributions enable these options.
Let's go over some other specific security features developed for Chrome OS. As we've said before,
applying security updates is critical for keeping a secure infrastructure. Chrome OS tackles this
problem by implementing
Automatic Updates. As soon as an update is available, the OS downloads and installs it in the
background without any user interaction.
Users will be prompted to restart when updates are available. The update then goes live as soon as the
machine is restarted.
One of the security features of Chrome OS is called Sandboxing. Each tab in a Chrome browser, as
well as each of the system services runs in a separate process completely independent of the others. So,
if the user visits a malicious website, most of the damage that the website can do is limited to that single
tab. Only the session data stored on the computer hard drive such as cookies, caches, and bookmarks is
accessible to all tabs. And of course, session data created by one website is not accessible to other
websites.
Another important Chrome OS security feature is the automatic and easy-to-use Recovery mode.
When the system detects a problem, either because of data corruption or an attacker tampering with the
installed operating system, it will automatically enter recovery mode and help the user return the
system to a working, reliable state.
Powerwash is another Chrome OS security feature. It allows the user to quickly reset the machine to
its factory default settings. Since all the user's personal data is stored in the cloud, this feature will
delete any downloaded files, cookies, and caches stored on the hard drive of the computer. Being able to
delete everything from local drives improves security when travelling or entering other insecure or
unknown environments. But that's not all, in the next two videos, we'll cover two central security
measures in Chrome OS. Verified boot, and data encryption.

118
Chrome OS Verified Boot
Chrome OS features an integrity-checked, rollback-protected boot process called Verified Boot. As
with secure boot, this process verifies the integrity of the operating system being used. If an attacker
tries to tamper with the machine, the system will detect it upon reboot and refuse to operate. Verified
boot also stops known vulnerable versions of the operating system from booting even if they're signed.
The verified boot process has four main components. Here's how it works.
The first of these components is the read-only firmware. Chrome OS machines ship with this
firmware, which can't be written to without physically modifying the machine, so remote attackers can't
tamper with it. This read-only firmware contains the cryptographic keys necessary to verify that the
next component has been signed by a trusted source, as well as the minimum amount of code
needed to execute the next component in the chain.
The second component is the read-write firmware. This is firmware that can get updated automatically
when necessary. Once the read-only firmware has verified that the contents come from a trusted source,
the code from this second piece of the firmware will be executed. If instead the read-only firmware isn't
able to verify the contents, the system will attempt to verify the latest backup of the read-write
firmware. If the read-write backup verifies, it will use that copy instead. If the backup doesn't verify
either, it will automatically invoke recovery mode, which is read-only and can't be tampered with.
So, we know that if the read/write firmware is executed it's because the system has verified that it
comes from a trusted source. The read/write firmware is then in charge of verifying the actual operating
system in a similar fashion and it will verify that the kernel comes from a trusted source and has not
been tampered with. When the kernel is executed, it will verify the integrity of the root file system it's
using. The Root file system contains the actual operating system installation and the locally stored user
data.
So, if everything goes fine, the read-only firmware verifies and executes the read-write firmware, the
read-write firmware verifies and executes the kernel, and the kernel mounts and verifies the root file
system. Chrome OS stores the kernel and the root file system on separate hard drive partitions, and
there are two of each. The partitions the machine is currently using are read-only, so the system that's
running can't modify itself. The other two can be modified by the software that performs the automatic
updates. When an update is downloaded, it's deployed into the partitions that aren't currently being
used. When the machine reboots, it will detect the update, verify it, and boot from the updated
partitions. If on reboot the machine detects that one of the kernel or root file system partitions is
corrupted or was tampered with, it will boot from the other one instead. If both are in bad shape, it
will automatically enter recovery mode.
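The chain of verification described above can be modeled roughly in Python. Real Chrome OS uses cryptographic signatures and rollback counters; the bare hashes, version strings, and labels in this sketch are simplifications:

```python
# Toy model of a verified boot stage: check each candidate image's hash
# against trusted values baked into read-only firmware, fall back to the
# backup copy, and finally drop into recovery mode. Real Chrome OS uses
# signatures and rollback protection; plain hashes stand in here.

import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

rw_firmware = b"rw-firmware-v42"
rw_backup = b"rw-firmware-v41"
trusted = {digest(rw_firmware), digest(rw_backup)}  # known-good digests

def boot(primary: bytes, backup: bytes) -> str:
    """Verify the primary image, fall back to backup, else recover."""
    for candidate, label in ((primary, "primary"), (backup, "backup")):
        if digest(candidate) in trusted:
            return f"booting {label}"
    return "recovery mode"

print(boot(rw_firmware, rw_backup))          # untouched machine
print(boot(b"tampered!", rw_backup))         # primary corrupted
print(boot(b"tampered!", b"also tampered"))  # both corrupted
```

The same verify-then-execute pattern repeats down the chain: read-only firmware verifies read-write firmware, which verifies the kernel, which verifies the root file system.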

119
Chrome OS Data
As we've called out before, Chrome OS stores most user data in the cloud, but some information is
stored locally to help performance and offline use. This information includes downloaded files,
bookmarks, cookies and caches among other things. Chrome OS ensures that only the user can access
this information by encrypting it with both the user's password and the Trusted Platform Module or
TPM.
As a reminder, the TPM is a cryptographic device that can be used by the operating system to ensure
the integrity of the system and to store cryptographic keys, so attackers can't access them. In Chrome
OS, the TPM is used only for storing cryptographic keys. In newer generations of Chrome OS, the
cryptographic device is called H1. It serves a similar purpose as the TPM, and may be used to enable
even more security functions in the future.
When a user logs in for the first time, Chrome OS uses a key generated by the cryptographic device and
the user's password to create an encrypted directory for that specific user called a vault. In order to
decrypt the directory data, that specific machine must successfully boot Chrome OS and the user must
login with their password. If an attacker tried to extract the disk and mount it onto a different machine,
they wouldn't be able to access any of the data, and since the data is encrypted with each user's
password, other people using the machine will not be able to access it. This gives users both security
and privacy.
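A rough sketch of the idea: derive the vault key from both a device-bound secret and the user's password, so neither alone is enough to decrypt the data. The key-derivation parameters and the device secret below are invented, and Chrome OS's actual scheme is more involved:

```python
# Sketch of combining a device-bound secret (standing in for the key
# sealed in the TPM/H1) with the user's password to derive a per-user
# vault key. The scheme and parameters are simplified for illustration.

import hashlib

DEVICE_SECRET = b"unique-key-sealed-in-the-tpm"  # never leaves the device

def vault_key(password: str, device_secret: bytes = DEVICE_SECRET) -> bytes:
    """Derive the vault encryption key from password + device secret."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), device_secret, 100_000)

# Same password on the same device -> same key, so the vault opens.
print(vault_key("hunter2") == vault_key("hunter2"))

# Wrong password, or the disk moved to a different device -> different
# key, so the vault stays unreadable.
print(vault_key("wrong-password") == vault_key("hunter2"))
print(vault_key("hunter2", b"another-device") == vault_key("hunter2"))
```

Wiping the device secret, as Powerwash does with the keys in the TPM, makes every derived key unrecoverable even if encrypted bytes remain on the disk.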
If the Powerwash feature is activated, the keys stored in the TPM are wiped along with the local data,
making it impossible for an attacker to recover the information from the hard drive.
Now this doesn't mean that Chrome OS is invulnerable. All operating systems have vulnerabilities, so
we should always keep important security practices in mind. In the case of Chrome OS, we should stay
alert to avoid installing malicious Chrome extensions that can steal our passwords, for example, but
thanks to its automatic updates, sandboxing, verified boot, data encryption, Powerwash and easy
to use recovery mode, Chrome OS provides a layered security model that makes it the operating
system of choice for lots of security experts.
Wow. You've made it to the end of the module. Great work. If you can believe it, the next module is the
final module for the entire program. Woohoo!

120
Formative Quiz – Chrome OS
Question 1: Why do we say that Chrome OS follows the model of Defense in Depth?
Because there are many layers of security on top of each other.
Correct! Defense in Depth is the concept of having multiple, overlapping systems of defense to protect
IT systems. This is exactly what Chrome OS offers.

Question 2: Which of these are security features offered by Chrome OS? Check all that apply.
Sandboxing, Automatic system updates, Disk encryption.
Correct! By having the hard drive encrypted both with the cryptographic device and with the user's
password, Chrome OS ensures that the user's data can't be accessed by an attacker.

Question 3: When is the Recovery Mode started? Check all that apply.
When neither of the two copies of the read-write firmware are in a good state and When neither
of the two kernel+root partitions can be booted correctly
Correct! If the read-only firmware can't validate either of the two versions of the read-write firmware, it
will automatically start the Recovery Mode.

Question 4: What data is deleted when the Powerwash feature is activated? Check all that apply
The local cache of the websites visited, Downloaded files, Cookies from the visited websites
Correct! All cookies and any other data associated with websites visited is deleted when Powerwash is
activated.

Question 5: What does the Verified Boot process ensure?
That the whole system needed for starting the OS has not been tampered with.
Correct! Verified Boot is a process that verifies the integrity of the operating system being used, from
start to finish. So that the user can trust that the contents of the OS come from a reliable source.

121
Chapter 6: Creating a Company Culture for Security
Congratulations, you've made it to the final chapter in the course! In this chapter,
we'll explore ways to create a company culture for security. It's important for any tech role to determine
appropriate measures to meet the three goals of security.
By the end of this module, you will develop a security plan for an organization to demonstrate the skills
you've learned in this course. You're almost done, keep up the great work!

Learning Objectives
• Determine appropriate measures to use to meet the 3 goals of security.
• Develop a security plan for a small-medium size organization.
• Develop a disaster recovery plan.

122
Risk in the Workplace
Security Goals
Congratulations. You've reached the last chunk of the last course of this program. You are totally ready
to lock down every single operation of your organization and make it airtight. Right? Not quite.
If you're responsible for an organization of users, there's a delicate balance between security and user
productivity. We've seen this balance in action when we dove into the different security tools and
systems together.
Before you start to design a security architecture, you need to define exactly what you'd like it to
accomplish. This will depend on what your company thinks is most important. It will probably have a
way it wants different data to be handled and stored.
You also need to know if your company has any legal requirements when it comes to security.
If your company handles credit card payments, then you have to follow PCI DSS, or the Payment
Card Industry Data Security Standard, depending on local laws. We'll take a closer look at PCI DSS,
which is a great example of clearly defined security goals. PCI DSS is broken into six broad objectives,
each with some requirements.
The first objective is to build and maintain a secure network and systems. This includes the
requirements to install and maintain a firewall configuration to protect cardholder data, and to not use
vendor-supplied defaults for system passwords and other security parameters. As you can tell, the
requirements are related to the objective. The objective is the end goal or what we'd like to achieve and
the requirements are the actions that can help achieve that goal. PCI DSS goes into more detailed
actions for each requirement. It provides more specific guidance around what a firewall configuration
should control. For example, a secure firewall configuration should restrict connections between
untrusted networks and any systems in the cardholder data environment. That's a little generic, but it
does give us some guidance on how to meet the requirements.
The second objective category is to protect cardholder data. In this objective, the first requirement is
to protect stored cardholder data. The second is to encrypt the transmission of cardholder data across
open public networks. I want to call out again how the broad objective is to protect sensitive data that's
stored in systems within our control. The requirements give us specific guidelines on how to get this
done. The specifics of these requirements help clarify some of the points like what constitutes an open
network. They also recommend using strong cryptography and offer some examples. But not all
requirements are technical in nature. Let's look at the requirement to protect stored cardholder data for
example, it has requirements for data retention policies to make sure that sensitive payment information
isn't stored beyond the time it's required. Once payment is authorized, authentication data shouldn't be
needed anymore and it should be securely deleted. This highlights the fact that good security defences
aren't just technical in nature. They are also procedural and policy-based.

123
The third objective is to maintain a vulnerability management program. The first requirement is to
protect all systems against malware and regularly update antivirus software or programs. The second is
to develop and maintain secure systems and applications. You'll find more detailed implementation
procedures within these requirements. They'll cover things like ensuring all systems have antivirus
software installed and making sure this software is kept up to date. They also require that scans are run
regularly and logs are maintained. There are also requirements for ensuring systems and software are
protected against known vulnerabilities by applying security patches within one month of the
release of a security patch. Use of third-party security vulnerability databases is also listed to help
identify known vulnerabilities within managed systems.
The fourth objective is to implement strong access control measures. This objective has three
requirements. The first is to restrict access to cardholder data by business need-to-know. The second is
to identify and authenticate access to system components. And the third is to restrict physical access to
cardholder data. This highlights the importance of good access control measures along with good data
access policies. The first requirement, restricting access to data by business need-to-know, means that
access to any sensitive data should be governed by data access policies to make sure that customer data
isn't misused. Part of this requirement is to enforce password authentication for system access and two-factor
authentication for remote access; that's the minimum requirement. Another important piece highlighted
by the PCI DSS requirements is access control for physical access. This is a critical security aspect to
keep in mind since we need to protect systems and data from both physical theft and virtual attacks.
The fifth objective is to regularly monitor and test networks. The first requirement is to track and
monitor all access to network resources and cardholder data. The second is to regularly test security
systems and processes. The requirement for network monitoring and testing is another essential part of
a good security plan. This refers to things like setting up and configuring intrusion detection systems
and conducting vulnerability scans of the network which we’ll cover a bit more later. Testing defences
is another super important part of this. Just having the systems in place isn't enough. It's really helpful
to test defence systems regularly to make sure that they provide the protection that you want. It also
ensures that the alerting systems are functional. But don't worry, we'll dive deeper into this a little bit
later when we cover penetration testing.
The sixth and final objective is to maintain an information security policy. It only has one
requirement, to maintain a policy that addresses information security for all personnel. This
requirement addresses why we need to have well-established security policies. They help govern and
regulate user behaviour when it comes to information security aspects. It's important to call out that this
requirement mentions that the policy should be for all personnel. The responsibility of information
security isn't only on the security teams. Every member of an organization is responsible for
information security. Well-designed security policies address the most common questions or use cases
that users would have, based on the specific details of the organization. Everyone that uses systems on
your organization's network is able to undermine security. They might not mean to, but they can reduce
the overall security with their actions and practices. That's why well-thought-out security
policies also need to be easy to find and easy to read. We'll cover more details about user
education and getting users involved in the overall security plan in another upcoming video of this
course.

124
Measuring and Assessing
We've covered Security Risk Assessment a little bit in the last lesson. But there's lots more to talk
about. Security is all about determining risks or exposure, understanding the likelihood of attacks, and
designing defences around these risks to minimize the impact of an attack.
This thought process is actually something that everyone uses in their daily life, whether they know it
or not. Think of when you cross a busy intersection: you assess the probability of being hit by an
oncoming car and then minimize that risk by choosing the right time to cross the road.
Security risk assessment starts with threat modeling. First, we identify likely threats to our systems,
then we assign them priorities that correspond to severity and probability. We do this by brainstorming
from the perspective of an outside attacker, putting ourselves in a hacker's shoes. It helps to start by
figuring out what high value targets an attacker may want to go after. From there, you can start to look
at possible attack vectors that could be used to gain access to high value assets.
High-value data usually includes account information, like usernames and passwords. Typically, any
kind of user data is considered high value, especially if payment processing is involved. Another part
of risk measurement is understanding what vulnerabilities are on your systems and network. One way
to find them is to perform regular vulnerability scans (Vulnerability Scanner: A computer
program designed to assess computers, computer systems, networks or applications for weaknesses).
There are lots of open source and commercial solutions that you can use. They can be configured to
perform scheduled, automated scans of designated systems or networks to look for vulnerabilities.
Then, they generate a report. Some of these tools are Nessus, OpenVas and Qualys which I've linked
to in the next reading.
Let me break down what vulnerability scanners do. Heads up, this might be a little dense, so feel free to
go over it again.
Vulnerability scanners are services that run on systems within your control and conduct periodic scans
of configured networks. The service scans the network to find and discover hosts. Once hosts are
found, either through a ping sweep or port scanning, more detailed scans are run against the
discovered hosts. Scans, upon scans, upon scans. A port scan of either common ports or all possible
valid ports is conducted against discovered hosts to determine what services are listening. These
services are then probed to try to discover more info about the type of service and what version is
listening on the relevant port. This information can then be checked against databases of known
vulnerabilities. If a vulnerable version of a service is discovered, the scanner will add it to its report.
Once the scan is finished, the discovered vulnerabilities and hosts are compiled in a report; that way,
an analyst can quickly and easily see where the problem areas are on the network. Found
vulnerabilities are prioritized according to severity and other categorizations.
Severity takes into account a number of things, like how likely the vulnerability is to be exploited. It
also considers the type of access the vulnerability would provide to an attacker, and whether or not it
can be exploited remotely. Vulnerabilities in the report will have links to detailed, disclosed
information about the vulnerability.

125
In some cases, it will also have recommendations on how to remediate it. Vulnerability scanners will
detect lots of things, ranging from misconfigured services that represent potential risks, to the
presence of back doors in systems. It's important to call out that vulnerability scanning can only
detect known and disclosed vulnerabilities and insecure configurations. That's why it's important to
have automated vulnerability scans conducted regularly.
You'll also need to keep the vulnerability database up to date, to make sure new vulnerabilities are
detected quickly. But vulnerability scanning isn't the only way to put your defences to the test.
Conducting regular penetration tests is also really encouraged to test your defences even more. These
tests will also ensure detection and alerting systems are working properly.
Penetration Testing is the practice of attempting to break into a system or network to verify the
security systems in place. Think of this as playing the role of a bad guy, for educational purposes. This
exercise isn't designed to see if you have the acting chops; it's intended to make you think like an attacker and use the
same tools and techniques they would use. This way, you can test your systems to make sure they
protect you like they're supposed to. The penetration testing reports will also show you where weak
points or blind spots exist. These tests help improve defences and guide future security
projects. They can be conducted by members of your in-house security team. If your internal team
doesn't have the resources for this exercise, you can hire a third-party company that offers penetration
testing as a service. You can even do both. That would help give you more perspectives on your
defence systems and you'll get a more comprehensive test this way.
Supplemental Reading:
For more detailed information on this video lecture, check out the following tools:
Nessus @ https://www.tenable.com/products/nessus/nessus-professional
OpenVAS @ www.openvas.org
Qualys @ https://www.qualys.com/forms/freescan/

126
Privacy Policy
When you're supporting systems that handle customer data, it's super important to protect it from
unauthorized and inappropriate access. It's not just to defend against external threats; it also protects
that data against misuse by employees. This type of behavior would fall under your company's privacy
policies.
Privacy policies oversee the access and use of sensitive data. They also define what appropriate and
authorized use is, and what provisions or restrictions are in place when it comes to how the data is used.
Keep in mind that people might not consider the security implications of their actions, so both privacy
and data access policies are important to guiding and informing people how to maintain security while
handling sensitive data. Having defined and well established privacy policies is an important part of
good privacy practices. But you also need a way to enforce these policies.
Periodic audits of cases where sensitive data was accessed can get you there, enabled by our
logging and monitoring systems. Auditing data access logs is super important; it helps us ensure that
sensitive data is only accessed by people who are authorized to access it, and that they use it for the
right reasons. It's good practice to apply the principle of least privilege here, by not allowing access to
this type of data by default. You should require anyone that needs access to first make an access request
with a justification for getting the data. But it can't just be vague or generic requests for access, they
should be required to specify what data they need access to. Usually, this type of request would also
have a time limit that should be called out in a request. That way, you can ensure that data access is
only permitted for legitimate business reasons which reduces the likelihood of inappropriate data
access or usage.
By logging each data access request and each actual data access, we can also correlate requests with
usage. Any access that doesn't have a corresponding request should be flagged as a high-priority
potential breach that needs to be investigated as soon as possible. Company policies act as our
guidelines and informational resources on how (and how not) to access and handle data. They're equally
important here. Policies will range from sensitive data handling to public communications.
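The request-to-access correlation described above is straightforward to automate. Here's a minimal sketch, assuming a made-up record layout for approved requests and audit-log entries; a real system would read these from your access-request tool and log pipeline.

```python
from datetime import datetime

def flag_unapproved_access(requests, accesses):
    """Return data accesses that have no approved, unexpired request behind them.

    Record layout is an assumed example:
      request: {"user", "dataset", "start", "end"}  (time-limited approval)
      access:  {"user", "dataset", "time"}          (one audit-log entry)
    """
    flagged = []
    for access in accesses:
        covered = any(
            req["user"] == access["user"]
            and req["dataset"] == access["dataset"]
            and req["start"] <= access["time"] <= req["end"]
            for req in requests
        )
        if not covered:
            # high-priority: data was touched without a matching request
            flagged.append(access)
    return flagged
```

Anything this returns is a candidate breach to investigate, per the policy above.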
Data handling policies should cover the details of how different data is classified. What makes some
data sensitive as opposed to non-sensitive? What's considered confidential data? Once the different
data classes are defined, you should create guidelines around how to handle these different types of
data. If something is considered sensitive or confidential, you’d probably have stipulations that this
data shouldn't be stored on media that's easily lost or stolen, like USB sticks or portable hard drives.
They're also commonly used without any encryption at all. Imagine if one of your employees lost an
unencrypted portable hard drive full of customer information, disaster. That's exactly the situation a
data access policy tries to avoid. It might also make sense to include laptops and mobile devices, like
phones and tablets in the removable media classification, since these devices are easily lost or stolen.
Even though they're more commonly encrypted these days, the loss and theft rate is much higher. You
may not like users storing sensitive data on removable media, but sometimes you're out of luck. There
may be an occasion where that's the only solution to accomplish a task. If this is the case, it would help
to have recommendations on how to handle the situation in a secure way. So, you could offer an
appropriate encryption solution, and provide instructions, and support on its use.

127
Quiz: Risk in the Workplace
Question 1: What are some examples of security goals that you may have for an organization? Check
all that apply.
To protect customer data from unauthorized access and To prevent unauthorized access to
customer credentials
Correct! These are super important goals. Safeguards or systems should be implemented to help
achieve them. It's important to distinguish between a discrete goal and the mechanisms or defense
systems that help you to achieve these goals. Defenses on their own aren't goals, but they allow us to
work towards these goals.

Question 2: Which of these would you consider high-value targets for a potential attacker? Check all
that apply.
Authentication databases and Customer credit card information
Correct! Customer credit card data is really valuable to attackers, since it can be a hot commodity in the
shadier areas of the internet. The same goes for authentication databases, since this could provide
attackers with usernames and passwords that might give them access to accounts on other websites and
services.

Question 3: What's the purpose of a vulnerability scanner?


It detects vulnerabilities on your network and systems.
Correct! A vulnerability scanner will scan and evaluate hosts on your network. It does this by looking
for misconfigurations or vulnerabilities, then compiling a report with what it found.

Question 4: What are some restrictions that should apply to sensitive and confidential data? Check all
that apply.
It can be stored on encrypted media only.
Correct! Sensitive data should be treated with care so that an unauthorized third-party doesn't gain
access. Ensuring this data is encrypted is an effective way to safeguard against unauthorized access.

Question 5: What's a privacy policy designed to guard against?


Misuse or abuse of sensitive data
Correct! Privacy policies are meant to govern the access and use of sensitive data for authorized
parties.

128
Users
User Habits
You've got to involve your users when it comes to security. It's super important and might seem
obvious, but it's usually overlooked. You can build the world's best security systems, but they won't
protect you if your users practise unsafe security habits.
If a user writes their password on a post-it note, sticks it to their laptop, then leaves the laptop unlocked
and unattended at a cafe, you could have a disaster on your hands. But making sure that your users take
reasonable security precautions takes effort and can be really tricky. Shaping your users' habits and
actions starts with having clear and reasonable security policies. But there's more that you can
do to help ensure that your users are diligent about maintaining security.
Let's assume that your employees are acting with good intent, and that leaks and disclosures are
unintentional, and mostly due to improper handling of sensitive data. Leaks and disclosures can be
avoided by understanding what employees need to do to accomplish their jobs.
You also need to make sure that they have the right tools to get their work done without compromising
security.
If an employee needs to share a confidential file with an external partner and it's too big to e-mail, they
may want to upload it to a third-party file sharing website that they have a personal account with. This
is risky business. You should never upload confidential information onto a third-party service that
hasn't been evaluated by your company. If sharing big files with external parties is common behaviour
for your employees, it's best to find a solution that meets the needs of your users and the security
guidelines. By providing a sanctioned and approved mechanism for this file sharing activity, users are
less likely to expose the organization to unnecessary risk.
We covered password security when we discussed password authentication earlier, but there's more to
talk about when it comes to users and passwords. I hate to say it, but generally speaking, users can be
lazy about security stuff. They don't like to memorize long complicated passwords, but this is super
important to keeping your company safe. So how do we resolve this conflict? If we require 20 character
passwords that have to be changed every three months, our users will almost definitely write them
down. This compromises the security that our complex password policy is supposed to provide. It's
important to understand what threats password policies are supposed to protect against, that way, you
can try to find a better balance between security and usability. A long and complex password
requirement is designed to protect against brute force attacks, either against authentication systems or if
a hashed password database is stolen. Since direct brute force attacks against authentication
infrastructure should be easily detected and blocked by intrusion prevention systems, they can be
considered pretty low risk. But the theft of a password database would be a super serious breach. We do
have lots of additional layers of security in place to prevent a critical compromise like that from
happening in the first place. So the two attacks that complex passwords are primarily designed to
protect against are fairly low risk. Now, we can relax the password requirements a bit and not ask for
overly long passwords. We can even adjust the mandatory password rotation time period.
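To put some rough numbers behind that balancing act: the brute-force resistance of a randomly generated password is its length times log2 of the character-set size. A quick sketch (the specific charset sizes are just illustrative assumptions):

```python
import math

def entropy_bits(charset_size, length):
    """Entropy of a *randomly generated* password: length * log2(charset size).
    (Human-chosen passwords have far less entropy than this upper bound.)"""
    return length * math.log2(charset_size)

# An extra few characters buys more than an extra symbol class:
eight_lowercase = entropy_bits(26, 8)    # ~37.6 bits
twelve_mixed = entropy_bits(62, 12)      # ~71.5 bits
```

This is why a somewhat longer passphrase can beat a shorter "complex" password, while staying easy enough to remember that users won't write it down.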

129
Password reuse is another common user behaviour. People don't want a bunch of passwords to
memorize, so lots of users find it easier to use the same password for both their personal email account
and their work account. But this undermines the security of their work password. If an online service is
compromised and the password database is leaked, they're in trouble. The passwords in that database
will find their way into password files used for cracking passwords and brute force attacks. Once a
password isn't a secret, it shouldn't be used anymore. The chances of a bad actor being able to use the
password are too high. That's why it's important to make sure employees use new and unique
passwords, and don't reuse them from other services. It's also important to have a password change
system check against old passwords. This will prevent users from changing their password back to a
previously used potentially compromised password.
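That old-password check can be done without keeping plaintext history: store salted hashes of previous passwords and compare each candidate against them. A rough sketch using PBKDF2 from Python's standard library (the iteration count is an assumed placeholder you'd tune for your hardware):

```python
import hashlib
import os

ITERATIONS = 100_000  # PBKDF2 work factor; illustrative value

def hash_password(password, salt=None):
    """Salted PBKDF2 hash; store (salt, digest), never the plaintext."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def is_reused(candidate, history):
    """True if the candidate matches any (salt, digest) pair in the history."""
    return any(
        hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, ITERATIONS) == digest
        for salt, digest in history
    )
```

On each password change, reject the new password if `is_reused` returns True, then append its hash to the history.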
A much greater risk in the workplace that users should be educated on is credential theft from
phishing emails. Phishing emails are pretty effective. They take advantage of people's inclination to
open emails without looking at them too closely. If an e-mail that seems authentic actually leads to a
fake login page, users can blindly enter their credentials into the fake site and disclose their credentials
to an attacker. While having two factor authentication helps protect against this type of attack, OTP-
based two factor solutions would still provide usable credentials to an attacker. Plus, the attacker still
has a password, which is really not good even in a two factor environment. If someone entered their
password into a phishing site or even suspects they did, it's important to change their password as
soon as possible. If you can, your organization should try to detect these types of password disclosures
using tools like Password Alert, which I've linked to in the next reading. This is a Chrome extension
from Google that can detect when you enter your password into a site that's not a Google page. Being
able to detect when a password is entered into a potentially untrustworthy site, lets an organization
detect potential phishing compromises. But you can also combat phishing attacks with good spam
filtering combined with good user education.
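The detection idea behind a tool like Password Alert can be modelled very roughly: keep a hash of the corporate password and compare it against text typed on any page outside an allow-listed login domain. This is a deliberately simplified illustration (the real extension is more sophisticated), and the domain name is a made-up example:

```python
import hashlib

CORPORATE_LOGIN_DOMAINS = {"accounts.example.com"}  # hypothetical allow-list

def check_typed_password(domain, typed_text, corp_password_digest):
    """Alert if text matching the corporate password is typed on a foreign site."""
    if domain in CORPORATE_LOGIN_DOMAINS:
        return "ok"  # typing it on the real login page is expected
    typed_digest = hashlib.sha256(typed_text.encode()).hexdigest()
    if typed_digest == corp_password_digest:
        return "alert: corporate password entered on untrusted site"
    return "ok"
```

An alert from a check like this is exactly the signal that should trigger an immediate password change.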
You can help influence good user behaviour by offering security training, which we'll discuss in
another video.
Next up, we'll do a quick rundown of the benefits and tradeoffs of third-party security. I'll see you
there.

130
Third-Party Security
Sometimes, you need to rely on third party solutions, or service providers, because you might not be
able to do everything in-house. This is especially true if you work as an IT support specialist in a small
shop. In some cases, you'll have to trust that third party with a lot of potentially sensitive data or
access. So, how do you make sure that you aren't opening yourself up to a ton of unnecessary risk?
When you contract services from a third party, you're trusting them to protect your data and any
credentials involved. If they have sub par security, you're undermining your security defenses by
potentially opening a new avenue of attack. It's important to hire trustworthy and reputable vendors
whenever you can.
You also need to manage the engagements in a controlled way. This involves conducting a vendor risk
review or security assessment. In typical vendor security assessments, you ask vendors to complete a
questionnaire that covers different aspects of their security policies, procedures and defenses. The
questionnaire is designed to determine whether or not they've implemented good security designs in
their organization. For software services or hardware vendors, you might also ask to test the software
or hardware; that way, you can evaluate it for potential security vulnerabilities or concerns before deciding
to contract their services. It's important to understand how well-protected your business partners are,
before deciding to work with them. If they have poor security practices, your organization's security
could be at risk. If you contract services from a company that will be handling data on your behalf, the
security of your data is in the hands of this third party. It's important to understand how safe your data
will be with them. Sometimes, vendors will perform tasks for you, so they'll have access to your
network and systems. In these cases, it's also important to understand how well secured the third party is. A
compromise of their infrastructure could lead to a breach of your systems. While the questionnaire
model is a quick way to assess a third party, it's not ideal. It depends on self reporting of practices,
which is pretty unreliable. Without a way to verify or prove what's stated in the questionnaire, you have
to trust that the company is answering honestly. While you'd hope that a company you're doing
business with would be honest, it's best to verify. If you can, ask for a third party security assessment
report. Some of the information on the questionnaire can be verified, like third party security audit
results and penetration testing reports. In the case of third party software, you might be able to conduct
some basic vulnerability assessments and tests to ensure the product has some reasonable security.
There are lots of companies that will evaluate vendors for you for a price. But, Google recently made
their vendor security assessment questionnaires available for free. I’ve provided a link to these
questionnaires just after this video. This is a great starting point to design your own vendor security
assessment questionnaire, or you can just use these as is.
If the third party service involves the installation of any infrastructure equipment on site, pay close
attention to how they're doing it. You have to make sure this equipment's managed in a way that doesn't
negatively affect overall security. Let's say, the vendor company requires remote access to the
infrastructure device to perform maintenance. If that's the case, then make appropriate adjustments to
firewall rules to restrict this access. That way, you'll make sure that it can't be used as an entry point
into your network. Additional monitoring would also be recommended for this third party device since
it represents a new potential attack surface in your network. If the vendor lets you, evaluate the
hardware in a lab environment first. There, you can run in-depth vulnerability assessments and
penetration testing of the hardware, and make sure there aren't any obvious vulnerabilities in the
product. Report your findings to the vendor and ask that they address any issues you discover.
Google Vendor Security Assessment Questionnaires @ https://vsaq-demo.withgoogle.com/

131
Security Training
The more trained up you and your colleagues are on security, the better. It's impossible to have good
security practices at your company if employees and users haven't received good training and
resources. This will foster a healthy company culture and overall attitude toward security. A working
environment that encourages people to speak up when they feel something isn't right is critical; it
encourages them to do the right thing. To help create this context, it's important for employees to have a
way that they can ask questions when they come up. This could be a mailing list where users can ask
questions about security concerns or to report things they suspect are security risks. Having a
designated communication channel where people can feel comfortable asking questions and getting
clear answers back is super important.
Helping others keep security in mind will help decrease the security burdens you'll have as an I.T.
Support Specialist. It will also make the overall security of the organization better.
Creating a culture that makes security a priority isn't easy. You have to reinforce and reward behaviours
that boost the security of your organization. Think of the small things we do every day when we use
our computers. Just entering your password to log in, or locking your screen when you walk away from
your computer is helpful. Hopefully, you're careful about entering your password on websites and
check the address of the site you're authenticating against. If you aren't, start now, to avoid entering your
password into a fake website. When you're working on your laptop in a public space, like a library or
coffee shop, do you lock your screen when you leave to use the restroom or get another caffeine fix? If
not, you absolutely should! Hopefully, you weren't leaving your computer unattended in public, in
the first place. That's a really bad idea. These are the types of small things that security training should
address.
You also need to justify why these are good behaviors to adopt. In some cases, the company culture can
turn screen locking into a sort of game. When colleagues forget to lock their screen, other team
members can play harmless pranks on them. The last time I forgot to lock my computer, my colleague
changed the default language to Turkish. It reminded me to always lock my screen, because anyone with
access to the machine can impersonate you and get access to any resources you're logged into.
But building a culture that embraces security principles isn't always enough. There are some things that
all employees should know. This is when an occasional, mandatory security training course can help.
This could be a short video or informational presentation followed by a quiz to see if your employees
understood the key concepts covered in the training. The quiz can also increase the chances of
information being retained. Making employees retake the training once a year or so ensures that
everyone's up-to-date on their training. You can also cover new concepts or updated policies when
needed. This type of training should cover the most common attack types and how to avoid falling
victim to them. This includes things like phishing emails and best practices around password use. These
trainings often include scenarios that can help test the user's understanding of a particular topic.
Training courses like these are the last in the line of defences that you and your company need to have
in place to make sure that you're as safe as possible, for as long as possible.

132
Users: Quiz
Question 1: You're interested in using the services of a vendor company. How would you assess their
security capabilities? Check all that apply.
Ask them to provide any penetration testing or security assessment reports and Ask them to
complete a questionnaire
Correct! A security assessment questionnaire allows you to quickly and efficiently get a broad
understanding of what security measures a vendor company has in place. If available, any reports
detailing penetration testing results or security assessments would also be valuable.

Question 2: What's the goal of mandatory IT security training for an organization? Check all that apply.
To educate employees on how to stay secure and To build a culture that prioritizes security
Correct! IT security training for employees should be designed to educate them on how to keep
themselves and the organization secure, and to encourage a culture of security.

133
Incident Handling
Incident Reporting and Analysis
We try our best to protect our systems and networks. But it's pretty likely that some sort of incident will
happen. This could be anything from a full system compromise and data theft, to someone accidentally
leaking a memo.
Regardless of the nature of the incident, proper incident handling is important to understanding exactly
what happened, how it happened, and how to prevent it from happening again.
The very first step of handling an incident is to detect it in the first place. Hopefully, our intrusion
detection systems caught the telltale signs of an ongoing attack, and alerted us to the threat. Incidents
can be brought to your attention in other ways too. An employee may have noticed something
suspicious and reported it to the security team for investigation, or maybe they leaked information that
ended up in the news.
However you found out about the incident, the next step is to analyze it and determine the effects and
scope of damage. Was it a data leak? Or information disclosure? If so, what information got out? How
bad is it? Were systems compromised? What systems? And what level of access did they manage to
get? If it's a malware infection, which systems were infected? Some attacks are really obvious, with very
clear signs of an intrusion like a defaced webpage or unusual processes consuming all resources in the
system. Others may be way more subtle and almost impossible to detect, like a small change to a single
system configuration file. This is why having good monitoring in place is so important along with
understanding your baseline. Once you figure out what normal traffic looks like on your network and
what services you expect to see, outliers will be easier to detect. This is important because every false
lead that the incident response team has to investigate means time and resources are wasted. This has the
potential to allow real intrusions to go undetected and uninvestigated longer. During detection and
scoping, correlating data from different systems can reveal a much bigger picture of what's happened. It
might show how an intruder gained access. For example, you could see a connection event logged by
the firewall from a suspicious IP address. Searching for other events related to this IP address may
reveal login attempts in the authentication logs for a system. This would provide insight into where
the attacker is coming from and what they attempted to do on the network. The authentication logs
would also indicate whether or not they were able to successfully log into an account. If so, this lets
you know what account is compromised.
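That cross-log correlation can be sketched directly. The event shape here (dicts with `src_ip`, `time`, and for auth events `account` and `result`) is an assumed example; in practice these would come out of your SIEM or log pipeline.

```python
def correlate_by_ip(firewall_events, auth_events, suspect_ip):
    """Merge firewall and auth-log events for one source IP into a timeline,
    and report any accounts that IP successfully logged into."""
    timeline = [dict(e, source="firewall") for e in firewall_events
                if e.get("src_ip") == suspect_ip]
    timeline += [dict(e, source="auth") for e in auth_events
                 if e.get("src_ip") == suspect_ip]
    timeline.sort(key=lambda e: e["time"])
    # any successful login from that IP flags a compromised account
    compromised = {e["account"] for e in timeline
                   if e.get("result") == "success"}
    return timeline, compromised
```

The merged timeline shows what the attacker did and in what order; the `compromised` set tells you which accounts need passwords changed and tokens revoked.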

134
Once the scope of the incident is determined, the next step is containment. You need to contain the
breach to prevent further damage. For system compromises and malware infections, this is a pretty
time-sensitive step. You don't want the malware or attacker to use one compromised machine to pivot to
other machines inside your network. This could broaden the incident's scope and cause even more
damage. Containment strategies will vary depending on the nature of the incident. If an account was
compromised, change the password immediately. If the owner is unable to change the password
right away, then lock the account. Also, revoke any long-lived authentication tokens, since the attacker
may have one of those too. If it's a malware infection, can our anti-malware software quarantine or
remove the infection? If not, the infected machine needs to be removed from the network as soon as
possible to prevent lateral movement around the network.
To do this, you can adjust network-based firewall rules to
effectively quarantine the machine. You can also move the
machine to a separate VLAN used for security quarantining
purposes. This would be a VLAN with strict restrictions and
filtering applied to prevent further infection of other systems and
networks. It's important during this phase that efforts are made to
avoid the destruction of any logs or forensic evidence. Attackers
will usually try to cover their tracks by modifying logs and
deleting files, especially when they suspect they've been caught.
They'll take measures to make sure they keep their access to
compromised systems. This could involve installing a backdoor or some kind of remote access
malware. Another thing to watch out for is the attacker creating a new user account that they can use
to authenticate with in the future. With effective logging configurations and systems in place, these
activities should show up in audit logs and be detected during an incident investigation; then actions
can be taken to remove access. I hope I'm not scaring you with all these scenarios, but it's better
to be safe than sorry.
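As a sketch of the firewall-based quarantine idea above, here's a helper that generates iptables commands isolating an infected host while still letting it reach a log collector, so forensic evidence keeps flowing. The collector address and rule placement are illustrative assumptions; a real environment would apply equivalent rules through its firewall management tooling.

```python
def quarantine_commands(host_ip, log_collector_ip="10.0.0.5"):
    """Build iptables commands that cut an infected host off from the network,
    except for its connection to the log collector (placeholder address)."""
    return [
        # keep the evidence channel open, inserted ahead of the drops
        f"iptables -I FORWARD 1 -s {host_ip} -d {log_collector_ip} -j ACCEPT",
        # drop all other traffic to and from the infected machine
        f"iptables -A FORWARD -s {host_ip} -j DROP",
        f"iptables -A FORWARD -d {host_ip} -j DROP",
    ]
```

Ordering matters here: the ACCEPT rule is inserted at position 1 so it's evaluated before the blanket DROP rules, which is the same effect a strict quarantine VLAN's filtering would give you.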
Another part of incident analysis is determining severity, impact, and recoverability of the incident.
Severity includes factors like what and how many systems were compromised and how the breach
affects business functions. An incident that's compromised a bunch of machines in the network would
be more severe than one where a single web server was hacked, for example. You can imagine that the
effort required to fix a large scale compromise would negatively affect the ability to do normal work.
So, the impact of an incident is also an important issue to consider. If the organization only had one
web server and it was compromised, it might be considered a much higher severity breach. It would
probably have a direct externally visible impact on the business.
Data exfiltration is the unauthorized transfer of data from a computer. It's also a very important
concern when a security incident happens. Hackers may try to steal data for a number of reasons. They
may want to steal account information to provide access later. They may target business data to publish
online to cause financial loss or damage to the organization's reputation. In some cases, the attacker
may just want to cause damage and destruction, which might involve deleting or corrupting data.

135
The actions the attacker has taken will affect the recoverability of the incident. Recoverability is how
complicated and time-consuming the recovery effort will be. An incident that can be recovered with a
simple restoration from backup by following documented procedures would be considered easily
recovered from. But, an incident where an attacker deleted large amounts of customer information and
wreaked havoc across lots of critical infrastructure systems would be way more difficult to recover
from. It might not be possible to recover from it at all. In some cases, depending on backup systems
and configurations, some data may be lost forever and can't be restored. Backups won't contain any
changes or new data that were made after the last backup run.

Incident Response and Recovery

Once a threat has been detected and contained, it has to be removed or remediated. When it comes to
malware infection, this means removing the malware from affected systems. But in some cases, this
may not be possible, so the affected systems have to be restored to a known good configuration. This
can be done by rebuilding the machine or restoring from backup. Take care when removing malware
from systems, because some malware is designed to be very persistent, meaning it resists removal.
Before recovery can begin, the incident has to be contained. This might involve shutting down affected
systems to prevent further damage or the spread of an infection.
On the flip side of that, affected systems may just have network access removed to cut off any
communication with the compromised system. Again, the motivating factor here would be to prevent
the spread of any infection or to remove remote access to the system. The containment strategy varies
depending on the nature of the affected system. Let's say a critical piece of networking infrastructure
was compromised. A quick shutdown may not work since it would impact other business operations.
On top of that, removing network access might trigger fail-safes in attack software or malware. Let's
say a piece of malware is designed to periodically check in with a command-and-control server. Severing
network communications with the infected host might cause the malware to trigger a self-destruct
function in an attempt to destroy evidence. Forensic analysis may need to be done to analyze the attack.
This is especially true when it comes to a malware infection.
In the case of forensic analysis, affected machines might be investigated very closely to determine
exactly what the attacker did. This is usually done by taking an image of the disk, essentially making a
virtual copy of the hard drive. This lets the investigator analyze the contents of the disk without the risk
of modifying or altering the original files. If that happened, it would compromise the integrity of any
forensic evidence. Usually, evidence gathering is also part of the incident response process. This
provides evidence to law enforcement if the organization wants to pursue legal action against the
attackers. Forensic evidence is also super useful for sharing details of the attack with the security
community: it makes other security teams aware of new threats and lets them better defend
themselves. It's also very important to get members of your legal team involved in any
incident handling plans. Because an incident can have legal implications for the company, a lawyer
should be available to consult and advise on the legal aspects of the investigation. It's crucial in order to
avoid complications or issues of liability. Members of the public relations team should also get
involved since these incidents can have an impact on a company's reputation.
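Imaging tools typically record a cryptographic hash of the disk image at acquisition time, so any later modification can be detected before it taints the evidence. Here's a minimal sketch of that verification step, using a small stand-in file rather than a real disk image:

```python
import hashlib
import os
import tempfile

def sha256_of_image(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the image in chunks so large images needn't fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for a real image file (e.g. one produced by an imaging tool).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"raw disk image bytes")
    image_path = f.name

acquisition_hash = sha256_of_image(image_path)          # recorded at acquisition
assert sha256_of_image(image_path) == acquisition_hash  # re-verified before analysis
os.unlink(image_path)
```

Real forensic workflows add more than a hash (write blockers, chain-of-custody records), but the verification idea is the same.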

There is another part of the cleanup and recovery phase I should call out. We'll need to use information
from the analysis to prevent any further intrusions or infections. First, we determine the entry point to
figure out how the attacker got in, or what vulnerability the malware exploited. This needs to be done at
the same time as the cleanup. If you remove the malware infection without also addressing the
underlying vulnerability, systems could become reinfected right after you clean them up. As you
learned in the system administration and IT infrastructure services course, postmortems can be a great
way to document incidents. The learnings from postmortems can be used to prevent those incidents
from happening again.
If a critical system has been compromised, remediation can be complicated because of downtime
during remediation and recovery. Logs have to be audited to determine exactly what the attacker did
while they had access to the system. They'll also tell you what data the attacker accessed. Systems must
be scrutinized to ensure no back doors have been installed, or malware planted on the system.
Depending on the severity of the compromise or infection, it might be necessary to rebuild the system
from the ground up. Cleanup would typically involve restoring from a backup point to a known good
configuration. Infected or corrupted system files could be restored from known good copies.
Sometimes, cleanup can be very simple and quick. I hope that's what you find more often than not.
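Auditing logs for a known-compromised account can be as simple as filtering for that account's entries. The log format and account name below are invented for illustration; real audit logs (syslog, auth.log, application logs) vary widely:

```python
# Invented log lines standing in for real audit log entries.
log_lines = [
    "2024-05-01T14:02:11 LOGIN user=alice src=10.0.0.5",
    "2024-05-01T14:05:43 LOGIN user=svc-backup src=203.0.113.9",
    "2024-05-01T14:06:02 READ file=/db/customers.csv user=svc-backup",
]

compromised_account = "svc-backup"

# Pull every action the compromised account took, to reconstruct what
# the attacker did and which data they accessed.
attacker_activity = [line for line in log_lines
                     if f"user={compromised_account}" in line]
for line in attacker_activity:
    print(line)
```

At scale you'd do this in a log aggregation or SIEM system rather than a script, but the reconstruction logic is the same: isolate the compromised principal's activity and build a timeline from it.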
If a website was defaced, the attacker may have simply uploaded their defaced HTML file and pointed
the web server at the new file. A configuration file change and deletion of the attacker's HTML file
would undo those changes. Even so, efforts need to be made to determine how the attacker got access.
That vulnerability should be closed to prevent any future attacks.
When all traces of the attack have been discovered and removed, and the known vulnerabilities have
been closed, you can move on to the last step. That's when systems need to be thoroughly tested to
make sure proper functionality has been restored. Usually, affected systems would also remain under
close watch, sometimes with additional detailed monitoring and logging enabled. This is to watch for
any additional signs of an intrusion in case something was missed during the cleanup.
It's also possible that the attacker will attempt to attack the same target again. There's a very high
chance that they use the same or similar attack methodology on other targets in your network. It's
important to incorporate the lessons you've learned from any incident into your overall security
defences.
Update firewall rules and ACLs if an exposure was discovered in the course of the investigation. Create
new definitions and rules for intrusion detection systems so they can watch for signs of the same
attack in the future.
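An intrusion detection rule is, at its core, a pattern matched against traffic or logs. Real IDS products such as Snort or Suricata use their own rule languages; this is just a toy regex sketch with made-up signatures derived from a hypothetical incident:

```python
import re

# Made-up signatures for the attack behavior observed during the incident.
signatures = {
    "defacement-upload": re.compile(r"PUT /\S*\.html"),
    "c2-beacon": re.compile(r"POST /beacon\?id=\w+"),
}

def match_signatures(request_line: str) -> list[str]:
    """Return the names of every signature the request matches."""
    return [name for name, pattern in signatures.items()
            if pattern.search(request_line)]

print(match_signatures("PUT /index.html HTTP/1.1"))  # ['defacement-upload']
print(match_signatures("GET /home HTTP/1.1"))        # []
```

The important practice is the feedback loop: each incident's indicators become new detection rules, so the same attack can't succeed silently a second time.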
Stay vigilant and be prepared to protect your systems from attacks. Remember that, at some point, some
sort of security breach will happen. Just stay calm and execute your incident response plan.

Quiz: Incident Handling


Question 1: What's the first step in handling an incident? Answer: Detect the incident
Correct! Before you can take any action, you have to be aware that an incident occurred in the first place.

Question 2: How do you protect against a similar incident occurring again in the future? Answer: Conduct a post-
incident analysis.
Correct! By analyzing the incident and figuring out the details of how an attacker compromised a network or system,
you can learn what vulnerabilities were exploited and take steps to close them.

Chapter 6: Test and Answers
Question 1: What's the first step in performing a security risk assessment?
Threat modeling
Correct! Threat modeling is the process of identifying likely threats to your systems or network, and
assigning them priorities. This is the first step to assessing your security risks.

Question 2: What tool can you use to discover vulnerabilities or dangerous misconfigurations on your
systems and network?
Vulnerability scanners
Correct! A vulnerability scanner is a tool that will scan a network and systems looking for
vulnerabilities or misconfigurations that represent a security risk.

Question 3: What characteristics are used to assess the severity of found vulnerabilities? Check all that
apply.
Remotely exploitable or not and Chance of exploitation and Type of access gained
Correct! Things to consider when evaluating a vulnerability are how likely it is to be exploited, the type
of access an attacker could get, and whether or not the vulnerability is exploitable remotely.

Question 4: Which of the following should be incorporated into a reasonably secure password policy
that balances security with usability? Check all that apply.
A password expiration time of 6-12 months and A complexity requirement of special characters
and numbers and A length of at least 8 characters
Correct! A good balance of a strong but useable password is at least 8 characters, includes a mixture of
punctuation characters, and rotates periodically, but not too frequently.
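The length and complexity rules from this answer can be expressed as a simple check; expiration would be enforced by the authentication system itself and isn't shown. This is an illustrative sketch, not a complete policy engine:

```python
import string

def meets_policy(password: str) -> bool:
    """At least 8 characters, with at least one number and one
    punctuation character, per the policy described above."""
    return (len(password) >= 8
            and any(c in string.digits for c in password)
            and any(c in string.punctuation for c in password))

print(meets_policy("hunter22"))      # False: no punctuation character
print(meets_policy("Tr0ub4dor&3"))   # True
```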

Question 5: A strong password is a good step towards good security, but what else is recommended to
secure authentication?
2-factor authentication
Correct! Two-factor authentication, combined with a strong password, significantly increases the
security of your authentication systems.

Question 6: What risk are you exposing your organization to when you contract services from a third
party?
Trusting the third party's security
Correct! You're trusting this third party to have reasonable security in place to protect the data or access
you're entrusting them with.

Question 7: What's a quick and effective way of evaluating a third party's security?
A security assessment questionnaire
Correct! A security assessment questionnaire would help you understand how well-defended a third
party is, before deciding to do business with them.

Question 8: What are some behaviors you should encourage in order to build a security-conscious
culture? Check all that apply.
Checking website URLs when authenticating and Locking your screen and Asking security-
related questions
Correct! Encouraging people to lock their screens when they walk away from their computer is a very
important behavior to reinforce. The same goes for checking the address of the website they're
authenticating to, to make sure it's not a fake phishing site. It's also important that people are
comfortable asking security questions when they're unsure.

Question 9: What are the first two steps of incident handling and response? Check all that apply.
Incident detection and Incident containment
Correct! The first step is incident detection, because you need to be aware of an ongoing incident
before you can react to it. Once you've detected the incident, you can begin containment to minimize
the impact of the incident.

Question 10: Beyond restoring normal operations and data, what else should be done during the
recovery phase?
Correct the underlying root cause
Correct! Ideally, you'd figure out what caused the incident in the first place, and make changes to avoid
a similar incident from occurring in the future.

Question 11: How can events be reconstructed after an incident?
By reviewing and analyzing logs
Correct! By auditing logs, it should be possible to recreate exactly what happened before and during an
incident. This would help you understand what was done, along with the overall scope of the incident.

