CISSP End Game
Please do not use this document as a primary source of preparation. It can be helpful to give a
sharp and final edge to your CISSP preparation.
My experiences during the CISSP journey, various websites, tech blogs, social media discussions, and
notes from multiple CISSP books were used to create this document. It was created with the intent
of giving your preparation a quick refresher in one place and helping professionals aspiring to the
CISSP. I have not created it to violate any author's copyrights. Please inform me at my email
address if anyone finds a copyright violation; I will review and remove the material.
BEST OF LUCK
S G -CISSP
sonu2hcl@hotmail.com
Eligibility Criteria
Don’t have enough work experience yet? There are two ways you can overcome this
obstacle.
Domain 1: Security and Risk Management
Finding & setting the right combination of Confidentiality, Integrity and Availability is a balancing
act.
This is really the tough job of IT Security – finding the RIGHT combination for your organization.
Too much Confidentiality, and Availability can suffer.
Too much Integrity, and Availability can suffer.
Too much Availability, and both Confidentiality and Integrity can suffer.
The opposite of the CIA Triad is DAD (Disclosure, Alteration and Destruction).
Disclosure – Someone not authorized gets access to your information.
Alteration – Your data has been changed.
Destruction – Your data or systems have been destroyed or rendered inaccessible.
Security Governance Principles – goal is to maintain business processes. IT security goals support
the business goals (compliance, guidelines, etc). You shouldn’t be looking for the best technical
answer, but the answer that best supports the business.
Frameworks help avoid building IT security in a vacuum or without considering important concepts
Can be regulations, non-regulation, industry-specific, national, international
COBIT is an example. Set of best practices from ISACA. Five key principles.
Focuses on WHAT you’re trying to achieve. Also serves as a guideline for auditors:
Principle 1 – Meeting Stakeholder needs
Principle 2 – Covering the enterprise end-to-end
Principle 3 – Applying a single, integrated framework
Principle 4 – Enabling a holistic approach
Principle 5 – Separating governance from management
ITIL is the de facto standard for IT service management. How you’re trying to achieve
something
ISO 27000 series. Started as a British standard.
27005 refers to risk management
27799 refers to personal health info (PHI)
OCTAVE: Operationally Critical Threat, Assets and Vulnerability Evaluation was developed at
Carnegie Mellon University’s CERT Coordination Center. This suite of tools, methods and
techniques provides two alternative models to the original. That one was developed for organizations
with at least 300 workers. OCTAVE-S is aimed at helping companies that don’t have much in the
way of security and risk-management resources. OCTAVE-Allegro was created with a more
streamlined approach and supports self-directed risk assessments.
FAIR: Factor Analysis of Information Risk was developed to understand, analyze and measure
information risk. It also has the support of the former CISO of Nationwide Mutual Insurance, Jack
Jones. This framework has received a lot of attention because it allows organizations to carry out
risk assessments on any asset or object, all with a unified language. Your IT people, those on the IRM
team and your business line staff will all be able to work with one another while using a consistent
language.
TARA: The Threat Agent Risk Assessment was created back in 2010 by Intel. It allows companies to
manage their risk by considering a large number of potential information security attacks and then
distilling them down to the likeliest threats. A predictive framework will then list these threats in terms
of priority.
ITIL: Information Technology Infrastructure Library provides best practices in IT Service
Management (ITSM). It was created with five different “Service Management Practices” to assist you
in managing your IT assets with an eye on preventing unauthorized practices and events.
That does not mean you are not liable as well; you may be, and that depends on Due Care
Due Diligence vs Due Care
Due Diligence – the thought, planning or research put into the security architecture of
your organization. This would also include developing best practices and common
protection mechanisms, and researching new systems before implementing them
Due Care – is an action. It follows the Prudent Person Rule, which asks
“what would a prudent person do in this situation?” This includes patching systems,
fixing issues, reporting, etc. in a timely fashion.
Negligence is the opposite of due care. If you did not perform due care to
ensure a control isn’t compromised, you are liable
Auditing is a form of due care
Liability – Senior leadership is always ultimately liable/responsible (on the exam, choose
authorized personnel if senior leadership is not available as an option).
Important Laws/Regulations:
Health Insurance Portability and Accountability Act – States that covered entities should
disclose breaches in security pertaining to personal information. This act applies to health
insurers, health providers, and claims and processing agencies. It seeks to guard Protected
Health Info (PHI). Applies to “covered entities” (healthcare providers, health plans,
clearinghouses).
HITECH Act of 2009 makes HIPAA’s provisions apply to business associates of
covered entities (lawyers, accountants, etc)
Any PII associated with healthcare is Personal Health Info (PHI)
Gramm-Leach-Bliley Financial Modernization Act – This act covers financial agencies and
aims to increase protection of customer’s PII.
Patriot Act of 2001 – Provides a wider coverage for wiretapping and allows search and
seizure without immediate disclosure.
Electronic Communications Privacy Act (ECPA) – Enacted in 1986, it aims to extend
government restrictions when it comes to wiretapping phone calls to cover transmissions of
electronic data.
Sarbanes-Oxley Act (SOX) – Enacted in 2002 in response to the Enron scandal, this helps
ensure that all publicly held companies have their own procedures and internal
controls necessary for financial reporting. The act aims to minimize, if not eliminate,
corporate fraud.
There are six types of computer crimes (not necessarily defined by the CFAA; for the exam, know the
motives, which are obvious from the name of each type):
1. Military/intelligence
2. Business
3. Financial
4. Terrorist
5. Grudge
6. Thrill
Electronic Communications Privacy Act (ECPA) – protected against warrantless wiretapping
Patriot Act of 2001 expanded law enforcement electronic monitoring capabilities.
Communications Assistance to Law Enforcement Act (CALEA) requires all communications
carriers to make wiretaps possible for law enforcement with an appropriate court order.
PCI DSS – not a law. Self-regulation by major vendors. Mandates controls to protect
cardholder data.
Computer Security Act – required US federal agencies to ID computers that contain sensitive
info. They must then develop security policies for each system and conduct training for
individuals involved with those systems.
California Senate Bill 1386 – one of the first US state-level laws related to breach
notification. Required organizations experiencing data breaches to inform customers
Understand legal and regulatory issues that pertain to information security in a global
context
EU Data Protection Directive – very pro privacy. Organizations must notify individuals regarding
how their data is gathered and used. Must allow an opt-out option for sharing with 3rd parties. Opt-
In is required for “most sensitive” data. Transmission of data outside of EU is not allowed unless
recipients have equal privacy protections. US does NOT meet this standard. Safe Harbor is an
optional agreement between the organization and the EU, where the organization must voluntarily
consent to data privacy principles that are consistent with this.
EU Court of Justice overturned the Safe Harbor agreement in 2015. In 2016 the EU
Commission and US Department of Commerce established the EU-US Privacy Shield, a new
legal framework for transatlantic data transmission which replaced Safe Harbor.
General Data Protection Regulation – went into effect in 2018 which replaced the above
restrictions.
Applies to all organizations worldwide that offer services to EU customers
Extends concept of PII to photos/videos/social media posts, financial transactions, location
data, browsing history, login credentials, and device identifiers
Data collection, retention and sharing must be minimized exclusively for the intended
purpose.
Data breach requires notification within 72 hours of discovery
Organizations that deal with personal data on a large scale must appoint a Data Protection
Officer to their boards
Focus of controls is on encryption and pseudonymization, the process of replacing
some data elements with pseudonyms, which makes it more difficult to identify individuals.
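Pseudonymization can be sketched as replacing each data element with a keyed-hash pseudonym. The key, field names, and truncation length below are assumptions for illustration, not anything GDPR prescribes:

```python
import hmac
import hashlib

# The secret key must be stored separately from the pseudonymized data;
# whoever holds it could re-link pseudonyms to known inputs.
SECRET_KEY = b"keep-this-separate-from-the-data"  # illustrative value

def pseudonymize(value: str) -> str:
    """Replace a data element with a stable, hard-to-reverse pseudonym."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return digest[:16]  # truncated only for readability

record = {"name": "Alice Example", "city": "Dublin"}
pseudonymized = {k: pseudonymize(v) for k, v in record.items()}

# The same input always maps to the same pseudonym, so records can still be
# linked for analysis without exposing the original values.
assert pseudonymize("Alice Example") == pseudonymized["name"]
```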
Wassenaar Arrangement – export/import controls for conventional arms and dual-use goods and
technologies. Cryptography is considered “dual use.” Countries like Iran, Iraq, China
and Russia want to spy on their citizens, and so they do not import overly strong cryptography
technologies. The US has no such import restrictions. Companies like Google have to make
country-specific technology because of this.
Digital Millennium Copyright Act of 1998 - prohibits the circumvention of copy protection
mechanisms placed in digital media, and relieves ISPs of liability for the activities of their users
Intellectual Property Protections:
Trademark – names, slogans and logos that identify a company/product. Cannot be
confusingly similar to any existing trademarks. They are good for 10 years, but can be renewed
indefinitely.
Patent – Has to be registered with the US Patent office, which is public information. Many
companies avoid this, such as 3M. A patent is good for 20 years, and is considered the
shortest of all intellectual property protections.
Copyright – Creative content such as songs, books and software code. It requires disclosure
of the product and expires 70 years after the death of the author.
Licenses – End-User License Agreement (EULA) is a good example
Trade Secrets – KFC’s special seasoning, Coca-Cola’s formula, etc. Protected by NDAs
(non-disclosure agreements) and NCAs (non-compete agreements). These are the best
options for organizations that do not want to disclose any information about their products.
Trade secrets have no expiration.
Intellectual Property Attacks:
Software piracy
Copyright infringement
Corporate espionage – when competitors attempt to ex-filtrate information from a company,
often using an employee.
Economic Espionage Act of 1996 – provides penalties for individuals found guilty of the theft of
trade secrets. Harsher penalties if the offender is attempting to aid foreign governments.
Typosquatting – registering domain names that are common misspellings of legitimate ones.
Digital Rights Management is any solution that allows content owners to enforce any of the above
restrictions on music, movies, etc
Understanding SOC 1, SOC 2, and SOC 3 reports:
SOC 1 reports address a company's internal control over financial reporting. As mandated by SSAE 18,
SOC 1 is the audit of a third-party vendor’s accounting and financial controls – a measure of how well
it keeps its books. There are two types of SOC 1 reports – SOC 1 Type I and SOC 1 Type II.
Type I pertains to an audit performed at a particular point in time, that is, a specific single date,
while a Type II report is more rigorous and is based on the testing of controls over a duration of
time. Type II reports are judged as more reliable because they measure the effectiveness
of controls over a more extended period of time.
SOC 2 is the most sought-after report in this domain and a must if you are dealing with an IT vendor.
It is quite common for people to believe that SOC 2 is an upgrade over SOC 1, which is
entirely untrue. SOC 2 deals with the examination of the controls of a service organization over one
or more of the following Trust Service Criteria (TSC):
Privacy
Confidentiality
Processing Integrity
Availability
Security
SOC 2 is built around a consistent set of parameters for the IT services that a
third party provides to you. If you need a measure of how private, confidential, available, and
secure a vendor’s IT services are, ask for an independently audited
and assessed SOC 2 report. Like SOC 1, SOC 2 has two types – SOC 2 Type I and SOC 2 Type II.
Type I confirms that the controls exist, while Type II affirms not just that the controls are in place,
but that they actually work. SOC 2 Type II is therefore a better representation of how well the
vendor is protecting and managing your data. Note that the SOC 2 Type II report must be
audited by an independent CPA.
SOC 3 is not an upgrade over the SOC 2 report. It may share some of the components of
SOC 2, but it serves a different purpose: SOC 3 is a summarized version of the SOC 2 Type II
report. It is not as detailed as a SOC 2 Type I or SOC 2 Type II report; instead, a SOC
3 report is designed to be a less technical and less detailed audit report with a seal of approval that
can be posted on the vendor’s website.
Because it is less detailed and less technical, it may not contain the level of audit detail
your business requires.
A business should request and analyze the SOC reports of its prospective vendors. They are
invaluable for confirming that adequate controls are in place and that those controls
actually work effectively.
SOC reports – whether SOC 1, SOC 2, or SOC 3 – are also very helpful in ensuring that
your compliance with regulatory expectations is up to the mark.
Professional Ethics – The CISSP has a code of ethics that is to be followed by certified information
security professionals. Those who intentionally or knowingly violate the code of ethics will face action
from a peer review panel, which could result in the revocation and nullification of the certification.
The safety and welfare of society and the common good, duty to our principals, and to each
other, requires that we adhere, and be seen to adhere, to the highest ethical standards of
behavior.
Therefore, strict adherence to this Code is a condition of certification.
1. Protect society, the common good, necessary public trust and confidence, and the
infrastructure.
2. Act honorably, honestly, justly, responsibly, and legally.
3. Provide diligent and competent service to principals.
4. Advance and protect the profession.
Exam Tip: Be familiar with the order of the canons, which are applied in order: the 1st canon has
the highest priority and the last canon the lowest whenever an exam question presents a conflict.
In any ethical dilemma you must follow this order – protecting society is more important than
protecting the profession, for example.
Develop, document, and implement security policy, standards, procedures and guidelines
Security Policies - These are the highest level and are mandatory. Everything about an
organization’s security posture will be based around this. Specifies auditing/compliance
requirements and acceptable risk levels. Used as proof that senior management has exercised due
care. Mandatory that it is followed.
Wouldn’t use terms like Linux or Windows. That’s too low level. It would refer to these things
as “systems”
Driven by business objectives and convey the amount of risk senior management is willing to
accept.
Easily accessible and understood by the intended audience.
Should be reviewed yearly or after major business changes, and dated with version number
Very non-technical
Standards – mandatory actions and rules. Describes specific technologies.
Used to indicate expected user behavior. For example, a consistent company email
signature.
Might specify what hardware and software solutions are available and supported.
Compulsory and must be enforced to be effective. (This also applies to policies!)
Baselines – represents a minimum level of security. Often refer to industry standards like TCSEC
and NIST
Somewhat discretionary in the sense that security can be greater than your baseline, but
doesn’t need to be.
Guidelines – Simply guide the implementation of the security policy and standards. These are
NOT MANDATORY in nature.
Are more general vs. Specific rules.
Provide flexibility for unforeseen circumstances.
Should NOT be confused with formal policy statements.
Procedures – very detailed step-by-step instructions. These are mandatory. Specific to system and
software. Must be updated as hardware and software evolves.
Often act as the “cookbook” for staff to consult to accomplish a repeatable process.
Detailed enough and yet not too difficult that only a small group (or a single person) will
understand.
Installing operating systems, performing a system backup, granting access rights to a system
and setting up new user accounts are all example of procedures
Organizational plans:
Strategic Plan – Long term (5 years). Goals and visions for the future. Risk assessments fall
under this
Tactical Plan – useful for about a year. Projects, hiring, budget etc.
Operational Plan – short term (month or quarter). Highly detailed, more step-by step.
Steps for Business Impact Analysis:
1. Identify the company’s critical business functions.
2. Identify the resources these functions depend upon.
3. Select individuals to interview for data gathering.
4. Create data-gathering techniques (surveys, questionnaires, qualitative and quantitative
approaches).
5. Calculate how long these functions can survive without these resources.
6. Identify vulnerabilities and threats to these functions.
7. Calculate the risk for each different business function.
8. Document findings and prepare BIA report.
Various Plans of BCP:
Qualitative Analysis: does not attempt to assign numeric values at all, but rather is scenario
oriented. Uses more guesswork and approximate values or measurements, such as HIGH/MED/LOW.
Based more on softer metrics such as opinions, rather than numbers and historical
data
Qualitative techniques and methods include brainstorming, focus groups, checklists,
Delphi Technique, etc.
Delphi Technique – The Delphi Technique is a method used to estimate the likelihood and outcome
of future events. A group of experts exchange views, and each independently gives estimates and
assumptions to a facilitator who reviews the data and issues a summary report. The group members
discuss and review the summary report, and give updated forecasts to the facilitator, who again
reviews the material and issues a second report. This process continues until all participants reach a
consensus. The experts at each round have a full record of what forecasts other experts have made,
but they do not know who made which forecast. Anonymity allows the experts to express their
opinions freely, encourages openness, and avoids the embarrassment of admitting errors when
revising earlier forecasts. Surveys are anonymous; the issuer cannot see the identity of who made
which statements.
Safeguard Evaluation:
ALE (before safeguard) – ALE (after implementing safeguard) – annual cost of
safeguard = value to the company
The value should not be negative. If it is, the cost of protecting an asset is more than
the asset itself.
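The safeguard formula above can be sketched in a few lines, using the standard quantitative-risk relationship ALE = SLE × ARO. The dollar figures below are illustrative assumptions, not values from the text:

```python
def ale(sle: float, aro: float) -> float:
    """Annualized Loss Expectancy = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

def safeguard_value(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """ALE (before) - ALE (after) - annual cost of safeguard = value to the company.
    A negative result means protecting the asset costs more than the asset itself."""
    return ale_before - ale_after - annual_cost

# Illustrative numbers only:
before = ale(sle=100_000, aro=0.5)   # 50,000 per year without the safeguard
after = ale(sle=100_000, aro=0.1)    # 10,000 per year with it
value = safeguard_value(before, after, annual_cost=25_000)
print(value)  # 15000.0 -> the safeguard is worth implementing
```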
Threat Modeling – the process by which potential threats are identified, categorized and analyzed.
Includes frameworks like STRIDE and DREAD
Microsoft STRIDE Threat Categorization: Spoofing, Tampering, Repudiation, Information
Disclosure, Denial of Service, Elevation of privilege
DREAD
Damage – how bad is the attack?
Reproducibility – how easy to reproduce attack?
Exploitability – how hard to launch attack?
Affected users – how many people would be affected?
Discoverability – how easy to find the threat?
Each risk factor for a given threat can be given a score (for example, 1 to 10). The sum of all
the factors divided by the number of factors represents the overall level of risk for the threat. A
higher score signifies a higher level of risk and would typically be given a higher priority when
determining which threats should be focused on first.
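The scoring rule described above (sum of the five factors divided by the number of factors) can be sketched as follows; the two example threats and their ratings are hypothetical:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average of the five DREAD factors, each rated on a 1-10 scale."""
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be rated 1-10")
    return sum(factors) / len(factors)

# Hypothetical threats -- a higher score means a higher priority.
sql_injection = dread_score(8, 9, 7, 9, 8)    # 8.2
banner_leak = dread_score(2, 10, 10, 1, 10)   # 6.6
assert sql_injection > banner_leak  # fix the injection first
```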
Types of attackers: Hacker - Black, White, and Gray
Outsiders – make up 48-62% of attackers, meaning 38-52% of attackers are internal, either malicious
or unintentional. Insider threats are always considered greater threats than outside threats.
Script Kiddies – little or no coding knowledge, but have knowledge or access to hacking tools.
Just as dangerous as skilled hackers.
Hacktivists – socially motivated, political motivation. Include organizations like Anonymous
who are known for DDoS-ing Visa, Mastercard and PayPal to protest the arrest of Julian
Assange. Often aim to assure free speech, etc
State-Sponsored Hackers – often see attacks occurring during normal work hours. Essentially
a day job. 120 countries have been using internet as a weapon. Include attacks like Sony (N.
Korea), Stuxnet (US/Israel), etc
Bots/Botnets
Network Attacks
Supply Chain – this refers to the flow of assets or data. Audits, surveys, reviews, and testing can be
done in the supply chain, but the CBK says that it’s also acceptable to simply view the resulting
reports of those reviews for entities within the supply chain, and recommend enhanced or reduced
security to those entities.
For example, if your business contracts with IBM for custom computer parts, and there is an
intermediary company that delivers those parts especially for you, they may be subject to certain
types of audits or reviews. By reviewing their findings, you can discuss additional or more effective
approaches to security.
CVE is the SCAP component that provides a naming system that describes security vulnerabilities.
SCAP is a National Institute of Standards and Technology (NIST) standard protocol that provides
common sets of criteria for defining and assessing security vulnerabilities. Thus SCAP can be used
to ensure that correct information flows between organizations and between automated processes.
SCAP version 1.0 includes all of the following components:
CVE, which defines the naming system for describing vulnerabilities
CVSS, which defines a standard scoring system for vulnerability severity
CCE, which defines a naming system for system configuration problems
CPE, which defines an operating system, application, and device naming system
XCCDF, which defines a language format for security checklists
OVAL, which defines a language format for security testing procedures
Applications that conform to SCAP can therefore scan and score systems according to a standard
set of criteria. In addition, these applications can communicate with one another in a standard
fashion.
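As a small illustration of the CVE naming component, here is a sketch that checks whether a string is a well-formed CVE identifier. It assumes the published CVE syntax of a four-digit year followed by a sequence number of four or more digits (the four-digit cap on sequence numbers was lifted in 2014):

```python
import re

# CVE-<year>-<sequence>, where the sequence number has at least four digits.
CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(identifier: str) -> bool:
    """Return True if the string follows the CVE naming scheme."""
    return bool(CVE_RE.match(identifier))

assert is_valid_cve_id("CVE-2014-0160")    # Heartbleed
assert is_valid_cve_id("CVE-2021-44228")   # Log4Shell (5-digit sequence)
assert not is_valid_cve_id("CVE-21-1234")  # malformed year
```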
Domain 2: Asset Security
Identify and classify information and assets
Classifying Data
Labels – objects have labels assigned to them. Examples include Top Secret, Secret,
Unclassified etc, but are often much more granular. Sensitive data should be marked with a
label.
Clearance - assigned to subjects. Determination of a subject’s trustworthiness
Declassification is required once data no longer warrants the protection of its sensitivity level
Military Classifications:
Top Secret – Classified. Grave damage to national security
Secret – Classified. Serious damage to national security
Confidential – Classified. Damage to national security.
Sensitive (but unclassified)
Unclassified
Private Sector Classifications
Confidential
Private – confidential regarding PII
Sensitive
Public
Creating Data Classification Procedures
Protect Privacy
Data remanence is the residual physical representation of data that has been in some way erased.
After storage media is erased there may be some physical characteristics that allow data to be
reconstructed. Data remanence plays a major role when storage media is erased for the purposes of
reuse or release. It also refers to data remaining on storage after imperfect attempts to erase it
Happens on magnetic drives, flash drives and SSDs
RAM is volatile and is lost if device is turned off. ROM is not.
Cold Boot Attacks freeze RAM so its contents persist for 30 minutes or so after powering down
Flash memory is written by sectors, not byte-by-byte. Thumb drives and SSDs are examples.
Blocks are virtual, compared to HDD’s physical blocks. Bad blocks are silently replaced by SSD
controller, and empty blocks are erased by a “garbage collection” process. Erase commands on an
SSD can’t be verified for successful completion, and makes no attempt to clean bad blocks. The
only way to verify there are no data remnants is to destroy the drive. Alternatively, you can encrypt
the drive before it is ever used.
Read-Only Memory is nonvolatile and can’t be written to by end user
Users can write to PROM only once
EPROM/UVEPROM can be erased through the use of ultraviolet light
EEPROM chips may be erased with electrical current
Data Destruction Methods:
Erasing – Overwriting the data. This is not a preferred method.
Sanitizing – the removal of sensitive data from storage media. Useful for when you are selling the hardware
Purging – most effective method if you can’t or won’t physically destroy drive.
Destroying – the most effective method of data destruction
Degaussing – Uses magnetic fields to wipe data. Only works against mechanical drives,
not SSDs
Crypto-Shredding – Crypto-shredding is the deliberate destruction of all encryption keys for
the data; effectively destroying the data until the encryption protocol used is (theoretically,
some day) broken or capable of being brute-forced. This is sufficient for nearly every use
case in a private enterprise, but shouldn’t be considered acceptable for highly sensitive
government data. Encryption tools must have this as a specific feature to absolutely ensure
that the keys are unrecoverable. Crypto-shredding is an effective technique for the cloud
since it ensures that any data in archival storage that’s outside your physical control is also
destroyed once you make the keys unavailable. If all data is encrypted with a single key, to
crypto-shred you’ll need to rotate the key for active storage, then shred the “old” key, which
will render archived data inaccessible.
We don’t mean to oversimplify this option – if your cloud provider can’t rotate your keys or
ensure key deletion, crypto-shredding isn’t realistic. If you manage your own keys, it should
be an important part of your strategy. Done correctly, it is the most effective of these options.
It is the only reliable option for erasing data from cloud provider space, and on the exam it is the
best answer whenever it is offered as an option.
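The idea of "destroy the keys, destroy the data" can be sketched as below. The XOR one-time pad is a toy stand-in used only so the example is self-contained – a real implementation would use a vetted cipher (e.g. AES-GCM) from a maintained crypto library – and the key ID is made up:

```python
import secrets

class KeyVault:
    """Toy key store: data exists only as ciphertext; shredding the key destroys it."""
    def __init__(self):
        self._keys = {}

    def encrypt(self, key_id: str, plaintext: bytes) -> bytes:
        key = secrets.token_bytes(len(plaintext))  # one-time pad, same length as data
        self._keys[key_id] = key
        return bytes(p ^ k for p, k in zip(plaintext, key))

    def decrypt(self, key_id: str, ciphertext: bytes) -> bytes:
        key = self._keys[key_id]  # raises KeyError once the key is shredded
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    def shred(self, key_id: str):
        del self._keys[key_id]  # without the key, the ciphertext is unrecoverable

vault = KeyVault()
blob = vault.encrypt("archive-2023", b"sensitive record")
assert vault.decrypt("archive-2023", blob) == b"sensitive record"
vault.shred("archive-2023")  # all copies of the ciphertext, even on backups, are now useless
```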
Drive Encryption – protects data at rest, even after a physical breach. Recommended for mobile
devices/media. Whole disk encryption is preferred over file encryption. Breach notification laws
exclude lost encrypted data. Backup data should be stored offsite. Informal processes, such as
storing media at an employee’s house, should be avoided.
Controls – countermeasures put into place to mitigate risk. Combined to produce defense in depth.
Several categories
Administrative – policies, procedures, regulations
Technical – software, hardware or firmware
Physical – locks, security guards, etc.
Physical security affects all other aspects of an organization
Alarm types include:
Deterrent
Repellent
Notification
There are no preventive alarms because they always trigger in response to something after
the fact
Centralized Alarms alert remote monitoring stations
Locks are the most common and inexpensive type of physical controls
Lighting is the most common form of perimeter protection
The Kernel is the heart of the operating system, which usually runs in Ring 0. It provides an
interface between hardware and the rest of the OS.
Models:
Ring Model – Rings are arranged in a hierarchy from most privileged (most trusted, usually
numbered zero) to least privileged (least trusted, usually with the highest ring number). On
most operating systems, Ring 0 is the level with the most privileges and interacts most
directly with the physical hardware such as the CPU and memory. The ring model separates users
(untrusted) from the kernel (trusted).
Hypervisor Mode – Closely related to virtual memory are virtual machines, such as VMware,
VirtualBox, and VirtualPC. VMware and VirtualPC are the two leading contenders in this
category. A virtual machine enables the user to run a second OS within a virtual host. For
example, a virtual machine will let you run another Windows OS, Linux x86, or any other OS
that runs on an x86 processor and supports standard BIOS booting. Virtual systems make use
of a hypervisor to manage the virtualized hardware resources to the guest operating system.
A Type 1 hypervisor runs directly on the hardware with VM resources provided by the
hypervisor, whereas a Type 2 hypervisor runs on a host operating system above the
hardware. Virtual machines are a huge trend and can be used for development and system
administration, production, and to reduce the number of physical devices needed. The
hypervisor is also being used to design virtual switches, routers, and firewalls. Also called
Ring -1 (“minus 1”). Allows virtual guests to operate in ring 0.
Open Systems are hardware and software that use public standards.
A Closed System uses proprietary standards.
The CPU is the heart of the computer system. The CPU consists of the following:
An arithmetic logic unit (ALU) that performs arithmetic and logical operations
A control unit that extracts instructions from memory and decodes and executes the
requested instructions
Memory, used to hold instructions and data to be processed
Two basic designs of CPUs are manufactured for modern computer systems:
1. Reduced Instruction Set Computing (RISC)—Uses simple instructions that require a reduced
number of clock cycles.
2. Complex Instruction Set Computing (CISC)—Performs multiple operations for a single
instruction.
CPUs fetch machine language instructions and execute them. Each of the four steps below takes one
clock cycle to complete.
1. Fetch Instructions
2. Decode instructions
3. Execute instructions
4. Write (save) results
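The four-step cycle can be sketched with a toy machine; the two opcodes (LOAD, ADD) are invented for illustration and do not correspond to any real instruction set:

```python
def run(program):
    """Simulate a fetch/decode/execute/write-back loop over a toy instruction list."""
    acc = 0   # accumulator register
    pc = 0    # program counter
    while pc < len(program):
        instruction = program[pc]      # 1. fetch the next instruction
        opcode, operand = instruction  # 2. decode it into opcode + operand
        if opcode == "LOAD":           # 3. execute
            result = operand
        elif opcode == "ADD":
            result = acc + operand
        else:
            raise ValueError(f"unknown opcode {opcode}")
        acc = result                   # 4. write (save) the result
        pc += 1
    return acc

assert run([("LOAD", 5), ("ADD", 3), ("ADD", 2)]) == 10
```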
Memory Protection
One process cannot affect another. They’ll be sharing the same hardware, but memory allotted
for one process should not be able to manipulate the memory allotted for another.
Virtual memory – provides isolation and allows swapping of pages in and out of RAM.
Swapping – moves entire processes from primary memory (RAM) from or to secondary
memory (disk)
Paging – copies a block from primary memory from or to secondary memory.
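Paging can be sketched as a per-process page table that maps virtual page numbers to physical frames; the 4 KB page size is typical, but the frame-allocation rule here is a placeholder invented for the example:

```python
PAGE_SIZE = 4096  # bytes per page; 4 KB is a common choice

class Process:
    """Each process gets its own page table, so it cannot address another process's memory."""
    def __init__(self):
        self.page_table = {}  # virtual page number -> physical frame number

    def translate(self, virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        if page not in self.page_table:
            # Page fault: the OS would copy the page in from secondary memory (disk).
            self.page_table[page] = self._load_from_disk(page)
        return self.page_table[page] * PAGE_SIZE + offset

    def _load_from_disk(self, page):
        return page + 100  # placeholder frame allocation, purely for illustration

p = Process()
# Virtual address 4100 = page 1, offset 4 -> frame 101 in this toy scheme.
assert p.translate(4100) == 101 * PAGE_SIZE + 4
```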
WORM Storage - “write once, read many”. Ensures integrity as data cannot be altered after first
write. Examples include CD-R and DVD-R
Two important security concepts associated with storage are protected memory and memory
addressing. For the exam, you should understand that protected memory prevents other
programs or processes from gaining access to or modifying the contents of address space that
has previously been assigned to another active program. Memory can be addressed either
physically or logically. Memory addressing describes the method used by the CPU to access the
contents of memory.
Secure Hardware
Trusted Platform Module – hardware crypto-processor. Generates, stores and limits the use of
crypto keys. Literally a chip on a motherboard. Commonly used to ensure boot integrity.
Data Execution Prevention – areas of RAM marked Non-Executable (NX bit). Prevents simple
buffer overflows.
Address Space Layout Randomization – each process is randomly located in RAM. Makes it
difficult for attacker to find code that has been injected.
Reference Monitor – mediates access between subjects and objects. Enforces security policy.
Cannot be bypassed.
Trusted Computing Base – combination of HW, SW and controls that work together to form a
trusted base that enforces your security policies. TCB is generally the OS Kernel, but can include
things like configuration files.
Security Perimeter – separates the TCB from the rest of the system. The TCB communicates
through secure channels called trusted paths
These privileges are spread across different subjects, so it almost acts as Separation of
duties.
Graham-Denning Model – defines a set of basic rights in terms of commands that a subject can
execute on an object. Three parts: objects, subjects and rules.
Rules:
Transfer Access
Grant Access
Delete Access
Read Object
Create Object
Destroy Object
Create Subject
Destroy Subject
Harrison-Ruzzo-Ullman Model – like Graham-Denning, but treats subjects and objects as the same
and has only six rules:
Create object
Create subject
Destroy object
Destroy subject
Enter right into access matrix
Delete right from access matrix
Non-Interference Model – ensures that commands and activities at one level are not visible to other
levels. For example, prior to the Gulf War, the Pentagon ordered a huge amount of pizza and
people were able to assume something was going on. The war started shortly after.
Access Control Matrix – describes the rights of every subject for every object in the system. An
access matrix is like an Excel spreadsheet: each row lists the rights of one subject (called a
capability list), and each column is the ACL for one object or application
Zachman Framework for Enterprise Architecture – takes the Five W’s (and How), and maps them
to specific subjects or roles.
Dedicated – system contains objects of only one classification level. All subjects are cleared for that
level or higher. All subjects have access approval and need to know for all info stored/processed on
that system
System High – system contains objects of mixed labels (confidential, secret, top secret). All users
have appropriate clearances and access permissions for all info processed by a system, but they
don’t necessarily need to know all the info processed by that system. Provides the most granular
control over resources compared to the other security modes.
Compartmented – All subjects accessing the system have necessary clearance, but do not have
formal access approval or need to know for ALL info on the system. Technical controls enforce
need to know for access.
Multilevel – also known as label-based. Allows you to classify objects and users with security
labels. A “reference monitor” controls access: if a top-secret subject attempts to access a top-
secret object, access is granted by the reference monitor.
Assess and mitigate the vulnerabilities of security architectures, designs and solution
elements
Cloud Computing
IoT
Mitigating IoT Vulnerabilities:
Keep IoT devices on another network
Turn off network functionality when not needed
Apply security patches whenever possible
Protecting mobile devices:
Encrypt devices
Enable screen locks and GPS for remote wipe
Databases
Polyinstantiation - Two rows may have the same primary key, but different data for each clearance
level. Top secret clearance subjects see all data. Secret clearance subjects see only the data they
are cleared for. This prevents unprivileged users from assuming data based on what they notice is
missing.
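Polyinstantiation can be sketched in a few lines. The table contents, field names, and clearance labels below are hypothetical, invented only to show the same primary key carrying different data per level:

```python
# Polyinstantiation sketch: the same primary key ("ship_101") exists at two
# clearance levels with different data, so an unprivileged user sees a
# plausible row instead of a suspicious gap. All names/values are made up.

LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

ROWS = [
    {"id": "ship_101", "level": "unclassified", "destination": "Training run"},
    {"id": "ship_101", "level": "top_secret",   "destination": "Covert mission"},
]

def visible_rows(clearance):
    """Return, per primary key, the highest-level row the subject may see."""
    best = {}
    for row in ROWS:
        if LEVELS[row["level"]] <= LEVELS[clearance]:
            cur = best.get(row["id"])
            if cur is None or LEVELS[row["level"]] > LEVELS[cur["level"]]:
                best[row["id"]] = row
    return list(best.values())

print(visible_rows("unclassified")[0]["destination"])  # Training run
print(visible_rows("top_secret")[0]["destination"])    # Covert mission
```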
Data mining – searching a large database for useful info. Can be good or bad. For example, a
credit card company may mine transaction records to find suspicious transactions and detect fraud.
Data Analytics is understanding normal use cases to help detect insider threats or other
compromises.
Data Diddling – altering existing data, usually as it’s being entered.
Assess and mitigate vulnerabilities in web-based systems
Fire suppression
Not all fire is equal. There are four types of fire and each requires a certain type of suppression
device. Use the wrong one, and you could end up making the fire bigger/stronger rather than
extinguishing it. The table below summarizes the classes of fire and the appropriate suppression
method.
Class A – Common combustibles (wood products, paper, laminates). Suppression: water, foam.
Class B – Flammable liquids (petroleum products, coolants). Suppression: gas, CO₂, foam, dry powders.
Class C – Electrical (electrical equipment and wires). Suppression: gas, CO₂, dry powders.
Class D – Combustible metals (magnesium, sodium, potassium). Suppression: dry powder.
Foams are mainly water-based and are designed to float on top of a fire and prevent any oxygen
from flowing to it. Gas, such as halon or FM-200, mixes with the fire to extinguish it and is not
harmful to computer equipment. Halon has been found to damage the atmosphere, however, and is
no longer produced. CO₂ gas removes the oxygen from the air to suppress the fire. It is important
that if CO₂ is used there be an adequate warning time before dispersal. Because it removes the
oxygen from the air it could potentially endanger people's lives. CO₂ is often used in unmanned
facilities for this reason. Dry powders include sodium or potassium bicarbonate, calcium carbonate,
or monoammonium phosphate. The first three interrupt the combustion of a fire, and
monoammonium phosphate melts at a low temperature and smothers the fire.
Water sprinkler systems are much simpler to install than any of the above systems but can cause
damage to computer and electrical systems. It's important that an organization recognize which
areas require which types of suppression systems. If water is to be used in an environment that
contains electrical components, it's important that the electricity be shut off first. Systems can be
configured to shut off all electrical equipment before water is released. There are four main types of
water systems:
Wet pipe - Wet pipe systems always contain water within the pipe and are usually controlled by a
temperature sensor. Disadvantages of these systems include that the water in the pipe may freeze,
and damage to the pipe or nozzle could result in extensive water damage.
Dry pipe - Dry pipe systems employ a reservoir to hold the water before deployment, leaving the
pipes empty. When a temperature sensor detects a fire, water will be released to fill the pipes. This
type of system is best for cold climates where freezing temperatures are an issue.
Preaction - Preaction systems operate similarly to dry pipe systems but add an extra step. When
empty, the pipes are filled with pressurized air. If pressure is lost, water will fill the pipes, but not be
dispersed. There is a thermal-fusible link on the nozzle that must first melt away before the water
can be released. The advantage of this system is it gives people more time to react to false alarms
and small fires. It is much more effective to put out a small fire with a hand-held extinguisher than a
full spray system.
Deluge - Deluge systems have their sprinkler heads turned all the way open so that greater amounts
of water can be released at once. These types of systems are not usually used in data centers
because of this.
OWASP Top 10
Assess and mitigate vulnerabilities in mobile systems
Mobile systems include the operating systems and applications on smartphones, tablets, phablets,
smart watches, and wearables. The most popular operating system platforms for mobile systems are
Apple iOS, Android, and Windows 10.
The vulnerabilities that are found on mobile systems include
Lack of robust resource access controls. History has shown us that some mobile OSs lack
robust controls that govern which apps are permitted to access resources on the mobile
device, including:
Locally stored data
Contact list
Camera roll
Email messages
Location services
Camera
Microphone
Insufficient security screening of applications. Some mobile platform environments are quite
good at screening out applications that contain security flaws or outright break the rules, but
other platforms take more of an “anything goes” approach. The result is buyer
beware: your mobile app may be doing more than advertised.
Security settings defaults too lax. Many mobile platforms lack enforcement of basic security
and, for example, don’t require devices to automatically lock or have lock codes.
In a managed corporate environment, the use of a mobile device management (MDM) system can
mitigate many or all of these risks. For individual users, mitigation depends on each user doing the
right thing and using strong security settings.
Assess and mitigate vulnerabilities in embedded devices:
Embedded devices encompass the wide variety of systems and devices that are Internet connected.
Mainly, we’re talking about devices that are not human connected in the computing sense. Examples
of such devices include
Automobiles and other vehicles.
Home appliances, such as clothes washers and dryers, ranges and ovens, refrigerators,
thermostats, televisions, video games, video surveillance systems, and home automation
systems.
Medical care devices, such as IV infusion pumps and patient monitoring.
Heating, ventilation, and air conditioning (HVAC) systems.
Commercial video surveillance and key card systems.
Automated payment kiosks, fuel pumps, and automated teller machines (ATMs).
Network devices such as routers, switches, modems, firewalls, and so on.
These devices often run embedded systems, which are specialized operating systems designed to
run on devices lacking computer-like human interaction through a keyboard or display. They still
have an operating system that is very similar to that found on endpoints like laptops and mobile
devices.
Some of the design defects in this class of device include:
Lack of a security patching mechanism. Most of these devices utterly lack any means for
remediating security defects that are found after manufacture.
Lack of anti-malware mechanisms. Most of these devices have no built-in defenses at all.
They’re completely defenseless against attack by an intruder.
Lack of robust authentication. Many of these devices have simple, easily-guessed default
login credentials that cannot be changed (or, at best, are rarely changed by their owners).
Lack of monitoring capabilities. Many of these devices lack any means for sending security
and event alerts.
Because the majority of these devices cannot be altered, mitigation of these defects typically
involves isolation of these devices on separate, heavily guarded networks that have tools in place to
detect and block attacks
Apply cryptography
Cryptography is the science of encrypting information. The work function/factor of a cryptosystem is
the measure of its strength in terms of the cost and time required to decrypt messages. The work
function is generally rated by how long it takes to brute-force the cryptosystem.
Encryption is the act of rendering data unintelligible to unauthorized subjects
Algorithms and keys work together to encrypt and decrypt information. In ROT13, for example, 13 is
the key and the rotation (ROT) is the algorithm. In modern cryptography, algorithms don’t change
often, but keys should change every time
Confusion – relationship between plaintext and ciphertext should be as random as possible
Substitution replaces one character for another
Permutation provides confusion by rearranging the characters of the plaintext (such as ROT 13)
Substitution and permutation are often combined.
Monoalphabetic ciphers use one alphabet, meaning letter E is always substituted with the same
letter every time. These ciphers are susceptible to frequency analysis
Polyalphabetic ciphers use multiple alphabets.
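A monoalphabetic shift cipher is easy to sketch: Caesar used a shift of 3, and ROT13 shifts by 13, which makes it its own inverse. This is a minimal stdlib-only illustration:

```python
import string

def shift_cipher(text, key):
    """Monoalphabetic substitution: shift each letter by a fixed key.
    Because 'E' always maps to the same letter, the output is
    susceptible to frequency analysis."""
    out = []
    for ch in text.upper():
        if ch in string.ascii_uppercase:
            out.append(chr((ord(ch) - ord("A") + key) % 26 + ord("A")))
        else:
            out.append(ch)  # leave spaces/punctuation alone
    return "".join(out)

# Caesar's shift of 3:
print(shift_cipher("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
# ROT13 applied twice restores the plaintext (13 + 13 = 26):
print(shift_cipher(shift_cipher("HELLO", 13), 13))  # HELLO
```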
Cryptosystem Development Concepts:
Algorithms should be available for review, however the key should always be secret. This is to
avoid security by obscurity
Kerckhoffs’s Principle – idea that the algorithm should be available for public review
Exclusive-OR (XOR) Operation - Logical, binary operation which adds two bits together. Plaintext is
XORed with a random keystream to generate ciphertext
If values are same, result is 0
If values are different, result is 1
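The XOR keystream idea above can be shown in a few lines; the key and message below are arbitrary, and `os.urandom` stands in for a proper keystream generator:

```python
import os

def xor_bytes(data, keystream):
    """XOR each plaintext byte with a keystream byte.
    The same operation both encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream))

# XOR truth table: same bits -> 0, different bits -> 1
assert 0 ^ 0 == 0 and 1 ^ 1 == 0
assert 0 ^ 1 == 1 and 1 ^ 0 == 1

key = os.urandom(16)                    # random keystream (illustration only)
ct  = xor_bytes(b"SECRET MESSAGE!!", key)
pt  = xor_bytes(ct, key)                # XORing twice restores the plaintext
assert pt == b"SECRET MESSAGE!!"
```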
Zero-Knowledge Proof is the concept that you can prove your knowledge of a fact to a third party
without revealing the fact itself. This is the case with digital signatures and certificates. This is
illustrated by the “Magic Door”, where user A uses a secret password to open a door to get to user
B without having to tell user B the password.
Cryptography History:
Caesar Cipher – used simple substitution where characters were shifted 3 spaces
Scytale – used by Spartans. Wrapped tape around a rod. The message was on the tape and
the key was the diameter of the rod.
Vigenere Cipher – polyalphabetic cipher. Alphabet is repeated 26 times to form a matrix
(Vigenere Square).
One-Time Pad – if done correctly, it’s mathematically unbreakable. Sender and recipient must
have a pad with pages full of random letters. Each page is used only once. The only way to
break it is to steal or copy the pad. The key must be at least as long as the message to be
encrypted.
Key distribution is burdensome. Very hard to do correctly in terms of getting random numbers
Symmetric Encryption
The equation for determining how many keys are required for symmetric communications is:
n*(n-1)
----------
2
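The formula can be checked quickly in code; the party counts below are arbitrary examples:

```python
def symmetric_keys(n):
    """Unique pairwise secret keys needed for n parties: n*(n-1)/2."""
    return n * (n - 1) // 2

# Two people need one shared key; growth is quadratic from there.
print(symmetric_keys(2))     # 1
print(symmetric_keys(10))    # 45
print(symmetric_keys(100))   # 4950
```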
Uses the same key both to encrypt and decrypt a message. This makes it very fast compared to
asymmetric encryption. May be referred to as “secret key” or “shared key” cryptography.
A session key is a temporary symmetric key used for a connection
Problems:
Difficult to securely exchange the keys
Not scalable. The number of keys you need grows quadratically as you add additional people
to the communication link
Stream vs Block Ciphers - In a stream cipher, data is encrypted bit-by-bit. Usually
implemented in hardware and requires little memory. Prone to patterns emerging if the
keystream is reused. Block ciphers group communications into blocks and encrypt those all
together. This alleviates the pattern issue because an attacker would
never know if a block is one word or ten words.
Block ciphers are usually implemented in software and require a lot of memory.
They use substitution and transposition (rearranging the order of plaintext symbols) ciphers
Initialization Vector (IV) – a random value added to the plaintext before encryption. Used to ensure
that two identical plaintext messages don’t encrypt to the same ciphertext
Chaining – uses the result of one block to “seed”, or add to, the next block
Common Symmetric Algorithms:
DES – Uses a single 56-bit key. Brute forced two decades ago. No longer safe to use.
3DES – Three rounds of DES encryption using two or three different 56-bit keys. Key length is
112 bits or more depending on how many keys you use. Considered secure, but slower to
compute than AES
Two keys are required at a minimum to achieve strong security
Modes of DES
Electronic Code Book (ECB) – Each block encrypted independently. Decrypting starts at
beginning of ciphertext. Processes 64-bits at a time using the same key. A given message will
always produce the same ciphertext. Creates patterns and is susceptible to frequency analysis.
This is the weakest DES mode.
Cipher Block Chaining (CBC) – most common mode. Uses an IV to seed first block. The
ciphertext is then XORed. Problematic because errors in earlier blocks will propagate
throughout the rest of the encrypted data.
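The ECB-vs-CBC difference can be demonstrated without a real cipher. The keyed block transform below is NOT DES (it is a hash-based stand-in, invented for illustration), but the pattern-leakage behavior it shows is the same:

```python
import hashlib

BLOCK = 8  # DES-style 64-bit blocks

def toy_encrypt_block(block, key):
    """NOT a real cipher -- a stand-in keyed transform for illustration."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb(plaintext, key):
    """Each block encrypted independently."""
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    return [toy_encrypt_block(b, key) for b in blocks]

def cbc(plaintext, key, iv):
    """Each block XORed with the previous ciphertext block before encryption."""
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    out, prev = [], iv
    for b in blocks:
        xored = bytes(x ^ y for x, y in zip(b, prev))  # chaining step
        prev = toy_encrypt_block(xored, key)
        out.append(prev)
    return out

key, iv = b"secretk!", b"\x00" * BLOCK
msg = b"SAMEDATA" * 2          # two identical 8-byte blocks

e = ecb(msg, key)
c = cbc(msg, key, iv)
assert e[0] == e[1]            # ECB leaks the repetition
assert c[0] != c[1]            # CBC chaining hides it
```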
Cipher Feedback (CFB) -
Output Feedback (OFB) -
Counter Mode (CTR) – Works in parallel mode. Fixes the issue of propagating errors
Visual mnemonics for DES Modes:
AES – Three key lengths: 128, 192 and 256-bits. Current US recommended standard. Open
algorithm, patent free (anyone can use it)
RC4 – Key length of 8-2048 bits. Was used in SSL and WEP communication
Blowfish – Developed by Bruce Schneier. Key sizes 32-448 bits. Faster than AES.
Twofish - Developed by Bruce Schneier. Key sizes 128-256 bits. Improves on Blowfish. Uses a
process known as prewhitening to XOR plaintext with a separate subkey before encryption
IDEA – block algorithm using 128-bit key and 64-bit block size. Patented in many countries.
“PGP is a good IDEA” - PGP is often used in conjunction with IDEA
Be able to look at a list of algorithms and pick out the symmetric ones. Think of the word FISHES.
A fish is symmetric. Anything with FISH or ES in the name is symmetric, as are RC4, Skipjack, and
IDEA.
Asymmetric Encryption
The equation for determining how many keys are required for asymmetric communications is:
2n
Designed to solve the key exchange problem of symmetric encryption. Each user has a public and
a private key. The public is made available, but the private key is kept secret
Far slower than symmetric encryption and is weaker per key bit. 512-bit public key is roughly
equivalent to a 64-bit symmetric key.
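Comparing the two key-count formulas shows why asymmetric cryptography scales better for large populations; the population sizes below are arbitrary:

```python
def symmetric_keys(n):
    return n * (n - 1) // 2   # every pair needs its own shared secret key

def asymmetric_keys(n):
    return 2 * n              # each party needs one public + one private key

# For 1,000 users: symmetric needs 499,500 keys, asymmetric only 2,000.
print(symmetric_keys(1000))   # 499500
print(asymmetric_keys(1000))  # 2000
```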
One-Way Functions
There must be a way to calculate a public key from a private key, but it must be impossible to
deduce the private key from the public key.
Works by multiplying large prime numbers. Calculating the product of two prime numbers is
easy, but determining which prime numbers produced that product is much more difficult.
Asymmetric Algorithms
Diffie-Hellman – original asymmetric algorithm. Allowed two parties to agree on a symmetric
key via a public channel. Based on “difficulty of calculating discrete logarithms in a finite
field”.
RSA – based on factoring large prime numbers. Key length and block size of 512-4096. 100
times slower than symmetric encryption. Only requires two keys for any given
communication and is ideal for large environments with a low amount of time required for key
management
DSA – used for digital signatures
El-Gamal – extension of Diffie-Hellman that depends on modular arithmetic
Biggest disadvantage is that it doubles the length of messages
Elliptic Curve Cryptography (ECC) – faster than other asymmetric algorithms, so its used on
devices with less computing power, such as mobile devices. It is patented so it costs money
to use it. 256-bit ECC key is as strong as a 3,072-bit RSA key
Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) is associated with providing perfect forward
secrecy, a feature where communications cannot be broken even if the server is compromised.
PGP – originally used IDEA. Can now use PKI. Uses a web of trust model to authenticate
digital certificates
Hybrid
SSL/TLS – the client requests a secure connection to a server. The server sends the client its public
key in the form of a certificate. The client takes the public key, generates a one-time session key
(temporary symmetric key), encrypts it with the server’s public key, and sends it back to the server.
The server uses its private key to decrypt the session key. That symmetric key is now
used to encrypt all data exchanged between client and server.
Hashing
Considered one-way because there is no way to reverse a hash. Plaintext is “hashed” into a fixed-
length value (i.e., not variable), called a message digest or hash
Hashing helps ensure integrity. If the content of a file changes, its hash will change.
Hash Functions: MD5 is no longer used because it has been found to create collisions. There
is more data in the world than there are possible 128-bit combinations
HMAC uses a secret key in combination with a hash algorithm to verify that a message has not
been tampered with. It computes the hash of the message plus a secret key
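HMAC is available in the Python standard library; the shared key and message below are made up for illustration:

```python
import hmac
import hashlib

secret = b"shared-secret-key"            # hypothetical pre-shared key
message = b"transfer $100 to alice"

# Sender computes a tag over the message with the shared secret.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes with the same secret and compares in constant time.
check = hmac.new(secret, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, check)   # intact, and sender holds the key

# Any tampering with the message changes the tag.
tampered = hmac.new(secret, b"transfer $999 to mallory",
                    hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, tampered)
```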
Digital Signatures
When a digital certificate is issued by a CA, it will include the subject’s public key
If an outside individual receives someone else’s digital cert, they will verify its authenticity
by checking the CA’s signature with the CA’s public key
Provide authentication and integrity. Do not provide confidentiality.
Signing calculates the hash of a document and encrypts that hash with your private key. Anyone can
verify it with your public key.
Digital Signature Standard uses SHA-1, 2 and 3 message digest functions along with DSA, RSA
and Elliptic Curve algorithms.
Digital Watermarks – encode data into a file. May be hidden using steganography.
Electronic signatures provide a receipt of the transaction, ensuring that the entities that
participated in the transaction cannot repudiate their commitments.
The digital signature is used to achieve integrity, authenticity and non-repudiation. In a digital
signature, the sender's private key is used to encrypt the message digest of the message.
Encrypting the message digest is the act of signing the message. The receiver then uses the
sender's matching public key to decrypt the digital signature.
A digital signature (not to be confused with a digital certificate) is an electronic signature that can be
used to authenticate the identity of the sender of a message or the signer of a document, and
possibly to ensure that the original content of the message or document that has been sent is
unchanged. Digital signatures cannot be forged by someone else who does not possess the private
key, it can also be automatically time-stamped. The ability to ensure that the original signed
message arrived means that the sender cannot easily repudiate it later.
A digital signature can be used with any kind of message, whether it is encrypted or not, simply so
that the receiver can be sure of the sender's identity and that the message arrived intact. A digital
certificate contains the digital signature of the certificate-issuing authority so that anyone can verify
that the certificate is real and has not been modified since the day it was issued.
Assume you were going to send the draft of a contract to your lawyer in another town. You want to
give your lawyer the assurance that it was unchanged from what you sent and that it is really from
you.
1. You copy-and-paste the contract (it's a short one!) into an e-mail note.
2. Using special software, you obtain a message hash (mathematical summary) of the contract.
3. You then use a private key that you have previously obtained from a public-private key
authority to encrypt the hash.
4. The encrypted hash becomes your digital signature of the message. (Note that it will be
different each time you send a message.)
At the receiving end:
1. To make sure it's intact and from you, your lawyer makes a hash of the received message.
2. Your lawyer then uses your public key to decrypt the message hash or summary.
3. If the hashes match, the received message is valid.
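The sign-and-verify steps above can be sketched with textbook RSA. The tiny primes below are wildly insecure and chosen only so the arithmetic is visible; real signatures use 2048-bit-plus keys via a proper library:

```python
import hashlib

# Tiny textbook-RSA keypair (insecure toy sizes -- illustration only).
p, q = 10007, 10009
n = p * q
e = 65537                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def sign(message):
    """Hash the message, then 'encrypt' the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature):
    """Decrypt the signature with the public key and compare hashes."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"short contract text")
assert verify(b"short contract text", sig)    # hashes match: valid
assert not verify(b"altered contract", sig)   # any change invalidates it
```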
Below are some common reasons for applying a digital signature to communications:
Authentication
Although messages may often include information about the entity sending a message, that
information may not be accurate. Digital signatures can be used to authenticate the source of
messages. The importance of high assurance in the sender authenticity is especially obvious in a
financial context. For example, suppose a bank's branch office sends instructions to the central
office requesting a change in the balance of an account. If the central office is not convinced that
such a message is truly sent from an authorized source, acting on such a request could be a serious
mistake.
Integrity
In many scenarios, the sender and receiver of a message may have a need for confidence that the
message has not been altered during transmission. Although encryption hides the contents of a
message, it may be possible to change an encrypted message without understanding it. (Some
encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if a
message is digitally signed, any change in the message after the signature has been applied would
invalidate the signature. Furthermore, there is no efficient way to modify a message and its
signature to produce a new message with a valid signature, because this is still considered to be
computationally infeasible by most cryptographic hash functions (see collision resistance).
Non-repudiation
Note that authentication, non-repudiation, and other properties rely on the secret key not having
been revoked prior to its usage. Public revocation of a key-pair is a required ability, else leaked
secret keys would continue to implicate the claimed owner of the key-pair. Checking revocation
status requires an "online" check, e.g. checking a "Certificate Revocation List" or via the "Online
Certificate Status Protocol". This is analogous to a vendor who receives credit cards first checking
online with the credit-card issuer to find if a given card has been reported lost or stolen.
Digital Signature does not provide confidentiality. It provides only authenticity and integrity and non-
repudiation. The sender's private key is used to encrypt the message digest to calculate the digital
signature
Encryption provides only confidentiality. The receiver's public key or symmetric key is used for
encryption
Please remember:
Brute force – tries every possible key. In theory it will always work with time, except against one-
time pads. If a key is long enough, however, it will take incredibly long amounts of time.
Rainbow Tables – pre-computed tables of passwords and their hashes. Not practical against modern
salted hashes, but effective against Windows LANMAN hashes
Salts are random values added to the end of a password before hashing it to help protect
against rainbow table attacks. Salts are stored in the same database as the hashed password.
Salting can be accomplished by PBKDF2, bcrypt and scrypt. Unique salts should be generated
for each user.
Peppers are large constant numbers used to further increase the security of the hashed
password, and are stored OUTSIDE the database that houses the hashed passwords.
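Salted, stretched hashing with PBKDF2 is available in the Python standard library; the password and iteration count below are arbitrary examples:

```python
import hashlib
import hmac
import os

password = b"correct horse battery staple"   # example password
salt = os.urandom(16)                        # unique random salt per user

# Many iterations slow down brute force; the salt defeats rainbow tables.
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# Verification: recompute with the stored salt, compare in constant time.
attempt = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
assert hmac.compare_digest(stored, attempt)

# Same password, different salt -> different hash, so one precomputed
# table cannot cover all users.
other = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 100_000)
assert stored != other
```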
Common cryptanalytic attack models include:
Known Plaintext
Chosen Ciphertext
Chosen Plaintext
Collisions – when two different messages have the same hash value, such as the Birthday Attack.
If a room has 23 people in it, there is a significant chance that two people will have the same
birthday. Chances increase with the size of the environment.
MD5 (128-bit) will have a collision after 2^64 calculations
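The birthday probability is easy to compute directly, confirming the 23-person figure above:

```python
def p_shared_birthday(people, days=365):
    """Probability that at least two of `people` share a birthday.
    Computed as 1 minus the probability all birthdays are distinct."""
    p_all_distinct = 1.0
    for i in range(people):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

# With only 23 people, the odds already exceed 50%.
assert p_shared_birthday(23) > 0.5
assert p_shared_birthday(70) > 0.99
```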
Known-Plain Text Attack – attempting to attain the key when you have the encrypted text and all or
some of the plain text.
In WWII, the Germans and Japanese always started a transmission with a certain phrase. The
Allies knew this phrase and could record the encrypted message, and were able to break the code
Chosen-Plaintext Attack – attacker has the ability to encrypt chosen portions of the plaintext and
compares it to the encrypted portion to discover the key
Cipher-Text Only Attacks – Attacker collects many messages encrypted with the same key, and
uses statistical analysis to break the encryption
Chosen Ciphertext Attack – cryptanalyst can choose the cipher text to be decrypted. Thus, they
have ciphertext and plaintext for messages that they choose.
Known Key – attacker may have some knowledge about the key. This reduces the number of
variations they have to consider to guess the key (ex: Passwords must be 8-12 characters)
Meet-in-the-middle Attack – against double encryption, the attacker encrypts a known plaintext
using all possible keys and builds a table of results, then decrypts the matching ciphertext using all
possible keys and looks for a match in the middle. This is why 3DES is used over 2DES.
Side-Channel Attack – uses physical data to break a cryptosystem, such as monitoring CPU cycles
or power consumption used while encrypting or decrypting. Longer keys may require more CPU
cycles, which can be observed. Meltdown and Spectre are examples
Implementation Attacks – exploit a vulnerability in the actual system used to perform the math. The
system may leave plaintext in RAM, or the key may be left on the hard drive.
PKI
Public Key Infrastructure. Provides a way to manage digital certificates, which is a public key
signed with a digital signature. The standard digital certificate format is X.509.
A digital certificate will contain:
Serial number
ID info
Digital signature of signer
X.509 version
Validity period
A certificate will NOT contain “Receiver’s name”.
The Certificate Authority issues certs to verified users. Examples include Verisign, DigiCert,
Comodo, GoDaddy, etc.
Certificate Policy – set of rules dictating the circumstances under which a cert can be used. Used to
protect CAs from claims of loss if the cert is misused.
Certificate Revocation List (CRL) – identifies certs that have been revoked due to theft, fraud,
change in information associated with the CA, etc. Expired certs are not on the CRL. DOES NOT
UPDATE INFO IN REAL TIME
Online Certificate Status Protocol – used to query the CA as to the status of a cert issued by
that CA. Useful in large environments. Responds to a query with a status of valid,
suspended or revoked
There are several models:
Hierarchical Trust – Root CA can delegate intermediate CAs to issue certs on its behalf.
Web of Trust – All parties involved trust each other equally. No CA to certify certificate owners
Hybrid-Cross Certification – Combination of hierarchical and mesh models. Common for when
two different organizations establish trust relationships. A trust is created between two Root
CAs, and each organization trusts the others’ certificates.
Key Storage
Placement of a copy of secret keys in a secure location. Two methods:
Software-based
Hardware-based -
Key Escrow – keys needed to decrypt ciphertext are held in escrow so that, under certain
circumstances, an authorized third party may gain access to those keys.
Recovery Agent – has authority to recover lost/destroyed keys from escrow. Requires at least
TWO agents (M of N control – a number (n) of agents must exist in an environment. Of those
agents, a minimum number (m) must work together to recover a key). This is an example of
Split Knowledge, where the information or privilege required to perform an action is divided
among multiple users.
IPSec
IPsec is a security architecture framework that supports secure communication over IP. It
establishes a secure channel in either transport or tunnel mode, and is used primarily for VPNs.
IPSec can be run in either tunnel mode or transport mode. Each of these modes has its own
particular uses and care should be taken to ensure that the correct one is selected for the solution:
As the Figure 1 graphic below shows, transport mode should be used for end-to-end sessions and
tunnel mode should be used for everything else.
FIGURE: 1
AH provides integrity and authentication and ESP provides integrity, authentication and
encryption.
ESP provides authentication, integrity, and confidentiality, which protect against data
tampering and, most importantly, provide message content protection.
ESP can be operated in either tunnel mode (where the original packet is encapsulated into a
new one) or transport mode (where only the data payload of each packet is encrypted,
leaving the header untouched).
OSI Model – Please Do Not Teach Students Pointless Acronyms. Developed by ISO
Encapsulation is when the payload has the headers and footers added as the message goes down
layers. Decapsulation is the unwinding of the message as it travels back up. This means data has
the most information at the physical layer
Functions and Protocols in the OSI Model
For the exam, you will need to know the functionality that takes place at the different layers of the
OSI model, along with specific protocols that work at each layer. The following is a quick overview of
each layer and its components.
Application
The protocols at the application layer handle file transfer, virtual terminals, network management,
and fulfilling networking requests of applications. A few of the protocols that work at this layer include
Presentation
The services of the presentation layer handle translation into standard formats, data compression
and decompression, and data encryption and decryption. No protocols work at this layer, just
services. The following lists some of the presentation layer standards:
American Standard Code for Information Interchange (ASCII)
Extended Binary-Coded Decimal Interchange Code (EBCDIC)
Tagged Image File Format (TIFF)
Joint Photographic Experts Group (JPEG)
Motion Picture Experts Group (MPEG)
Musical Instrument Digital Interface (MIDI)
Session
The session layer protocols set up connections between applications; maintain dialog control; and
negotiate, establish, maintain, and tear down the communication channel. Some of the protocols that
work at this layer include
Transport
The protocols at the transport layer handle end-to-end transmission and segmentation of a data
stream. The following protocols work at this layer:
Network
The responsibilities of the network layer protocols include internetworking service, addressing, and
routing. The following lists some of the protocols that work at this layer:
Data Link
The protocols at the data link layer convert data into LAN or WAN frames for transmission and
define how a computer accesses a network. This layer is divided into the Logical Link Control (LLC)
and the Media Access Control (MAC) sublayers. Some protocols that work at this layer include the
following:
Physical
Network interface cards and drivers convert bits into electrical signals and control the physical
aspects of data transmission, including optical, electrical, and mechanical requirements. The
following are some of the standard interfaces at this layer:
Carrier sense means that a transmitter uses feedback from a receiver to determine whether
another transmission is in progress before initiating a transmission. That is, it tries to detect
the presence of a carrier wave from another station before attempting to transmit. If a carrier
is sensed, the station waits for the transmission in progress to finish before initiating its own
transmission. In other words, CSMA is based on the principle "sense before transmit" or
"listen before talk".
Multiple access means that multiple stations send and receive on the medium.
Transmissions by one node are generally received by all other stations connected to the
medium.
Carrier Sense Multiple Access With Collision Detection (CSMA/CD) is a media access
control method used most notably in local area networking using early Ethernet technology.
It uses a carrier sensing scheme in which a transmitting data station detects other signals
while transmitting a frame, and stops transmitting that frame, transmits a jam signal, and
then waits for a random time interval before trying to resend the frame.
CSMA/CD is a modification of pure carrier sense multiple access (CSMA). CSMA/CD is used
to improve CSMA performance by terminating transmission as soon as a collision is
detected, thus shortening the time required before a retry can be attempted.
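The "wait a random time interval before trying to resend" step above is classically implemented as truncated binary exponential backoff. A minimal Python sketch (slot times are abstracted away; the cap of 10 matches classic Ethernet):

```python
import random

def backoff_slots(collision_count: int) -> int:
    """After the nth collision, a station waits a random number of slot
    times chosen from [0, 2**k - 1], where k = min(n, 10). This is the
    truncated binary exponential backoff used by classic Ethernet CSMA/CD."""
    k = min(collision_count, 10)        # the exponent is capped at 10
    return random.randrange(2 ** k)     # pick 0 .. 2**k - 1 slot times

# after the first collision: 0 or 1 slots; after the third: 0..7 slots
print(backoff_slots(1), backoff_slots(3))
```

The randomness is what breaks the tie: two stations that collided are unlikely to pick the same wait interval again.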
Carrier sense multiple access with collision avoidance (CSMA/CA) (For Wireless)
Carrier sense multiple access with collision avoidance (CSMA/CA), in computer networking, is a
network multiple access method in which carrier sensing is used, but nodes attempt to avoid
collisions by transmitting only when the channel is sensed to be "idle". When they do transmit,
nodes transmit their packet data in its entirety.
It is particularly important for wireless networks, where the collision detection of the alternative
CSMA/CD is unreliable due to the hidden node problem.
CSMA/CA is a protocol that operates in the Data Link Layer (Layer 2) of the OSI model.
TCP/IP Model
Developed originally by DoD. Only has 4 layers. Comparison chart given above.
Exam Tips
L2TP Protocol works at data link layer. L2TP and PPTP were both designed for individual client to
server connections; they enable only a single point-to-point connection per session. Dial-up VPNs
use L2TP often. Both L2TP and PPTP operate at the data link layer (layer 2) of the OSI model.
PPTP uses native PPP authentication and encryption services and L2TP is a combination of PPTP
and Layer 2 Forwarding protocol (L2F).
Summary of the Tunneling protocols:
Point-to-Point Tunneling Protocol (PPTP):
Ethernet – today it's used in a physical star topology with twisted pair cables
Bus – A straight line of devices. A is connected to B, which is connected to C. A single cable break
brings the network down.
Ring – A is connected to Z and B, B is connected to A and C, and so on. Doesn’t really improve on
a Bus topology
Star – Ethernet uses a star. Everything is connected to a central device (hub or switch). A cable
break only affects that single node, which provides fault tolerance.
Mesh – everything is connected to everything.
MAC Addresses – Media Access Control. 48 bits long. First 24 bits form the OUI, last 24 bit identify
the specific device.
EUI-64 MAC Address – created for 64-bit MAC addresses. The OUI is still 24 bits, but the serial
number is the last 40 bits. Used to build IPv6 interface identifiers
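The OUI/serial split and the EUI-64 format can be sketched in Python. The modified EUI-64 procedure below (inserting FF:FE between the OUI and the serial, then flipping the universal/local bit) is the one IPv6 uses to build interface identifiers from a 48-bit MAC; the sample MAC is made up:

```python
def mac_to_modified_eui64(mac: str) -> str:
    """Derive an IPv6 modified EUI-64 interface ID from a 48-bit MAC.

    Steps: split the MAC into OUI (first 24 bits) and serial (last 24 bits),
    insert FF:FE between them, then flip the universal/local bit
    (bit 0x02 of the first byte)."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    assert len(octets) == 6, "expected a 48-bit MAC"
    oui, serial = octets[:3], octets[3:]
    eui64 = oui + [0xFF, 0xFE] + serial
    eui64[0] ^= 0x02                      # flip the U/L bit
    return ":".join(f"{b:02x}" for b in eui64)

print(mac_to_modified_eui64("00:1A:2B:3C:4D:5E"))
# -> 02:1a:2b:ff:fe:3c:4d:5e
```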
ARP resolves IP addresses to MAC addresses
ARP Cache Poisoning occurs when an attacker sends fake responses to ARP requests. This
can be countered by hardcoding ARP entries
IPv4 – 32-bit address written as four bytes in decimal (x.x.x.x)
TFTP (Trivial File Transfer Protocol) is sometimes used to transfer configuration files from equipment
such as routers, but the primary difference between FTP and TFTP is that TFTP does not require
authentication; speed and the ability to automate transfers are not the distinguishing factors.
Both of these protocols (FTP and TFTP) can be used for transferring files across the Internet. The
differences between the two protocols are explained below:
FTP is a complete, session-oriented, general purpose file transfer protocol. TFTP is used as
a bare-bones special purpose file transfer protocol.
FTP can be used interactively. TFTP cannot.
FTP depends on TCP, is connection oriented, and provides reliable control. TFTP depends
on UDP, requires less overhead, and provides virtually no control.
FTP provides user authentication. TFTP does not.
FTP uses well-known TCP port numbers: 20 for data and 21 for connection dialog. TFTP
uses UDP port number 69 for its file transfer activity.
The Windows NT FTP server service does not support TFTP because TFTP does not
support authentication.
Windows 95 and TCP/IP-32 for Windows for Workgroups do not include a TFTP client
program.
Hardware
Hub – layer 1 device. Provides no confidentiality or security because it does not isolate
traffic. Half duplex, meaning it cannot send and receive simultaneously.
Repeater – has two ports. Receives traffic on one port and repeats it out the other
Switches – layer 2 devices that forward frames only to the destination port based on MAC
addresses. A SPAN (Cisco) or mirror port can mirror all traffic through a particular port,
normally to send it to an IDS/IPS. One issue here can be bandwidth overload.
Routers – layer 3 devices that route traffic from one network to another. Routers often serve as
default gateways
VLANS
Separate broadcast domains and segment traffic, which provides defense in depth
Firewalls
All firewalls are multi-homed, meaning they are connected to multiple networks (WAN and LAN)
Allow/block traffic using:
Ingress rules – traffic coming in
Egress rules – traffic going out
Generally deployed between a private network and a link to the internet.
Use an “implicit deny” rule
Rules at the top of an ACL take priority. Traffic that meets the first applicable rule will be used.
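The first-match-plus-implicit-deny behavior described above can be sketched in a few lines of Python (rule and packet fields here are simplified illustrations, not any vendor's syntax):

```python
def evaluate(rules, packet):
    """First-match ACL evaluation: rules are checked top-down, the first
    applicable rule decides the action, and an implicit deny applies if
    nothing matches."""
    for match, action in rules:
        if match(packet):
            return action
    return "deny"                                 # implicit deny at the end

rules = [
    (lambda p: p["dst_port"] == 443, "allow"),    # ingress: allow HTTPS
    (lambda p: p["dst_port"] == 22,  "deny"),     # block inbound SSH
]
print(evaluate(rules, {"dst_port": 443}))   # allow (first rule matches)
print(evaluate(rules, {"dst_port": 8080}))  # deny  (implicit deny)
```

Rule order matters: placing a broad allow above a narrow deny would render the deny unreachable.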
Screened-Host Architecture - when a router forces traffic to only go to a Bastion Host, which alone
can access the LAN. A Bastion Host is a heavily secured device, such as a firewall, that would then
allow traffic to the LAN. Creates a SPOF (single point of failure).
DMZ - “perimeter” or “edge” network. Two firewalls, public available resources sit in between them
to allow things like HTTPS and DNS through. The second firewall would stop anything from coming
into the internal network. DMZs can be accomplished with a single firewall, but creates
opportunities for misconfiguration
SEVERAL TYPES OF FIREWALLS:
Packet Filtering – works at layer 3 where the PDU is the packet. Filtering decisions are made on
each individual packet. Just looks at IP addresses and port numbers (header)
Stateful – stores info about who initiates a session, and blocks unsolicited communications
(nothing from the outside that didn't originate internally can get through). This information is
stored in the firewall's "state table", which can be circumvented by flooding it with communication
requests.
Application Level Firewall – act as an intermediary server and proxies connection between
client and application server. They can see the entire packet as the packet won’t be encrypted
until layer 6. Other firewalls can only inspect the packet but not the payload. Application
firewalls can then detect unwanted applications or services attempting to bypass the firewall
Next-Gen Firewalls – bundle many capabilities together, such as deep-packet inspection,
application-level inspection, IDS/IPS, integrated threat intelligence, etc.
FIREWALL GENERATIONS
Cabling
Electromagnetic Interference – caused by electricity and causes unwanted signals, or noise.
Crosstalk occurs when one wire leaks into another
Attenuation is the weakening of a signal as it travels further from the source
Twisted Pair cabling is the most common type of cabling. Copper wires are twisted together in
pairs, as in Ethernet cables.
Unshielded Twisted Pair (UTP) has no shielding around the pairs; the twists themselves provide
protection against EMI.
Shielded Twisted Pair has a sheath around each individual pair. This provides better
protection than UTP, but it is more expensive and more difficult to work with. Prices of fiber
are getting low enough that STP doesn’t make sense
Coax cable – More resistant to EMI than UTP or STP, and provides a higher bandwidth
Fiber Optic Cable – uses light pulses. Cable is made of glass so it is very fragile. It is immune to
EMI and much faster than coax. Several types:
Multimode – many paths of light. Shorter distance and lower bandwidth. Uses LED as its light
source
Single mode – one path of light, used for long-haul, long-distance runs. Uses a laser
Multiplexing – sends multiple signals over different wavelengths (colors) of light. Exceeds speeds of 10 Gbps
For your exam you should know below information about WAN Technologies:
Point-to-point protocol
PPP (Point-to-Point Protocol) is a protocol for communication between two computers using a serial
interface, typically a personal computer connected by phone line to a server. For example, your
Internet server provider may provide you with a PPP connection so that the provider's server can
respond to your requests, pass them on to the Internet, and forward your requested Internet
responses back to you.
PPP uses the Internet protocol (IP) (and is designed to handle other protocol as well). It is
sometimes considered a member of the TCP/IP suite of protocols. Relative to the Open Systems
Interconnection (OSI) reference model, PPP provides layer 2 (data-link layer) service. Essentially, it
packages your computer's TCP/IP packets and forwards them to the server where they can actually
be put on the Internet.
PPP is a full-duplex protocol that can be used on various physical media, including twisted pair or
fiber optic lines or satellite transmission. It uses a variation of High-level Data Link Control (HDLC)
for packet encapsulation.
PPP is usually preferred over the earlier de facto standard Serial Line Internet Protocol (SLIP)
because it can handle synchronous as well as asynchronous communication. PPP can share a line
with other users and it has error detection that SLIP lacks. Where a choice is possible, PPP is
preferred.
Frame Relay
Works as packet switching
Operates at data link layer of an OSI model
Companies pay a Committed Information Rate (CIR) to ensure that a certain level of bandwidth
will always be available
Two main types of equipment are used in Frame Relay:
1. Data Terminal Equipment (DTE) - Usually a customer owned device that provides
connectivity between company's own network and the frame relay's network.
2. Data Circuit Terminal Equipment (DCE) - Service provider device that does the actual data
transmission and switching in the frame relay cloud.
The Frame Relay cloud is the collection of DCEs that provides switching and data
communication functionality. Frame Relay is an any-to-any service.
Integrated Service Digital Network (ISDN) : Enables data, voice and other types of traffic to travel
over a medium in a digital manner previously used only for analog voice transmission.
Runs on top of the Plain Old Telephone System (POTS). The same copper telephone wire is used.
Provides a digital point-to-point circuit-switched medium.
Classless Inter-Domain Routing (CIDR) replaced the classful scheme below. High-order bits are
shown in bold below.
1. For Class A, the addresses are 0.0.0.0 - 127.255.255.255. The lowest Class A address is
represented in binary as 00000000.00000000.00000000.00000000
2. For Class B networks, the addresses are 128.0.0.0 - 191.255.255.255. The lowest Class B
address is represented in binary as 10000000.00000000.00000000.00000000
3. For Class C, the addresses are 192.0.0.0 - 223.255.255.255 The lowest Class C address is
represented in binary as 11000000.00000000.00000000.00000000
4. For Class D, the addresses are 224.0.0.0 - 239.255.255.255 (Multicast) The lowest Class D
address is represented in binary as 11100000.00000000.00000000.00000000
5. For Class E, the addresses are 240.0.0.0 - 255.255.255.255 (Reserved for future usage). The
lowest Class E address is represented in binary as
11110000.00000000.00000000.00000000
Classful IP Address Format
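The class ranges above can be checked programmatically. A minimal Python sketch that classifies an IPv4 address by its first octet (equivalent to inspecting the high-order bits):

```python
def address_class(ip: str) -> str:
    """Determine the classful class of an IPv4 address from its first
    octet, matching the high-order-bit ranges listed above."""
    first = int(ip.split(".")[0])
    if first <= 127: return "A"   # leading bit  0    (0-127)
    if first <= 191: return "B"   # leading bits 10   (128-191)
    if first <= 223: return "C"   # leading bits 110  (192-223)
    if first <= 239: return "D"   # leading bits 1110 (224-239, multicast)
    return "E"                    # leading bits 1111 (240-255, reserved)

print(address_class("10.0.0.1"), address_class("224.0.0.5"))  # A D
```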
Domain 5: Identity and Access Management
IAAA Five elements:
Identification – claiming to be someone
Authentication – proving you are that person
Authorization – allows you to access resources
Auditing – records a log of what you do
Accounting – reviews log files to hold subjects accountable
Non-repudiation – prevents entities from denying they took an action. This is
accomplished by auditing and digital signatures
Authentication
5 Types
Type 1 – something you know
Type 2 – something you have. Tokens, smart cards, ID badge, etc.
Memory cards have a magnetic stripe with info. They are easily copied
Smartcards utilize microprocessors and cryptographic certificates. Often paired with a PIN
Type 3 – something you are
Type 4 – Somewhere you are (IP address/location)
Type 5 – Something you do – signature, pattern lock
Types of Biometric Authentication Errors:
Type 1 – when a valid subject is not authenticated. Also known as False Rejection Rate (FRR)
Type 2 – when an invalid subject is incorrectly authenticated. Also known as False
Acceptance Rate (FAR)
The point where these intersect is called the Crossover Error Rate (CER) and is used as a metric
for evaluating biometric authentication solutions. This is discussed later in more detail
A combination of two or more different types is multifactor authentication (two types = 2FA)
Types of passwords
Static – just a normal password. Most common and weakest type
Passphrases – long, static passwords combining multiple words
One-time passwords – Very secure but can be hard to implement across the board
Dynamic – tokens like FreeOTP and RSA
Cognitive – like recovery questions
Password Attacks
Type 3 Authentication (Biometrics)
Steps:
Enrollment – initial registering of user with the biometric system, such as taking their fingerprints
Throughput – time required for users to actually authenticate, such as swiping a badge to get in
each morning. Should not exceed 6-10 seconds
Fingerprints are very common. They measure ridge endings, bifurcations and other details of the
finger, called minutiae. (Know that these terms are associated with fingerprinting.) The entire
fingerprint isn't normally detected. A scanner only needs to match a few points that match your
enrollment print exactly to authenticate you.
Retina Scans look at the blood vessels in your eyes. This is the second most accurate biometric
system but is rarely used because of health risks and invasion of privacy issues by revealing health
information
Iris scan – looks at the colored portion of your eye. Works through contact lenses and glasses.
Each person's two irises are unique, even among twins. This is the most accurate biometric
authentication factor. The primary benefit of iris scanning is in the fact that irises do not change as
often as other biometric factors
Hand Geometry/Palm Scans - require a second form of authentication. They aren’t
reliable and can’t even determine if a person is alive
Keyboard dynamics – rhythm of keypresses, how hard someone presses each key, speed of
typing. Cheap to implement, somewhat effective.
Signature Dynamics – same thing, just a physical signature
Voiceprint – not secure, vulnerable to recordings, voices may change due to illness and other
factors.
Facial Scans – like iPhone face-unlock feature.
All biometric factors can give incorrect results and are subject to:
False Negatives – “False Rejection Rate (FRR)” Type 1 Error. Incorrectly rejects someone
False Positive – “False Acceptance Rate (FAR)” Type 2 Error. Incorrectly allows access
Always remember that Type 2 errors are worse than Type 1 errors, because falsely accepting an
invalid subject damages security.
Adjust sensitivity to reach an acceptable Crossover Error Rate (CER), which is where FAR
and FRR intersect. A lower CER is better, so use it as a metric when comparing
vendor products
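The CER can be approximated from paired FAR/FRR measurements taken at different sensitivity settings. A small Python sketch with made-up rates (not from any real device):

```python
def crossover_error_rate(far, frr):
    """Approximate the CER: the sensitivity index where the False
    Acceptance Rate and False Rejection Rate curves intersect.
    `far` and `frr` are parallel lists of rates, one entry per
    sensitivity setting."""
    diffs = [abs(a, ) if False else abs(a - r) for a, r in zip(far, frr)]
    i = diffs.index(min(diffs))          # point where the curves are closest
    return i, (far[i] + frr[i]) / 2      # (setting index, CER estimate)

# as sensitivity rises, FAR falls and FRR rises (illustrative numbers)
far = [0.20, 0.10, 0.05, 0.02, 0.01]
frr = [0.01, 0.02, 0.05, 0.10, 0.20]
print(crossover_error_rate(far, frr))   # curves cross at index 2, CER 0.05
```

When comparing vendors, the product with the lower CER is the more accurate one.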
Reasons against biometrics:
Many people feel it is intrusive and has health concerns
Time for enrollment and verification can be excessive
No way to revoke biometrics
Centralized Access Control – uses one logical point for access, like a domain controller. Can
provide SSO and AAA services (Authentication, Authorization and Accountability).
SSO is more convenient because a user only has to authenticate once. Examples include
Kerberos and Sesame (EU version of Kerberos). A Federation refers to two or more
companies that share IAM systems for SSO.
A Federated Identity is an identity that can be used across those different services
Finding a common language is often a challenge with federations.
SAML – security assertion markup language is commonly used to exchange authentication and
authorization info between federated organizations. Used to provide SSO capabilities for
browser access.
OpenID is similar to SSO, but is an open standard for user authentication by third parties.
OAuth is an open standard for authorization (not authentication) to third parties. Ex: if you have a
LinkedIn account, the system might ask you to let it have access to your Google contacts
OpenID Connect (OIDC) layers authentication on top of OAuth 2.0 and is quickly removing
the need for the original OpenID.
Exam Tip :
XACML is the OASIS standard that is most commonly used by SDN systems. XACML is an
Extensible Markup Language (XML)-based open standard developed by OASIS that is typically used
to define access control policies. These policies can be either attribute-based or role-based.
SAML is an OASIS standard that is most commonly used by web applications to create a single
sign-on (SSO) experience. SAML is an XML-based open standard. SAML can be used to exchange
authentication and authorization information. Similar to OpenID and Windows Live ID, a SAML-
based system enables a user to gain access to multiple independent systems on the Internet after
having authenticated to only one of those systems.
SPML is an XML-based open standard developed by OASIS. SPML is used for federated identity
SSO. Unlike SAML, SPML is also based on Directory Services Markup Language (DSML). DSML is
an XML-based technology that can be used to present Lightweight Directory Access Protocol
(LDAP) information in XML format.
OAuth 2.0 is not an OASIS standard. Instead, OAuth 2.0 is an open standard defined in Request for
Comments (RFC) 6749. OAuth 2.0 is an authorization framework that provides a third-party
application with delegated access to resources without providing the owner's credentials to the
application. There are four roles in OAuth: the resource owner, the client, the resource server, and
the authorization server. The resource owner is typically an end user. The client is a third-party
application that the resource owner wants to use. The resource server hosts protected resources,
and the authorization server issues access tokens after successfully authenticating the resource
owner; the resource server and authorization server are often the same entity but can be separate.
Domain 7: Security Operations
Forensics
Digital Forensics – focuses on the recovery and investigation of material found in digital devices,
often related to computer crime. Closely related to incident response as it is based on gathering
and protecting evidence. Biggest difference is that it’s not what you know, it’s what you can prove in
court. Evidence is much more valuable.
International Organization of Computer Evidence’s 6 Principles for Computer Forensics:
1. All of the general forensic and procedural principles must be applied
2. Actions taken should not change evidence
3. Person investigating should be trained for the purpose
4. All activity must be fully documented, preserved and available for review
5. An individual is responsible for all actions taken with respect to digital evidence
6. Any agency, which is responsible for seizing/accessing/storing/transferring digital evidence is
responsible for compliance with these principles.
Binary images are required for forensics work. You never work on the original media. A binary
image is exactly identical to the original, including deleted files.
Certified forensic tools include Norton Ghost, FTK Imager and EnCase
Four types of disk-based forensic data:
Allocated space – normal files
Unallocated space – deleted files
Slack space – leftover space at the end of clusters. Contains fragments of old files
Bad blocks – ignored by OS. May contain hidden data
Criminal vs. Civil Law – Criminal law has higher standards for evidence.
Administrative Law
Also called Regulatory Law. Consists of regulations like HIPAA
Legal aspects of investigations
Evidence –Real Evidence (physical objects), Direct Evidence (Witness testimony),
Circumstantial Evidence (Indirect evidence of guilt, can support other evidence but is inadequate
for a conviction alone.)
Best evidence is going to be the original documents/hard drives etc
Secondary would include copies of original evidence, log files, etc
Evidence integrity contingent on hashes
Hearsay is inadmissible in court. In order for evidence to be admissible, it must be relevant to a
fact at issue, the fact must be material to the case, and the evidence must have been legally
collected.
Evidence can be surrendered, obtained through a subpoena which compels the owner to
surrender it, or forcefully obtained before a subject has an opportunity to alter it through a
warrant
Chain of custody – documents entirely where evidence is at all times. If there is any lapse, the
evidence will be deemed inadmissible
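The point above that evidence integrity is contingent on hashes is put into practice by hashing the image at acquisition and re-hashing the working copy before and after analysis; matching digests show the evidence was not altered. A minimal Python sketch (the file names in the comment are hypothetical):

```python
import hashlib

def image_hash(path: str, algo: str = "sha256") -> str:
    """Hash a disk image (or any file) in chunks, so even multi-gigabyte
    images can be verified without loading them into memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# verify a working copy against the hash recorded at acquisition time
# (file name and recorded hash here are hypothetical):
# assert image_hash("evidence_copy.dd") == recorded_acquisition_hash
```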
Reasonable Searches - 4th amendment protects against unreasonable search and seizure.
Searches require warrants
Exceptions exist when an object is in plain sight, at public checkpoints, or in exigent
circumstances (immediate threat to human life or of evidence being destroyed).
Conduct logging and monitoring activities
Involves the monitoring and detection of security events. Need clear processes and responses
What is a computer security incident? - an event that has a negative outcome affecting the
confidentiality, integrity, or availability of one or more assets as the result of a deliberate attack or
intentional malicious action on the part of one or more users.
NIST SP 800-61 R2 "Computer Security Incident Handling Guide" defines a computer security
incident as "a violation or imminent threat of violation of computer security policies, acceptable use
policies, or standard security practices."
Understand that for the exam incident = computer security incident
Incident Response Steps: (on-going & feedback loop between Lessons Learned & Detection)
1. Detection/Identification – IDS | IPS | A/V software | Continuous Monitoring | End User Awareness.
Analyzing events to determine if a security incident has taken place through the examination of
log files.
2. Response/Containment – The use of a Computer Security Incident Response Team (CSIRT) or
Computer Incident Response Team (CIRT) is a FORMALIZED response. Remember that volatile
evidence exists in all systems, and if they are turned off before a response team has been able to
examine them, data/evidence will be lost. Isolation is good; powering off is bad. Prevent further
damage by isolating traffic, taking the system offline, etc. You would often make binary images of
systems involved
3. Mitigation/Eradication – Containment!!! System is cleaned.
4. Reporting – up the chain within the organization & perhaps externally to Law Enforcement/official
agencies. Notifying proper personnel. Two kinds
i. Technical – appropriate technical individuals
ii. Non-technical – stakeholders, business owners, regulators and auditors
5. Recovery – putting the system back into production. Monitoring to see if the attack resumes
6. Remediation – Root cause analysis is performed to determine and patch the vulnerability that
allowed the incident. New processes to prevent recurrence are created.
7. Lessons Learned - after action meeting to determine what went wrong, what went well and what
could be improved on. Final report delivered to management.
Forensic process: Identification → Preservation → Collection → Examination → Analysis →
Presentation
People aren't good at remembering 32-bit IP Addresses for sites they wish to visit so we created
DNS - Domain Name Service. It resolves Domain.com to an IP Address so that a web browser or
other application can find the server by domain name alone.
Older versions of Windows were particularly vulnerable to name resolution attacks
because a simple user account could change settings in the name resolution process. Since the
user can do it, malware can do it for the user, so now www.bank.com points to a malicious IP
address with a copy of the legitimate website, tricking users into logging in and losing their
usernames and passwords.
CWBLHD - Can We Buy Large Hard Drives used to be the mnemonic for remembering the name
resolution process:
C = Cache: Has the name already been resolved and is it in my local cache?
W = WINS Server: Does the WINS server know the IP Address of the NetBIOS host name?
B = Broadcast: Does a nearby host respond to name resolution requests?
L = LMHosts File: Check the LMHosts file in the C:\Windows\System32\drivers\etc directory.
H = Hosts File: In C:\Windows\System32\drivers\etc, you will find a file called the Hosts file. Look at
it and you'll see some example name entries. It's locked while in use now but it can be edited and
malicious names can be entered so that www.bank.com now points to a malicious IP Address.
D = DNS: Domain Name Resolution is the last step in the name resolution process when it should
have been the first and most trusted.
Anyhow, name resolution security is a critical component so take it seriously and lock down your
DNS Server.
System Development Life Cycle (SDLC) stages:
Project initiation - Feasibility, cost, risk analysis, Management approval, basic security objectives
Functional analysis and planning - Define need, requirements, review proposed security controls
System design specifications - Develop detailed design specs, Review support documentation,
Examine security controls
Software development - Programmers develop code. Unit testing Check modules. Prototyping,
Verification, Validation
Acceptance testing and implementation - Separation of duties, security testing, data validation,
bounds checking, certification, accreditation, part of release control
System Life Cycle (SLC) (extends beyond SDLC)
Operations and maintenance - release into production.
Certification/accreditation
Revisions/ Disposal - remove. Sanitation and destruction of unneeded data
Change Management:
1. Request the change. Once personnel identify desired changes, they request the change.
Some organizations use internal websites, allowing individuals to submit change requests via
a web page. The website automatically logs the request in a database, which allows
personnel to track the changes. It also allows anyone to see the status of a change request.
2. Review the change. Experts within the organization review the change. Personnel reviewing a
change are typically from several different areas within the organization. In some cases, they
may quickly complete the review and approve or reject the change. In other cases, the change
may require approval at a formal change review board after extensive testing.
3. Approve/reject the change. Based on the review, these experts then approve or reject the
change. They also record the response in the change management documentation. For
example, if the organization uses an internal website, someone will document the results in
the website’s database. In some cases, the change review board might require the creation of
a rollback or back-out plan. This ensures that personnel can return the system to its original
condition if the change results in a failure.
4. Schedule and implement the change. The change is scheduled so that it can be implemented
with the least impact on the system and the system’s customer. This may require scheduling
the change during off-duty or nonpeak hours.
5. Document the change. The last step is the documentation of the change to ensure that all
interested parties are aware of it. This often requires a change in the configuration
management documentation. If an unrelated disaster requires administrators to rebuild the
system, the change management documentation provides them with the information on the
change. This ensures they can return the system to the state it was in before the change.
Together, change and configuration management techniques form an important part of the
software engineer’s arsenal and protect the organization from development-related security
issues.
The change management process has three basic components:
Request Control - provides an organized framework within which users can request
modifications, managers can conduct cost/ benefit analysis, and developers can prioritize
tasks.
Change Control - provides an organized framework within which multiple developers can
create and test a solution prior to rolling it out into a production environment. Change control
includes conforming to quality control restrictions, developing tools for update or change
deployment, properly documenting any coded changes, and restricting the effects of new
code to minimize diminishment of security.
Release Control - Once the changes are finalized, they must be approved for release
through the release control procedure.
Programming Concepts
Machine Code is binary language built into a CPU. Just above that is assembly language, which
are low level commands. Humans use source-code and convert it into machine code with
compilers.
Interpreters can translate each line of code into machine language on the fly while the program
runs
Bytecode is an intermediary form between source and machine code ready to be executed in a
Java Virtual Machine
Procedural – uses subroutines, procedures and functions, step-by-step. Examples include C and
FORTRAN
Object-oriented – define abstract objects through the uses of classes, attributes and methods.
Examples include C++ and Java
A class is a collection of common methods that define actions of objects
Databases are structured collections of data that allow queries (searches), insertions, deletions and
updates
Database Management Systems are designed to manage the creation, querying, updating and
administration of databases. Examples include MySQL, PostgreSQL, Microsoft SQL Server, etc
Types of databases:
Relational (RDBMS)– most common. Uses tables which are made up of rows and columns.
A row is a database record, called a tuple. The number of rows is referred to as a table’s
cardinality.
A column is called an attribute. The number of columns is referred to a table’s degree
There is no limit to how many candidate keys can be in a table; however, there can be only one
primary key.
Polyinstantiation is the concept of allowing multiple records that seem to have the
same primary key values into a database at different classification levels. This
means it can be used to prevent unauthorized users from determining classified
info by noticing the absence of info normally available to them
Foreign Keys – enforce relationships between two tables, also known as referential integrity.
This ensures that if one table contains a foreign key, it corresponds to a still-existing primary key
in the other table.
Hierarchical – stores data in a tree structure of parent/child records; an LDAP directory is a common example.
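The relational concepts above (primary keys, foreign keys, referential integrity) can be sketched with Python's built-in sqlite3 module. The table and column names here are illustrative, not from any particular exam scenario:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this pragma is set

# Parent table: each row (tuple) has a unique primary key
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT)")
# Child table: dept_id is a foreign key referencing the parent's primary key
conn.execute("""CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT,
    dept_id INTEGER REFERENCES department(dept_id))""")

conn.execute("INSERT INTO department VALUES (1, 'Security')")
conn.execute("INSERT INTO employee VALUES (100, 'Alice', 1)")  # OK: department 1 exists

fk_enforced = False
try:
    conn.execute("INSERT INTO employee VALUES (101, 'Bob', 99)")  # department 99 does not exist
except sqlite3.IntegrityError:
    fk_enforced = True  # referential integrity blocked the orphan row
print("FK enforced:", fk_enforced)
```

The rejected insert is exactly what referential integrity means: a foreign key value must correspond to a still-existing primary key in the referenced table.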
Database Normalization
Rules that remove redundant data and improve the integrity and availability of the database
Three rules:
First Normal Form (1NF) – divide data into tables
Second Normal Form (2NF) – move data that is only partially dependent on the primary key to
another table (every attribute in the table must relate to the entire primary key)
Third Normal Form (3NF) – remove data that is not dependent on the primary key
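A quick sketch of what a normalized layout looks like in practice, again using sqlite3 with illustrative table names. Customer attributes depend only on the customer, so they live in their own table rather than being repeated in every order row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Normalized: customer details are stored once, not repeated per order
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(customer_id),
    item        TEXT)""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme', 'Austin')")
conn.executemany("INSERT INTO orders VALUES (?, 1, ?)",
                 [(10, 'router'), (11, 'firewall')])

# A join reassembles the full picture; the city exists in only one place,
# so updating it cannot leave inconsistent copies behind
rows = conn.execute("""SELECT o.order_id, c.name, c.city, o.item
                       FROM orders o JOIN customer c USING (customer_id)
                       ORDER BY o.order_id""").fetchall()
print(rows)
```

If the city were stored in every order row instead, a single customer move would require updating many rows, and missing one would corrupt integrity, which is exactly what normalization guards against.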
During software testing, APIs, User Interfaces (UIs) and physical interfaces are tested
Waterfall - has a feedback loop that allows progress one step backwards or forwards.
o Emphasis on early documentation
o The Modified Waterfall Model adds validation/verification to the process
Spiral – improves on the two previous models because each step of the process goes through the
entire development lifecycle
Agile – highest priority is satisfying the customer through early and continuous delivery. It does not
prioritize security.
o Agile Manifesto:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Software Development Lifecycle Phases
Software Escrow
DevOps
Old system had strict separation of duties between devs, quality assurance and production
DevOps is more agile with everyone working together in the entire service lifecycle
Maturity Models
Software Capability Maturity Model (SW-CMM) – states all software development matures through
phases in a sequential fashion. Intends to improve maturity and quality of software by implementing
an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes.
1. Initial – developers are unorganized with no real plan. No defined software development process
2. Repeatable – lifecycle management processes are introduced. Code is reused.
3. Defined – developers operate according to a set of documented processes. All actions occur within
the constraints of those processes.
4. Managed – Quantitative measures are used to understand the development process.
5. Optimizing – Processes for defect prevention, change management, and process change are
used.
Java, C++, etc. Objects contain data and methods. Objects provide data hiding.
Object – account, employee, customer, whatever
Method – actions on an object
Class – think of a blueprint. Defines the data and methods the object will contain. Does not contain
data or methods itself, but instead defines those contained in objects
Polymorphism – objects can take on different forms. This is common among malware that modifies
its code as it propagates to avoid detection.
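The class/object/method/polymorphism relationships above can be shown in a few lines of Python. The `Account` names are made up for illustration:

```python
# Class: a blueprint defining the data (attributes) and methods of its objects
class Account:
    def __init__(self, owner, balance):
        self.owner = owner          # data lives inside the object (data hiding)
        self._balance = balance

    def describe(self):             # method: an action performed on the object
        return f"{self.owner}: generic account"

# Polymorphism: a subclass overrides describe(), so the same call
# takes on a different form depending on the object's actual class
class SavingsAccount(Account):
    def describe(self):
        return f"{self.owner}: savings account"

accounts = [Account("Alice", 100), SavingsAccount("Bob", 200)]
descriptions = [a.describe() for a in accounts]
print(descriptions)  # same method name, different behavior per object
```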
The description of the database is called a schema, and the schema is defined by a Data Definition
Language (DDL).
A data definition language (DDL) or data description language (DDL) is a syntax similar to a
computer programming language for defining data structures, especially database schemas.
The data definition language concept and name was first introduced in relation to
the Codasyl database model, where the schema of the database was written in a language
syntax describing the records, fields, and sets of the user data model. Later it was used to refer to a
subset of Structured Query Language (SQL) for creating tables and constraints. SQL-92 introduced
a schema manipulation language and schema information tables to query schemas.
These information tables were specified as SQL/Schemata in SQL:2003. The term DDL is also used
in a generic sense to refer to any formal language for describing data or information structures.
Data Definition Language (DDL) statements are used to define the database structure or schema.
● CREATE - to create objects in the database
● ALTER - alters the structure of the database
● DROP - delete objects from the database
● TRUNCATE - remove all records from a table, including all space allocated for the records
● COMMENT - add comments to the data dictionary
● RENAME - rename an object
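A minimal sketch of DDL in action, run through sqlite3 (note that SQLite does not support TRUNCATE or COMMENT; the table name `asset` is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CREATE: define a new object (table) in the database schema
conn.execute("CREATE TABLE asset (asset_id INTEGER PRIMARY KEY, name TEXT)")

# ALTER: change the structure of an existing table
conn.execute("ALTER TABLE asset ADD COLUMN owner TEXT")

# DROP: remove an object, and all of its data, from the database
conn.execute("CREATE TABLE scratch (x INTEGER)")
conn.execute("DROP TABLE scratch")

# Inspect the resulting schema: table_info returns one row per column
cols = [row[1] for row in conn.execute("PRAGMA table_info(asset)")]
print(cols)  # ['asset_id', 'name', 'owner']
```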
DML - The Data Manipulation Language (DML) is used to retrieve, insert and modify database
information. These commands will be used by all database users during the routine operation of the
database. Some of the commands are:
INSERT - Allows addition of data
SELECT - Used to query data from the DB; one of the most commonly used commands
UPDATE - Allows updates to existing data
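The three DML commands together, sketched with sqlite3 (the `users` table is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")

# INSERT: add new data
conn.execute("INSERT INTO users VALUES (1, 'alice', 1)")
conn.execute("INSERT INTO users VALUES (2, 'bob', 1)")

# UPDATE: modify existing data
conn.execute("UPDATE users SET active = 0 WHERE name = 'bob'")

# SELECT: query data
active_users = conn.execute("SELECT name FROM users WHERE active = 1").fetchall()
print(active_users)  # [('alice',)]
```

Unlike the DDL statements above, none of these change the schema; they only touch the data inside it.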
Quick Review of Policies, Standards, Procedures, Baselines & Guidelines:
A standards document would include a list of software that must be installed on each computer. A
comprehensive security program includes the following components:
● Policies
● Procedures
● Standards
● Baselines
● Guidelines
Standards define the technical aspects of a security program, including any hardware and software
that is required. Because standards are mandatory, they must be followed by employees. Standards
should be detailed so that there are no questions as to what technologies should be used. For
example, a standard might state that employees must use a Windows 7 Professional desktop
computer with a multicore processor, 4 gigabytes of memory, and 1 terabyte of disk storage.
Standards help to ensure consistency, which can be beneficial while troubleshooting or when
implementing disaster recovery (DR) procedures.
Policies provide a high-level overview of the company's security posture, creating the basic
framework upon which a company's security program is based. A policy contains mandatory
directives that employees must follow. A well-formed policy should include the following four
elements:
● Purpose – the reason the policy exists
● Scope – the entities that are covered by the policy
● Responsibilities – the things that covered individuals must do to comply with the policy
● Compliance – how to measure the effectiveness of the policy and the consequences for violating
the policy
When creating a policy, you should not be too specific, because the methods for implementing the
policy will change over time. In addition, the policy should not be too technical, because the
technologies that will be covered under the policy will also change over time. By avoiding specific
and technical terms, you will ensure that the policy can survive without updates for a longer period of
time.
Procedures are low-level guides that explain how to accomplish a task. Like policies and standards,
procedures are mandatory. However, unlike policies, procedures are specific, providing as much
detail as possible. A procedure should also be as clear as possible so that there are no questions as
to how each step of the procedure should be followed. For example, a company might have a
procedure for terminating employees. If step 4 of the procedure is to take the terminated employee's
security badge and step 5 is to escort the employee from the building, the employee should not be
escorted from the building until the employee's security badge is surrendered. By following each step
of a procedure, company employees can mitigate social engineering attacks.
Baselines provide a minimum level of security that a company's employees
and systems must meet. Like standards, baselines help to ensure consistency. However, unlike
standards, baselines are somewhat discretionary. For example, a baseline might state that
employees' computers must run Ubuntu 11.10 or a later version. Employees are not required to use
Ubuntu 11.10; they can use any version of Ubuntu as long as it is version 11.10 or higher. If
employees must use a particular version of software, you should create a standard instead.
Guidelines provide helpful bits of advice to employees. Unlike policies, procedures, and standards,
guidelines are discretionary. As a result, employees are not required to follow guidelines, even
though they probably should. For example, a guideline might contain recommendations on how to
avoid malware. If you want to require that employees follow certain steps to avoid malware, you
should create a procedure; if you want to require that employees use a certain brand or version of
antivirus software, you should create a standard.
SDLC mnemonic >> A Dance In The Dark Every Monday, representing Analysis, Design,
Implementation, Testing, Documentation, Execution, and Maintenance
SOC Reports Quick review:
SOC 1 – Internal use; available to the company's auditors and controller's office – not public
SOC 2 – An audit against five trust services criteria: the security, availability, or processing integrity
of a service organization's system, or the privacy or confidentiality of the information the system
processes. Available to management and others under strict NDA – not widely public
SOC 1 Type 1 – A SOC 1 report scoped to a particular date. Because it is a single point in time, it is
not as telling as a Type 2.
SOC 1 Type 2 – A SOC 1 report scoped to a particular date range. Because it covers a date range
instead of an individual date, it is a more thorough and telling report.
A passphrase does not typically change automatically at consistent intervals. However, you can
manually change a passphrase. Passphrases are typically long and contain few random characters.
The sentence-like structure of passphrases makes them easy to remember. The length of a
passphrase can help offset a lack of randomness. However, a passphrase can be easily cracked if it
is a string of text that can be found in a book, song, or other written work. Because some systems
limit the number of characters that can be used in a password or require a certain amount of
complexity and randomness in the string, the creation of passphrases is not always an option.
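The claim that length can offset a lack of randomness is just arithmetic: each randomly chosen character contributes log2(pool size) bits of entropy. A small sketch (the specific pool sizes are illustrative, and this assumes characters are chosen at random, not taken from a book or song):

```python
import math

def entropy_bits(length, pool_size):
    """Approximate entropy of a randomly chosen string:
    each character contributes log2(pool_size) bits."""
    return length * math.log2(pool_size)

# 10 random characters drawn from ~95 printable ASCII symbols
short_random = entropy_bits(10, 95)
# 40-character passphrase drawn only from lowercase letters plus space (27 symbols)
long_simple = entropy_bits(40, 27)

print(f"10-char random password: ~{short_random:.0f} bits")
print(f"40-char lowercase passphrase: ~{long_simple:.0f} bits")
```

The longer, simpler passphrase ends up with far more entropy, which is the trade-off the paragraph describes. The caveat stands: a passphrase copied from published text has far less entropy than this model suggests, because attackers can try known phrases directly.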
A one-time password does not typically change automatically at consistent intervals. You cannot
typically change a one-time password. A one-time password is a short string of characters that can
be used only once. One-time passwords can be simple words from a dictionary or, depending on the
password complexity requirements of the authenticating system, a combination of characters that
does not form a dictionary word.
A static password does not change automatically at consistent intervals. However, you can manually
change a static password. A static password is a short string of characters that can be used more
than once. Static passwords can be simple words from a dictionary or, depending upon the
password complexity requirements of the authenticating system, a combination of characters that
does not form a dictionary word.
BCP sequence is
1. Develop a BCP policy statement
2. Conduct a BIA
3. Identify preventive controls
4. Develop recovery strategies
5. Develop an IT contingency plan.
6. Perform DRP training and testing
7. Perform BCP/DRP maintenance.
Wireless Configuration Modes: The wireless clients are configured to use infrastructure mode.
Infrastructure mode, which is also called master mode, is the most commonly used 802.11 wireless
mode. In infrastructure mode, wireless clients use an AP to communicate with other clients.
The wireless clients are not configured to use ad hoc mode. Ad hoc mode, which is sometimes
called peer-to-peer mode, enables two 802.11 wireless clients to communicate directly with each
other without an AP. This mode is useful for sharing files from one computer to another. Some
wireless printers are designed to communicate with print clients by using ad hoc mode.
The wireless clients are not configured to use client mode. Client mode, which is also called
managed mode, is used to allow wireless clients to communicate only with the AP. In client mode,
wireless clients cannot communicate with other clients. This mode is useful for centralized
configuration of wireless devices through the AP.
The wireless clients are not configured to use monitor mode. Monitor mode is used to sniff WLAN
traffic. This is a read-only mode; data can be received but not sent by a wireless client in monitor
mode.
BCP Test: A structured walk-through is the method of DR plan testing that is also known as a table-
top exercise. DR planning involves storing and maintaining data and hardware in an offsite location
so that the alternate assets can be used in the event that a disaster damages hardware and data at
the primary facility. Properly planning for a disaster can help a company recover as quickly as
possible from the disaster. However, plans must be tested to ensure viability before a disaster
occurs. There are five primary methods of testing a DR plan:
● A read-through test
● A structured walk-through / table-top exercise
● A simulation test
● A parallel test
● A full-interruption test
A read-through test involves the distribution of DR plan documents to other members of the DR
team. Each member of the team reviews the DR plan documents to ensure familiarity with the plan,
to identify obsolete or erroneous material, and to identify any DR plan roles that are missing
personnel assignments.
A structured walk-through test, which is also known as a table-top exercise, is a step beyond a read-
through test in that the DR team gathers around a table to role-play the DR plan in person given a
disaster scenario known only to the gathering's moderator. Following the structured walk-through
test, team members review the DR plan to determine the appropriate actions to take given the
scenario.
A simulation test is a step beyond a structured walk-through in that members of the DR team are
asked to develop a response to a given disaster. The input from the DR team is then considered for
testing against a simulated disaster to determine viability.
A parallel test is a step beyond a simulation test in that employees are relocated to the DR plan's
recovery location. At the location, employees are expected to activate the recovery site just as they
might when faced with a genuine disaster. However, operations at the primary site are never shut
down.
A full-interruption test is a step beyond a parallel test in that employees are relocated to the DR
plan's recovery location and a full shutdown of operations occurs at the primary location. Of all the
DR plan testing methods, a full-interruption test is the least likely to occur because it results in
significant business interruption that might be deemed unnecessary by management.
Smoke Sensors: An electrical charge is a method of fire detection that is typically used by smoke
sensors. Both ionization and photoelectric smoke sensors create an electrical charge that can be
interrupted by the presence of smoke. Ionization smoke sensors use a radioactive emission to
create the charge. Photoelectric sensors create the charge by using a light emitting diode (LED) that
sends a signal to a photoelectric sensor. When smoke interrupts the electric charge, the sensor will
trigger an alarm. Smoke sensors can generate false positives by detecting dust or other airborne
contaminants as smoke.
Infrared light and ultraviolet light are methods of fire detection that are used by flame sensors, not
smoke detectors. Flame sensors work by detecting either infrared or ultraviolet light from fire.
Therefore, a flame sensor typically requires a line of sight with the source of a fire. By themselves,
flame sensors are appropriate for detecting fire only from specific sources.
Temperature is a method of fire detection that is used by heat sensors, not smoke sensors. Heat
sensors work by measuring the ambient temperature of an area. If the temperature exceeds a
predetermined threshold or if the temperature begins to rise faster than a predetermined rate, the
sensor will trigger an alarm.
SDLC Phases: The system will be tested by an independent third party in the acceptance phase.
The SDLC is a series of steps for managing a systems development project. The phases of the
SDLC include all of the following:
1. Initiation and planning is the development of the idea, which includes documenting the system's
objectives and answering questions that naturally flow from the idea.
2. Functional requirements definition is the process of evaluating how the system must function to fit
end-user needs, including any future functionality that might not exist in the system's first release.
3. System design specifications defines how data enters the system, flows through the system, and
is output from the system.
4. Development and implementation involves the creation of the system's source code, including the
selection of a development methodology. In addition, source code is tested, analyzed for efficiency
and security, and documented during this phase.
5. Documentation and common program controls is the process of documenting how data is edited
within the system, the types of logs the system generates, and the system revision process. The
number of controls that are required for a system can vary depending on the system's size, function,
and robustness.
6. Acceptance is the phase at which the system is tested by an independent third party.
The testing process includes functionality tests and security tests, which should verify that the
system meets all the functional and security specifications that were documented in previous
phases.
7. Testing and evaluation controls include guidelines that determine how testing is to
be conducted. The controls ensure that the system is thoroughly tested and does not interfere with
production code or data.
8. Certification is the process in which a certifying officer compares the system against a
set of functional and security standards to ensure that the system complies with those standards.
9. Accreditation is the process by which management approves the system for implementation. Even
if a system is certified, it might not be accredited by management; conversely, even if a system is not
certified, it might still be accredited by management.
10. Implementation is the phase at which the system is transferred from a development environment
to a production environment. The phases of the SDLC are part of a larger process that is known as
the system life cycle (SLC). The SLC includes two additional phases after the implementation phase
of the SDLC:
11. Operations and maintenance support is a post-installation phase in which the system is in use in
a live production environment. During this phase, the system is monitored for weaknesses that were
not discovered during development. In addition, the system's data backup and restore methods are
implemented during this phase.
12. Revisions and system replacement is the process of evaluating the live production system for
any new functionality or features that end users might require from the system. If changes to the
system are required, the revisions should step through the same phases of the SDLC that the
original version followed.
Security incident response is a seven-stage methodology that is used to mitigate the effects of a
security breach. A proper security incident response contains all of the following phases:
1. Detection involves the discovery of a security incident by using log reviews, detective
access controls, or automated analysis of network traffic.
2. Response is the process of activating the incident response team.
3. Mitigation is the process of containing the incident and preventing further damage.
4. Reporting is the process of documenting the security incident so that management and
law enforcement can be fully informed.
5. Recovery is the process of returning the system to the production environment.
Recovered systems should be carefully monitored to ensure that no traces of the agent that caused
the security breach remain on the system.
6. Remediation is the process of understanding the cause of the security breach and
preventing the breach from occurring again. The remediation phase is valuable because it can
identify flaws in a security system or process as well as ways of preventing similar security
incidents.
7. Lessons Learned is the process of reviewing of the incident to determine whether any
improvements in response can be made.
CVE, Common Vulnerabilities and Exposures, which defines the naming system for describing
vulnerabilities
CVSS, Common Vulnerability Scoring System, which defines a standard scoring system for
vulnerability severity
CCE, Common Configuration Enumeration, which defines a naming system for system
configuration problems
CPE, Common Platform Enumeration, which defines an operating system, application, and
device naming system
XCCDF, Extensible Configuration Checklist Description Format, which defines a language format
for security checklists
OVAL, Open Vulnerability and Assessment Language, which defines a language format for
security testing procedures
Here is a list of IS audit steps: