CISSP End Game


What worked for me to crack CISSP in 1st Attempt

Read Sybex or CBK cover to cover once.


CISSP Training session of 4 weeks (weekends only). This will help you expand your
knowledge through discussion and also grow your professional network.
Watched Kelly Handerhan videos and audio during my travel time to the office.
Watched all of the IT Dojo Question of the Day videos. Don't watch them all in one go; use this
resource whenever you feel bored of reading the book or solving questions.
 https://www.youtube.com/channel/UCwUkAunxT1BNbmKVOSEoqYA
Listened to Simple CISSP by Phil Martin on Amazon Audible (US)
Solved more than 2,000 CCCure questions, consistently scoring between 70% and 80%.
(Whether your answer is correct or incorrect, make sure to read the explanation and
understand it.)
Solved 750 Boson questions, making sure to understand the explanations.
Refreshed my learnings through flash cards and quizzes during my free time.
 https://quizlet.com/117386750/cissp-flash-cards/
 https://www.brainscape.com/flashcards/ch-1-4307018/packs/6347928
Solved free questions available online. (Please make sure to research the options where no
explanation is given.)
 https://vceguide.com/isc/cissp-certified-information-systems-security-professional/page/46/
 https://www.briefmenow.org/isc2/category/cissp-certified-information-systems-security-professional/
Joined CISSP Study groups on WhatsApp and Discord. Actively involved in the topic
discussions.
2 weeks before the exam, downloaded the CISSP exam outline from the (ISC)2 official website and
checked whether I had covered all the topics mentioned there.
One week before the exam, went through The Memory Palace and the Sunflower document.
2 days before the exam, reviewed my handwritten notes and the 11th Hour guide by Eric Conrad.

Please do not use this document as a primary source of preparation. It can help give a sharp,
final edge to your CISSP preparation.

My experiences during the CISSP journey, various websites, tech blogs, social media discussions and
notes from multiple CISSP books were used to create this document. It was created with the intent
to give a quick refresher for your preparation in one place and to help professionals aspiring for
the CISSP. I have not created it to violate any author's copyrights. Please inform me at my email
address if you find a copyright violation; I will review and remove the material.

BEST OF LUCK

S G -CISSP

sonu2hcl@hotmail.com
Eligibility Criteria

Minimum 5 years of experience in at least 2 of the domains.

Official guidelines, from (ISC)2

Meet CISSP Eligibility

To qualify for this cybersecurity certification, you must have:

 At least five years of cumulative, paid, full-time work experience
 In two or more of the eight domains of the (ISC)2 CISSP Common Body of
Knowledge (CBK)

Don’t have enough work experience yet? There are two ways you can overcome this
obstacle.

You can satisfy one year of required experience with:

 A four-year college degree (or a regional equivalent)


 Or, an approved credential from the CISSP Prerequisite pathway
Your second option is to take and pass the CISSP exam to earn an Associate of (ISC)2
designation. Then, you’ll have up to six years to earn your required work experience to
become the full-fledged CISSP.
The domain weights are as follows:

Domains Weightings (Percentage)


Domain 1: Security and Risk Management 15%

Domain 2: Asset Security 10%

Domain 3: Security Architecture and Engineering 13%

Domain 4: Communication and Network Security 14%

Domain 5: Identity and Access Management (IAM) 13%

Domain 6: Security Assessment and Testing 12%

Domain 7: Security Operations 13%

Domain 8: Software Development Security 10%

Total 100%
Domain 1: Security and Risk Management

Understand and apply concepts of confidentiality, integrity and availability

 Finding & setting the right combination of Confidentiality, Integrity and Availability is a balancing
act.
 This is really the tough job of IT Security – finding the RIGHT combination for your organization.
 Too much Confidentiality and the Availability can suffer.
 Too much Integrity and the Availability can suffer.
 Too much Availability and both the Confidentiality and Integrity can suffer.
 The opposite of the CIA Triad is DAD (Disclosure, Alteration and Destruction).
 Disclosure – Someone not authorized gets access to your information.
 Alteration – Your data has been changed.
 Destruction – Your data or systems have been destroyed or rendered inaccessible.

Evaluate and apply security governance principles

 Security Governance Principles – goal is to maintain business processes. IT security goals support
the business goals (compliance, guidelines, etc). You shouldn’t be looking for the best technical
answer, but the answer that best supports the business.

Top-Down approach is ideal. Upper management should always be involved. If you
see anything referring to a bottom-up approach, it's wrong.

 Understand the difference between subjects and objects


Subjects – active entity on a system. Normally people. Programs can be subjects as
well, such as a script updating files
Object – passive data on a system

 Frameworks help avoid building IT security in a vacuum or without considering important concepts
 Can be regulations, non-regulation, industry-specific, national, international
 COBIT is an example. Set of best practices from ISACA. Five key principles.
Focuses on WHAT you’re trying to achieve. Also serves as a guideline for auditors:
Principle 1 – Meeting Stakeholder needs
Principle 2 – Covering the enterprise end-to-end
Principle 3 – Applying a single, integrated framework
Principle 4 – Enabling a holistic approach
Principle 5 – Separating governance from management
 ITIL is the de facto standard for IT service management. How you’re trying to achieve
something
 ISO 27000 series. Started as a British standard.
27005 refers to risk management
27799 refers to personal health info (PHI)

 OCTAVE: Operationally Critical Threat, Assets and Vulnerability Evaluation was developed at
Carnegie Mellon University’s CERT Coordination Center. This suite of tools, methods and
techniques provides two alternative models to the original. That one was developed for organizations
with at least 300 workers. OCTAVE-S is aimed at helping companies that don’t have much in the
way of security and risk-management resources. OCTAVE-Allegro was created with a more
streamlined approach and supports self-directed risk assessments.
 FAIR: Factor Analysis of Information Risk was developed to understand, analyze and measure
information risk. It also has the support of the former CISO of Nationwide Mutual Insurance, Jack
Jones. This framework has received a lot of attention because it allows organizations to carry out
risk assessments to any asset or object all with a unified language. Your IT people, those on the IRM
team and your business line staff will all be able to work with one another while using a consistent
language.
 TARA: The Threat Agent Risk Assessment was created back in 2010 by Intel. It allows companies to
manage their risk by considering a large number of potential information security attacks and then
distilling them down to the likeliest threats. A predictive framework will then list these threats in terms
of priority.
 ITIL: Information Technology Infrastructure Library provides best practices in IT Service
Management (ITSM). It was created with five different “Service Management Practices” to assist you
in managing your IT assets with an eye on preventing unauthorized practices and events.

 That does not mean you are not liable as well; you may be, and that depends on Due Care
 Due Diligence vs Due Care
 Due Diligence – the thought, planning or research put into the security architecture of
your organization. This would also include developing best practices and common
protection mechanisms, and researching new systems before implementing them
 Due Care – is an action. It follows the Prudent Person Rule which begs the question
“what would a prudent person do in this situation?” This includes patching systems,
fixing issues, reporting, etc in a timely fashion.
Negligence is the opposite of due care. If you did not perform due care to
ensure a control isn’t compromised, you are liable
Auditing is a form of due care
Liability – Senior leadership (or authorized personnel, if senior leadership is not
available as an option) is always ultimately liable/responsible.

Determine compliance requirements


Compliance – Compliance means being aligned with industry regulations, guidelines, and
specifications. It is a crucial part of security governance, as accountability can only take place when
employees are properly taught the rules, standards, policies, and regulations that they need to
adhere to.

 Important Laws/Regulations:
Health Insurance Portability and Accountability Act – States that covered entities should
disclose breaches in security pertaining to personal information. This act applies to health
insurers, health providers, and claims and processing agencies. It seeks to guard Protected
Health Info (PHI). Applies to “covered entities” (healthcare providers, health plans,
clearinghouses).
 HITECH Act of 2009 makes HIPAA’s provisions apply to business associates of
covered entities (lawyers, accountants, etc)
 Any PII associated with healthcare is Personal Health Info (PHI)

Gramm-Leach-Bliley Financial Modernization Act – This act covers financial agencies and
aims to increase protection of customer’s PII.
Patriot Act of 2001 – Provides a wider coverage for wiretapping and allows search and
seizure without immediate disclosure.
Electronic Communications Privacy Act (ECPA) – Enacted in 1986, it aims to extend
government restrictions when it comes to wiretapping phone calls to cover transmissions of
electronic data.
Sarbanes-Oxley Act (SOX) – Enacted in 2002 in response to the Enron scandal, this helps
ensure that all publicly held companies have their own procedures and internal
controls necessary for financial reporting. The act aims to minimize, if not eliminate,
corporate fraud.

There are six types of computer crimes (not necessarily defined by CFAA. Need to know the
motives, which are obvious based on the name of the attack):
1. Military/intelligence
2. Business
3. Financial
4. Terrorist
5. Grudge
6. Thrill
Electronic Communications Privacy Act (ECPA) – protected against warrantless wiretapping
Patriot Act of 2001 expanded law enforcement electronic monitoring capabilities.
Communications Assistance to Law Enforcement Act (CALEA) requires all communications
carriers to make wiretaps possible for law enforcement with an appropriate court order.
PCI DSS – not a law. Self-regulation by major vendors. Mandates controls to protect
cardholder data.
Computer Security Act – required US federal agencies to ID computers that contain sensitive
info. They must then develop security policies for each system and conduct training for
individuals involved with those systems.
California Senate Bill 1386 – one of the first US state-level laws related to breach
notification. Required organizations experiencing data breaches to inform customers
Understand legal and regulatory issues that pertain to information security in a global
context

 EU Data Protection Directive – very pro privacy. Organizations must notify individuals regarding
how their data is gathered and used. Must allow an opt-out option for sharing with 3rd parties.
Opt-in is required for "most sensitive" data. Transmission of data outside of the EU is not allowed unless
recipients have equal privacy protections. The US does NOT meet this standard. Safe Harbor is an
optional agreement between the organization and the EU, where the organization must voluntarily
consent to data privacy principles that are consistent with this.
 EU Court of Justice overturned the Safe Harbor agreement in 2015. In 2016 the EU
Commission and US Department of Commerce established the EU-US Privacy Shield, a new
legal framework for transatlantic data transmission which replaced Safe Harbor.

 General Data Protection Regulation – went into effect in 2018 which replaced the above
restrictions.
Applies to all organizations worldwide that offer services to EU customers
Extends concept of PII to photos/videos/social media posts, financial transactions, location
data, browsing history, login credentials, and device identifiers
Data collection, retention and sharing must be minimized exclusively for the intended
purpose.
Data breach requires notification within 72 hours of discovery
Organizations that deal with personal data on a large scale must appoint a Data Protection
Officer to their boards
Focus of controls are on encryption and pseudonymization, which is the process of replacing
some data elements with pseudonyms and makes it more difficult to identify individuals.
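The pseudonymization idea above can be sketched in a few lines. This is an illustrative sketch, not a GDPR-compliant implementation; the record fields and the `pid-` prefix are invented for the example.

```python
import secrets

def pseudonymize(records, fields):
    """Replace direct identifiers with random pseudonyms. The mapping is kept
    separate, so re-identification is only possible with the lookup table."""
    mapping = {}   # real value -> pseudonym; store this under separate controls
    out = []
    for rec in records:
        rec = dict(rec)  # copy so the originals are untouched
        for field in fields:
            value = rec.get(field)
            if value is not None:
                if value not in mapping:
                    mapping[value] = "pid-" + secrets.token_hex(4)
                rec[field] = mapping[value]
        out.append(rec)
    return out, mapping

records = [{"name": "Alice", "purchase": 42}, {"name": "Alice", "purchase": 7}]
pseudo, mapping = pseudonymize(records, ["name"])
# The same person gets the same pseudonym, so analysis still works,
# but the records no longer carry the direct identifier.
```

Note how pseudonymization differs from anonymization: the mapping table still allows re-identification, which is why GDPR treats pseudonymized data as personal data.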
 Wassenaar Arrangement – export/import controls for conventional arms and dual-use goods and
technologies. Cryptography is considered "dual use". Countries like Iran, Iraq, China and
Russia are not participants; they want to monitor their citizens, so overly strong cryptography
cannot be exported to them. The US is a participating state. Companies like Google have to make
country-specific technology because of this.
 Digital Millennium Copyright Act of 1998 – prohibits the circumvention of copy-protection
mechanisms placed in digital media, and relieves ISPs of liability for the activities of their users
 Intellectual Property Protections:
 Trademark – names, slogans and logos that identify a company/product. Cannot be
confusingly similar to any trademarks. They are good for 10 years, but can be renewed
indefinitely.
 Patent – Has to be registered with the US Patent office, which is public information. Many
companies avoid this, such as 3M. A patent is good for 20 years, and is considered the
shortest of all intellectual property protections.
 Copyright – Creative content such as songs, book and software code. It requires disclosure
of the product and expires 70 years after the death of the author.
 Licenses – End-User License Agreement (EULA) is a good example
 Trade Secrets – KFC's special seasoning, Coca-Cola's formula, etc. Protected by NDAs
(non-disclosure agreements) and NCAs (non-compete agreements). These are the best
options for organizations that do not want to disclose any information about their products.
Trade secrets have no expiration.
 Intellectual Property Attacks:
 Software piracy
 Copyright infringement
 Corporate espionage – when competitors attempt to ex-filtrate information from a company,
often using an employee.
 Economic Espionage Act of 1996 – provides penalties for individuals found guilty of the theft of
trade secrets. Harsher penalties if the offender is attempting to aid foreign governments.
 Typosquatting – registering domain names that are common misspellings of legitimate sites
to capture mistyped traffic.

 Digital Rights Management is any solution that allows content owners to enforce any of the above
restrictions on music, movies, etc
Understanding SOC 1, SOC 2, and SOC 3 reports:
SOC 1 reports address a company's internal control over financial reporting. As mandated by
SSAE 18, SOC 1 is an audit of a third-party vendor's accounting and financial controls: a measure of
how well they keep their books. There are two types of SOC 1 reports — SOC 1 Type I and SOC 1
Type II. A Type I report covers the controls at a particular point in time (a specific single date),
while a Type II report is more rigorous and is based on testing of controls over a period of time.
Type II reports are judged more reliable because they address the effectiveness of controls over an
extended period.
SOC 2 is the most sought-after report in this domain and a must if you are dealing with an IT vendor.
It is a common misconception that SOC 2 is an upgrade of SOC 1; it is not. SOC 2 examines a
service organization's controls over one or more of the following Trust Service Criteria (TSC):
 Privacy
 Confidentiality
 Processing Integrity
 Availability
 Security

SOC 2 is built around a consistent set of parameters for the IT services a third party provides to
you. If you need a measure of whether a vendor provides private, confidential, available, and secure
IT services, ask for an independently audited and assessed SOC 2 report. Like SOC 1, SOC 2 has
two types — SOC 2 Type I and SOC 2 Type II.
Type I confirms that the controls exist, while Type II affirms that the controls are not only in place
but actually work. SOC 2 Type II is therefore a better representation of how well the vendor
protects and manages your data. Note that a SOC 2 Type II report must be audited by an
independent CPA.
SOC 3 is not an upgrade of SOC 2. It is a summarized version of the SOC 2 Type II report: less
detailed and less technical, designed as a public-facing seal of approval that can be posted on the
vendor's website. Because it is less detailed and less technical, it may not contain the intricacies of
the audit that your business requires.
A business should request and analyze the SOC reports from prospective vendors. They are
invaluable for verifying that adequate controls are in place and actually work effectively.
SOC reports — be it SOC 1, SOC 2, or SOC 3 — are also very helpful in ensuring that your
compliance with regulatory expectations is up to the mark.

Professional Ethics – (ISC)2 has a Code of Ethics that must be followed by certified information
security professionals. Those who intentionally or knowingly violate the code will face action
from a peer review panel, which could result in the revocation and nullification of the certification.

Code of Ethics Preamble:

The safety and welfare of society and the common good, duty to our principals, and to each
other, requires that we adhere, and be seen to adhere, to the highest ethical standards of
behavior.
Therefore, strict adherence to this Code is a condition of certification.

Code of Ethics Canons:

1. Protect society, the common good, necessary public trust and confidence, and the
infrastructure.
2. Act honorably, honestly, justly, responsibly, and legally.
3. Provide diligent and competent service to principals.
4. Advance and protect the profession.

Exam Tip: Be familiar with the order of the canons, which are applied in order. Priority follows the
order in which they are written: if an exam question presents a conflict, the 1st canon takes top
priority and the last canon the least. In any ethical dilemma, follow the order — protecting society
is more important than protecting the profession, for example.

Develop, document, and implement security policy, standards, procedures and guidelines

 Security Policies - These are the highest level and are mandatory. Everything about an
organization’s security posture will be based around this. Specifies auditing/compliance
requirements and acceptable risk levels. Used as proof that senior management has exercised due
care. Mandatory that it is followed.
 Wouldn’t use terms like Linux or Windows. That’s too low level. It would refer to these things
as “systems”
 Driven by business objectives and convey the amount of risk senior management is willing to
accept.
 Easily accessible and understood by the intended audience.
 Should be reviewed yearly or after major business changes, and dated with version number
 Very non-technical
 Standards – mandatory actions and rules. Describes specific technologies.
 Used to indicate expected user behavior. For example, a consistent company email
signature.
 Might specify what hardware and software solutions are available and supported.
 Compulsory and must be enforced to be effective. (This also applies to policies!)
 Baselines – represents a minimum level of security. Often refer to industry standards like TCSEC
and NIST
 Somewhat discretionary in the sense that security can be greater than your baseline, but
doesn’t need to be.
 Guidelines – Simply guide the implementation of the security policy and standards. These are
NOT MANDATORY in nature.
 Are more general vs. specific rules.
 Provide flexibility for unforeseen circumstances.
 Should NOT be confused with formal policy statements.
 Procedures – very detailed step-by-step instructions. These are mandatory. Specific to system and
software. Must be updated as hardware and software evolves.
 Often act as the “cookbook” for staff to consult to accomplish a repeatable process.
 Detailed enough and yet not too difficult that only a small group (or a single person) will
understand.
 Installing operating systems, performing a system backup, granting access rights to a system
and setting up new user accounts are all example of procedures

Identify, analyze and prioritize Business Continuity requirements:

 Organizational plans:
 Strategic Plan – Long term (5 years). Goals and visions for the future. Risk assessments fall
under this
 Tactical Plan – useful for about a year. Projects, hiring, budget etc.
 Operational Plan – short term (month or quarter). Highly detailed, more step-by step.
Steps for Business Impact Analysis:
1. Identify the company’s critical business functions.
2. Identify the resources these functions depend upon.
3. Select individuals to interview for data gathering.
4. Create data-gathering techniques (surveys, questionnaires, qualitative and quantitative
approaches).
5. Calculate how long these functions can survive without these resources.
6. Identify vulnerabilities and threats to these functions.
7. Calculate the risk for each different business function.
8. Document findings and prepare BIA report.
Various Plans of BCP:

Understand and apply risk management concepts

The Risk Management Framework steps include:

1. Categorize the information system and the information processed, stored, and transmitted by that
system based on an impact analysis.
2. Select an initial set of baseline security controls for the information system based on the security
categorization, tailoring and supplementing the security control baseline as needed based on an
organizational assessment of risk and local conditions.
3. Implement the security controls and describe how the controls are employed within the information
system and its environment of operation.
4. Assess the security controls using appropriate assessment procedures to determine the extent to
which the controls are implemented correctly, operating as intended, and producing the desired
outcome with respect to meeting the security requirements for the system.
5. Authorize information system operation based on a determination of the risk to organizational
operations and assets, individuals, other organizations, and the Nation resulting from the operation of
the information system and the decision that this risk is acceptable.
6. Monitor the security controls in the information system on an ongoing basis, including assessing
control effectiveness, documenting changes to the system or its environment of operation, conducting
security impact analyses of the associated changes, and reporting the security state of the system to
designated organizational officials.
 Businesses don’t care about information security; they care about business. Security is concerned
with managing the risks to a business. Always maintain a balance between business needs and
security.
 Risk Management Concepts:
 Risk = Threat x Vulnerability
 Assets – valuable resources to protect. People are always the most important assets
 Vulnerability – a weakness
 Threat - potentially harmful occurrence
 Impact takes into account the damage in terms of dollar amounts. Human life is
considered infinite impact.
 Risk Analysis Types:
Qualitative – scenario-oriented, using softer metrics. Asks: how likely is this to happen, and
how bad would it be? Normally done first, to know where to focus quantitative analysis.
Primary focus is determining what takes priority.
Quantitative – uses hard metrics, like dollars. Determine the worth of an asset, including how
much money it generates, value to competitors, legal liabilities, etc., and assign it a dollar value.
 EF and ARO will typically be described as a percentage. If a laptop is lost or stolen, the EF
would be 100%. But it's possible for a fire to damage only 25% of a warehouse. Likewise, a
flood may only be anticipated once in every 100 years, making the ARO .01 or 1%

 Steps of quantitative analysis:


 Asset Value (AV): Evaluate loss potential caused by an instance of damage. You will
determine your Exposure Factor and then calculate your SLE by multiplying the AV by EF
(SLE = AV x EF)
 Physical damage
 Loss of productivity
 Then frequency with which a threat is expected to occur will be calculated. Here we will be
calculating the ARO.
 Derived from historical data, statistical data, etc.
 Quick ARO Cheat Sheet:
 1-in-4 = .25
 1-in-10 = .1
 1-in-20 = .05
 1-in-50 = .02
 1-in-200 = .005
 1-in-500 = .002
 Determine the ALE by formula (ALE = SLE x ARO), OR (ALE = AV x EF x ARO)
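The formulas above (SLE = AV x EF, ALE = SLE x ARO) can be sketched as a quick calculator. The 25% fire damage and once-in-100-years flood frequency come from the text; the $100,000 asset value is an assumed illustration figure.

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF. exposure_factor is a fraction between 0 and 1."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO. aro is the expected number of occurrences per year."""
    return sle * aro

# Assumed $100,000 warehouse; a fire damages 25% of it (EF = 0.25);
# the event is anticipated once every 100 years (ARO = 0.01).
sle = single_loss_expectancy(100_000, 0.25)   # 25000.0
ale = annualized_loss_expectancy(sle, 0.01)   # 250.0
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```

The ALE is the number you compare against the annual cost of a countermeasure when deciding on a course of action below.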

Determine course of action:


Avoid the Risk – stop doing the thing or remove the thing from the network
Accept the Risk – you can’t fix it now, so the risk is accepted and monitored going forward
Reduce (Mitigate) the Risk – fix the issues or implement compensating controls
Transfer the Risk – risk transference typically = insurance
 Risk left over after applying countermeasures is Residual Risk. Total Risk is the risk a
company faces if it applies no safeguards. The difference between the two is the Control Gap.
 Total risk – controls gap = residual risk (risk leftover after applying safeguards)
 TCO – total cost of ownership of countermeasures. If TCO is lower than ALE, you have a
positive Return on Investment (ROI), if not then accept the risk.

 Qualitative Analysis: does not attempt to assign numeric values at all, but rather is scenario-
oriented. Uses more guesswork and approximate values or measurements, such as HIGH/MED/LOW
 Based more on softer metrics such as opinions, rather than numbers and historical
data
 Qualitative techniques and methods include brainstorming, focus groups, checklists,
Delphi Technique, etc.
Delphi Technique – The Delphi Technique is a method used to estimate the likelihood and outcome
of future events. A group of experts exchange views, and each independently gives estimates and
assumptions to a facilitator who reviews the data and issues a summary report. The group members
discuss and review the summary report, and give updated forecasts to the facilitator, who again
reviews the material and issues a second report. This process continues until all participants reach a
consensus. The experts at each round have a full record of what forecasts other experts have made,
but they do not know who made which forecast. Anonymity allows the experts to express their
opinions freely, encourages openness, and avoids the pressure of admitting errors when revising
earlier forecasts. In short: anonymous surveys, where participants cannot see who made which statement.
 Safeguard Evaluation:
 ALE (before safeguard) – ALE (after implementing safeguard) – annual cost of
safeguard = value to the company
 The value should not be negative. If it is, the cost of protecting an asset is more than
the asset itself.
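The safeguard evaluation formula above can be expressed directly. The dollar figures in the examples are made up for illustration.

```python
def safeguard_value(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """ALE(before safeguard) - ALE(after safeguard) - annual cost of safeguard.
    Positive: the safeguard is worth deploying. Negative: protecting the asset
    costs more than the expected loss it prevents."""
    return ale_before - ale_after - annual_cost

print(safeguard_value(50_000, 10_000, 15_000))  # 25000 -> worthwhile
print(safeguard_value(50_000, 45_000, 10_000))  # -5000 -> costs more than it saves
```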

Understand and apply threat modeling concepts and methodologies

 Threat Modeling – the process where potential threats are identified, categorized and analyzed.
Includes frameworks like STRIDE and DREAD
 Microsoft STRIDE Threat Categorization: Spoofing, Tampering, Repudiation, Information
Disclosure, Denial of Service, Elevation of privilege
 DREAD
 Damage – how bad is the attack?
 Reproducibility – how easy to reproduce attack?
 Exploitability – how hard to launch attack?
 Affected users – how many people would be affected?
 Discoverability – how easy to find the threat?
Each risk factor for a given threat can be given a score (for example, 1 to 10). The sum of all
the factors divided by the number of factors represents the overall level of risk for the threat. A
higher score signifies a higher level of risk and would typically be given a higher priority when
determining which threats should be focused on first.
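The DREAD scoring rule described above (sum of the five factors divided by the number of factors) might be computed like this. The two example threats and their factor values are invented for illustration.

```python
def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """Average of the five DREAD factors, each scored 1-10.
    A higher score means a higher level of risk."""
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be scored 1-10")
    return sum(factors) / len(factors)

# Hypothetical threats, scored for illustration only.
threats = {
    "SQL injection on login form": dread_score(8, 9, 6, 9, 7),   # 7.8
    "Verbose error messages":      dread_score(3, 10, 9, 2, 8),  # 6.4
}

# Highest score first: this is the order in which threats get attention.
for name, score in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.1f}  {name}")
```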
 Types of attackers: Hackers – Black Hat, White Hat, and Gray Hat
Outsiders – make up 48-62% of attackers, meaning 38-52% of attackers are internal, either
malicious or unintentional. Insider threats are always considered greater than outside threats.
 Script Kiddies – little or no coding knowledge, but have knowledge or access to hacking tools.
Just as dangerous as skilled hackers.
 Hacktivists – socially motivated, political motivation. Include organizations like Anonymous
who are known for DDoS-ing Visa, Mastercard and PayPal to protest the arrest of Julian
Assange. Often aim to assure free speech, etc
 State-Sponsored Hackers – often see attacks occurring during normal work hours. Essentially
a day job. 120 countries have been using internet as a weapon. Include attacks like Sony (N.
Korea), Stuxnet (US/Israel), etc
 Bots/Botnets
Network Attacks

 Malware propagates through four main techniques:


 File infection
 Service injection
 Boot sector infection
 Macro infection
 Anti-malware detects foul play primarily through signatures and heuristic-based methods
 LOKI – ICMP traffic with commands hidden in it. Not so effective in 2019
 Smurf Attack – Type of DDoS attack. ICMP packets with spoofed IP. The responses are all
redirected to the victim
 Fraggle Attack – Similar to Smurf attack, but uses UDP.
 SYN Flood - succession of TCP SYN requests without ever completing the 3 way handshake. Goal
is to consume all a server’s available memory.
 LAND Attack – packet with the same source and destination address
 Tear Drop – overlapping fragments, causing OS to get confused and crash.
 Patch OS
 Use a firewall that does fragment re-assembly
 Replay Attack – traffic is intercepted in a MITM attack, and resubmitted at a later time. Time
stamping messages is a simple countermeasure.
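The timestamping countermeasure mentioned above can be sketched by binding a timestamp into an HMAC over the message. The shared key, skew window, and message here are illustrative assumptions, not a production protocol.

```python
import hashlib
import hmac

SECRET = b"shared-key"   # assumed pre-shared key, for illustration only
MAX_SKEW = 30            # seconds a message stays valid

def sign(message: bytes, timestamp: float) -> bytes:
    """HMAC over message + timestamp, so the timestamp can't be altered."""
    payload = message + str(timestamp).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def accept(message: bytes, timestamp: float, tag: bytes, now: float) -> bool:
    """Reject stale messages (likely replays) before checking authenticity."""
    if abs(now - timestamp) > MAX_SKEW:
        return False
    expected = sign(message, timestamp)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

t = 1_000_000.0
tag = sign(b"transfer $10", t)
fresh = accept(b"transfer $10", t, tag, now=t + 5)       # True: within window
replayed = accept(b"transfer $10", t, tag, now=t + 300)  # False: stale, rejected
```

A real protocol would also track recently seen messages (a nonce cache) to stop replays inside the skew window.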

Apply risk-based management concepts to the supply chain

Supply Chain – this refers to the flow of assets or data. Audits, surveys, reviews, and testing can be
done in the supply chain, but the CBK says that it’s also acceptable to simply view the resulting
reports of those reviews for entities within the supply chain, and recommend enhanced or reduced
security to those entities.
For example, if your business contracts with IBM for custom computer parts, and there is an
intermediary company that delivers those parts especially for you, they may be subject to certain
types of audits or reviews. By reviewing their findings, you can discuss additional or more effective
approaches to security.

CVE is the SCAP component that provides a naming system that describes security vulnerabilities.
SCAP is a National Institute of Standards and Technology (NIST) standard protocol that provides
common sets of criteria for defining and assessing security vulnerabilities. Thus SCAP can be used
to ensure that correct information flows between organizations and between automated processes.
SCAP version 1.0 includes all of the following components:
 CVE, which defines the naming system for describing vulnerabilities
 CVSS, which defines a standard scoring system for vulnerability severity
 CCE, which defines a naming system for system configuration problems
 CPE, which defines an operating system, application, and device naming system
 XCCDF, which defines a language format for security checklists
 OVAL, which defines a language format for security testing procedures
Applications that conform to SCAP can therefore scan and score systems according to a standard
set of criteria. In addition, these applications can communicate with one another in a standard
fashion.
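As a concrete taste of SCAP-style scoring, the sketch below maps a CVSS v3 base score to its qualitative severity rating; the bands come from the CVSS v3 specification, while the function name is mine:

```python
# CVSS v3 qualitative severity ratings (bands per the CVSS v3 specification).
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```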
Domain 2: Asset Security
Identify and classify information and assets

 Classifying Data
 Labels – objects have labels assigned to them. Examples include Top Secret, Secret,
Unclassified etc, but are often much more granular. Sensitive data should be marked with a
label.
 Clearance - assigned to subjects. Determination of a subject’s trustworthiness
 Declassification is required once data no longer warrants the protection of its sensitivity level
 Military Classifications:
 Top Secret – Classified. Grave damage to national security
 Secret – Classified. Serious damage to national security
 Confidential – Classified. Damage to national security.
 Sensitive (but unclassified)
 Unclassified
 Private Sector Classifications
 Confidential
 Private – confidential regarding PII
 Sensitive
 Public
Creating Data Classification Procedures

1. Set the criteria for classifying the data.


2. Determine the security controls that will be associated with the classification.
3. Identify the data owner who will set the classification of the data.
4. Document any exceptions that might be required for the security of this data.
5. Determine how the custody of the data can be transferred.
6. Create criteria for declassifying information.
7. Add this information to the security awareness and training programs so users can understand
their responsibilities in handling data at various classifications.
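Step 2 above (associating security controls with a classification) might be sketched like this; the labels and control flags are invented examples, not an organizational standard:

```python
# Illustrative mapping of classification labels to required controls.
CONTROLS = {
    "public":       {"encryption": False, "access_logging": False},
    "sensitive":    {"encryption": True,  "access_logging": False},
    "confidential": {"encryption": True,  "access_logging": True},
}

def required_controls(label: str) -> dict:
    try:
        return CONTROLS[label.lower()]
    except KeyError:
        # Unlabeled or unknown data is safest treated at the highest classification
        return CONTROLS["confidential"]
```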

Determine and maintain information and asset ownership

Protect Privacy

 Senior Management is ULTIMATELY responsible for data security.


 Data Owner is an employee responsible for ensuring data is protected. Determines things such as
labeling and frequency of backups. This is different from the Owner in a Discretionary Access
Control system
 Data Custodian – performs the day-to-day work. They do the patching, backups, configuration, etc.
Do not make any critical decisions regarding how data is protected.
 System Owner – responsible for computers that house the data. Must make sure systems are
physically secure, patched, hardened, etc. from a decision-making perspective. Actual work is
delegated to system custodians
 Users – must comply with policies/procedures/standards. Must be trained, made aware of risk, etc
 Data Controllers – create sensitive data, and manage it. Think HR
 Data Processors – manage data on behalf of the controllers, such as an outsourced payroll
company
 Outsourcing is going to a third party. Offshoring is outsourcing to another country
 Data Collection Limitation – organizations should collect the minimum amount of sensitive data that
is required.
 Data Retention – data should not be kept beyond the period of usefulness or beyond legal
requirements. This is to reduce amount of work needed to be done in the event of an audit. If you
hoard data, you’ll likely have to go through all of it.
 Organizations that believe they may become the target of a lawsuit have a duty to preserve
digital evidence in a process known as eDiscovery. This includes information governance,
identification, preservation, collection, processing, review, analysis and presentation activities.
 You must always know where your data is. Outsourcing agreements must contain rules for
subcontractor access to data. Offshoring agreements must account for relevant laws and
regulations.
 Ensure contractors, vendors and consultants have clear expectations in the Service Level
Agreement (SLA). Examples may include maximum downtime allowed by vendor, and
penalties for failing to deliver on these.

Ensure appropriate asset retention

 Data remanence is the residual physical representation of data that has been in some way erased.
After storage media is erased there may be some physical characteristics that allow data to be
reconstructed. Data remanence plays a major role when storage media is erased for the purposes of
reuse or release. It also refers to data remaining on storage after imperfect attempts to erase it
 Happens on magnetic drives, flash drives and SSDs
 RAM is volatile and is lost if device is turned off. ROM is not.
 Cold Boot Attacks freeze RAM so its contents persist for 30 minutes or so after power-down
 Flash memory is written by sectors, not byte-by-byte. Thumb drives and SSDs are examples.
Blocks are virtual, compared to HDD’s physical blocks. Bad blocks are silently replaced by SSD
controller, and empty blocks are erased by a “garbage collection” process. Erase commands on an
SSD can’t be verified for successful completion, and makes no attempt to clean bad blocks. The
only way to verify there are no data remnants is to destroy the drive. Alternatively, you can encrypt
the drive before it is ever used.
 Read-Only Memory is nonvolatile and can’t be written to by end user
 Users can write to PROM only once
 EPROM/UVEPROM can be erased through the use of ultraviolet light
 EEPROM chips may be erased with electrical current
 Data Destruction Methods:
 Erasing – a simple delete operation; the data remains recoverable. This is not a preferred method.
 Sanitizing – removing data from storage media so it cannot be recovered. Useful for when you are selling the hardware
 Purging – most effective method if you can’t or won’t physically destroy drive.
 Destroying – the most effective method of data destruction
 Degaussing – Uses magnetic fields to wipe data. Only works against magnetic (mechanical) drives,
not SSDs
 Crypto-Shredding – Crypto-shredding is the deliberate destruction of all encryption keys for
the data; effectively destroying the data until the encryption protocol used is (theoretically,
some day) broken or capable of being brute-forced. This is sufficient for nearly every use
case in a private enterprise, but shouldn’t be considered acceptable for highly sensitive
government data. Encryption tools must have this as a specific feature to absolutely ensure
that the keys are unrecoverable. Crypto-shredding is an effective technique for the cloud
since it ensures that any data in archival storage that’s outside your physical control is also
destroyed once you make the keys unavailable. If all data is encrypted with a single key, to
crypto-shred you’ll need to rotate the key for active storage, then shred the “old” key, which
will render archived data inaccessible.
We don’t mean to oversimplify this option – if your cloud provider can’t rotate your keys or
ensure key deletion, crypto-shredding isn’t realistic. If you manage your own keys, it should
be an important part of your strategy. Done correctly, it is very effective. It is the only reliable
way to erase data from cloud provider storage, and the best answer if it appears as an option on
the exam.
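To illustrate the crypto-shredding idea only, here is a toy sketch. The SHA-256 keystream below is NOT real encryption and merely stands in for a proper cipher such as AES; the point is that destroying the sole key renders the ciphertext unrecoverable:

```python
# Toy illustration of crypto-shredding -- not real cryptography.
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a keyed SHA-256 counter keystream (stand-in for a real cipher)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = os.urandom(32)                          # the only copy of the key
ciphertext = keystream_xor(key, b"payroll records")
# "Shredding": destroy every copy of the key; the ciphertext is now unrecoverable
key = None
```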

 Drive Encryption – protects data at rest, even after a physical breach. Recommended for mobile
devices/media. Whole disk encryption is preferred over file encryption. Breach notification laws
exclude lost encrypted data. Backup data should be stored offsite. Informal processes, such as
storing media at an employee’s house, should be avoided.

Determine data security controls

 Controls – countermeasures put into place to mitigate risk. Combined to produce defense in depth.
Several categories
 Administrative – policies, procedures, regulations
 Technical – software, hardware or firmware
 Physical – locks, security guards, etc.
 Physical security affects all other aspects of an organization
 Alarm types include:
 Deterrent
 Repellent
 Notification
 There are no preventive alarms because they always trigger in response to something after
the fact
 Centralized Alarms alert remote monitoring stations
 Locks are the most common and inexpensive type of physical controls
 Lighting is the most commonly used form of perimeter physical control

Several types of controls:


 Preventive – stops a security incident or information breach before it happens; prevents actions, such as permissions, fences, firewalls, drug screening
 Detective – signals a warning when a security control has been breached, during or after an attack; examples include video surveillance, IDS, post-employment drug tests
 Corrective – remedies circumstances, mitigates damage, or restores controls; fixes damage to a system, such as antivirus software
 Recovery – restores conditions to normal after a security incident; restores functionality, such as restoring backups
 Deterrent – discourages an attack or people from violating security directives, such as a no-trespassing sign, legal disclosure banner, or sanction policy
 Directive – specifies acceptable rules of behavior within an organization; encourages or forces actions of subjects, such as compliance with regulations. Examples also include escape route signs, supervision, monitoring, and procedures.
 Compensating – substitutes for the loss of a primary control and mitigates risk down to an acceptable level; an additional control that makes up for a weakness in other controls, such as reviewing logs to detect violations of a computer use policy

 Order of Controls go Deter > Deny > Detect > Delay


 Taking ownership of an object overrides other forms of access control
Domain 3: Security Architecture and Engineering
Implement and manage engineering processes using secure design principles

 The Kernel is the heart of the operating system, which usually runs in Ring 0. It provides an
interface between hardware and the rest of the OS.
 Models:
Ring Model – Rings are arranged in a hierarchy from most privileged (most trusted, usually
numbered zero) to least privileged (least trusted, usually with the highest ring number). On
most operating systems, Ring 0 is the level with the most privileges and interacts most
directly with the physical hardware such as the CPU and memory. The ring model separates users
(untrusted) from the kernel (trusted).

Hypervisor Mode – Closely related to virtual memory are virtual machines, such as VMware,
VirtualBox, and VirtualPC. VMware and VirtualPC are the two leading contenders in this
category. A virtual machine enables the user to run a second OS within a virtual host. For
example, a virtual machine will let you run another Windows OS, Linux x86, or any other OS
that runs on x86 processor and supports standard BIOS booting. Virtual systems make use
of a hypervisor to manage the virtualized hardware resources to the guest operating system.
A Type 1 hypervisor runs directly on the hardware with VM resources provided by the
hypervisor, whereas a Type 2 hypervisor runs on a host operating system above the
hardware. Virtual machines are a huge trend and can be used for development and system
administration, production, and to reduce the number of physical devices needed. The
hypervisor is also being used to design virtual switches, routers, and firewalls. Also called
Ring -1 (“minus 1”). Allows virtual guests to operate in ring 0.
 Open Systems are hardware and software that use public standards.
 A Closed System uses proprietary standards.
 The CPU is the heart of the computer system. The CPU consists of the following:

 An arithmetic logic unit (ALU) that performs arithmetic and logical operations
 A control unit that extracts instructions from memory and decodes and executes the
requested instructions
 Memory, used to hold instructions and data to be processed
 Two basic designs of CPUs are manufactured for modern computer systems:

1. Reduced Instruction Set Computing (RISC)—Uses simple instructions that require a reduced
number of clock cycles.
2. Complex Instruction Set Computing (CISC)—Performs multiple operations for a single
instruction.

CPUs fetch machine language instructions and execute them. The four steps below take one clock
cycle to complete.
1. Fetch Instructions
2. Decode instructions
3. Execute instructions
4. Write (save) results
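The four-step cycle can be illustrated with a toy interpreter; the instruction format and opcodes below are invented for illustration, not any real ISA:

```python
# Toy one-instruction-at-a-time CPU cycle: fetch, decode, execute, write back.
def run(program, registers):
    pc = 0                                  # program counter
    while pc < len(program):
        instr = program[pc]                 # 1. fetch instruction
        op, dst, src = instr.split()        # 2. decode instruction
        if op == "ADD":
            result = registers[dst] + registers[src]   # 3. execute
        elif op == "MOV":
            result = registers[src]
        else:
            raise ValueError(f"unknown opcode {op}")
        registers[dst] = result             # 4. write (save) results
        pc += 1
    return registers

regs = run(["ADD r0 r1", "MOV r2 r0"], {"r0": 1, "r1": 2, "r2": 0})
```

A pipelined CPU would overlap these four stages for consecutive instructions instead of finishing one instruction before fetching the next.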

 Pipelining combines multiple steps into one process. Allows simultaneous


fetch/decode/execute/write steps for different instructions.
 Multitasking – Simply being able to do several things at once
 Multithreading permits multiple concurrent tasks to be performed within a single process.
 Multiprocessing – requires more than one CPU or cores.
 Multiprogramming is like multitasking but takes place on mainframe systems and requires
specific programming
 Symmetric multiprocessing uses one OS to manage all CPUs
 Asymmetric multiprocessing systems have one OS image per CPU
 Single-state processors are capable of operating at only one security level at a time. A multistate
processor can operate at multiple levels simultaneously.
 A superscalar processor is one that can execute multiple instructions at the same time, whereas
a scalar processor can execute only one instruction at a time. You will need to know this
distinction for the exam.
 Interrupts can be maskable and non-maskable. Maskable interrupts can be ignored by the
application, whereas nonmaskable interrupts cannot

Memory Protection

 One process cannot affect another. They’ll be sharing the same hardware, but memory allotted
for one process should not be able to manipulate the memory allotted for another.
 Virtual memory – provides isolation and allows swapping of pages in and out of RAM.
 Swapping – moves entire processes from primary memory (RAM) from or to secondary
memory (disk)
 Paging – copies a block from primary memory from or to secondary memory.
 WORM Storage - “write once, read many”. Ensures integrity as data cannot be altered after first
write. Examples include CD-R and DVD-R
 Two important security concepts associated with storage are protected memory and memory
addressing. For the exam, you should understand that protected memory prevents other
programs or processes from gaining access to or modifying the contents of address space that
has previously been assigned to another active program. Memory can be addressed either
physically or logically. Memory addressing describes the method used by the CPU to access the
contents of memory.
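Logical memory addressing can be sketched as a page-table lookup; the page size and table contents are arbitrary example values:

```python
# Minimal sketch of logical-to-physical address translation via a page table.
PAGE_SIZE = 4096

def translate(page_table, logical_addr):
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        # A real OS would page the data in from secondary memory here
        raise MemoryError(f"page fault: page {page} not resident")
    return frame * PAGE_SIZE + offset

table = {0: 7, 1: 3}           # page number -> physical frame number
phys = translate(table, 4100)  # page 1, offset 4 -> frame 3
```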

Secure Hardware

 Trusted Platform Module – hardware crypto-processor. Generates, stores and limits the use of
crypto keys. Literally a chip on a motherboard. Commonly used to ensure boot integrity.
 Data Execution Prevention – areas of RAM marked Non-Executable (NX bit). Prevents simple
buffer overflows.
 Address Space Layout Randomization – each process is randomly located in RAM. Makes it
difficult for attacker to find code that has been injected.
 Reference Monitor – mediates access between subjects and objects. Enforces security policy.
Cannot be bypassed.
 Trusted Computing Base – combination of HW, SW and controls that work together to form a
trusted base that enforces your security policies. TCB is generally the OS Kernel, but can include
things like configuration files.
 Security Perimeter – separates the TCB from the rest of the system. The TCB communicates
through secure channels called trusted paths
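The Reference Monitor above can be sketched as a single mediation point with a default-deny policy; the subjects, objects, and ACL entries are invented:

```python
# Sketch of a reference monitor: every access request passes through one
# mediation point, and anything not explicitly granted is denied.
ACL = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def reference_monitor(subject, obj, action):
    """Mediates access between subjects and objects; default is deny."""
    return action in ACL.get((subject, obj), set())
```

In a real system the monitor sits in the TCB so it cannot be bypassed or tampered with.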

Common Criteria includes six key concepts like:

TOE - Target of Evaluation: The system or product that is being evaluated.


PP - Protection Profile: Identifies security requirements for classes of devices like smart cards, or
network firewalls.
ST - Security Target: Identifies the security properties of the subject of evaluation. Differentiates
needs between items like database or firewalls.
SFR - Security Functional Requirements: Determines how a user in a role might be authenticated.
SAR - Security Assurance Requirements: Describes the measures taken during development and
evaluation to assure compliance with security functionality.
EAL - Evaluation Assurance Level: describes the depth and rigor of an evaluation and is assigned a
number between 1 and 7; 1 being most basic, 7 being advanced testing
Exam Tip: Ensure you know the seven levels of Common Criteria for your exam.
CC has seven assurance levels, which range from EAL1 (lowest), where functionality testing takes
place, through EAL7 (highest), where thorough testing is performed and the system design is
verified:
EAL1: Functionally tested
EAL2: Structurally tested
EAL3: Methodically tested and checked
EAL4: Methodically designed, tested, and reviewed
EAL5: Semi-formally designed and tested
EAL6: Semi-formally verified design and tested
EAL7: Formally verified design and tested

Understand the fundamental concepts of security models

 Bell-LaPadula Model – Developed by DoD, focuses on maintaining confidentiality. First
mathematically proven model. It is a State Machine Model – every possible interaction between
subjects and objects is included in its state. If every possible state is secure, the system is
proven to be secure.
 Lattice-based Access Controls – think “ladder”. Subjects have a least upper bound and a greatest
lower bound of access (e.g., access classes Alpha, Beta, Gamma).
 Biba Model – Concerned only with integrity. Two rules:
 Simple Integrity Axiom – no read down
 * Integrity Axiom – no write up
 These rules are the exact opposite of the Bell-LaPadula Model


 If a high ranking subject issues data, everyone is able to trust that data. If a low ranking subject
issues some sort of data, no one above that subject has permission to trust it.
 Clark-Wilson – another integrity model. “Subjects must access objects through programs. Cannot
access data directly.” Includes separation of duties.
 Often used by commercial applications
 Information Flow Model – limits how info flows in a system. Biba and B-LP are both examples of
this

 Brewer Nash / Chinese Wall – protects against conflict of interest.


 If an accounting firm processes financial data for Company A and B, they cannot access B’s
company while working on A.
 Take-Grant - four rules:
 Take
 Grant
 Create
 Remove

 These privileges are spread across different subjects, so it almost acts as Separation of
duties.
 Graham-Denning Model – defines a set of basic rights in terms of commands that a subject can
execute on an object. Three parts: objects, subjects and rules.
 Rules:
 Transfer Access
 Grant Access
 Delete Access
 Read Object
 Create Object
 Destroy Object
 Create Subject
 Destroy Subject
 Harrison-Ruzzo-Ullman Model – like Graham-Denning, but treats subjects and objects as the same
and has six rules:
 Create object
 Create subject
 Destroy object
 Destroy subject
 Enter right into access matrix
 Delete right from access matrix
 Non-Interference Model – ensures that commands and activities at one level are not visible to other
levels. For example, prior to the Gulf War, the Pentagon ordered a huge amount of pizza and
people were able to assume something was going on. The war started shortly after.
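The mirror-image relationship between Bell-LaPadula (confidentiality) and Biba (integrity) noted above can be sketched directly; the level names and function names are illustrative:

```python
# Bell-LaPadula vs Biba: the rules are exact opposites of each other.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def blp_allows(subject, obj, action):
    s, o = LEVELS[subject], LEVELS[obj]
    if action == "read":
        return s >= o    # simple security property: no read up
    return s <= o        # * (star) property: no write down

def biba_allows(subject, obj, action):
    s, o = LEVELS[subject], LEVELS[obj]
    if action == "read":
        return s <= o    # simple integrity axiom: no read down
    return s >= o        # * integrity axiom: no write up
```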

Visual Mnemonic for Security Models (image not reproduced)

 Access Control Matrix – describes the rights of every subject for every object in the system. An
access matrix is like an excel spreadsheet. The rows are the rights of each subject (called a
capability list), and the columns show the ACL for each object or application
 Zachman Framework for Enterprise Architecture – takes the Five W’s (and How), and maps them
to specific subjects or roles.
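The access control matrix described above maps naturally onto a dict of dicts, where a row is a subject's capability list and a column (read across all rows) is an object's ACL; the data is invented:

```python
# Access matrix: rows are capability lists, columns are ACLs.
matrix = {
    "alice": {"file1": {"read", "write"}, "file2": {"read"}},
    "bob":   {"file1": {"read"},          "file2": set()},
}

def capability_list(subject):
    return matrix[subject]                   # one row: everything the subject can do

def acl(obj):
    # one column: every subject's rights on this object
    return {s: rights[obj] for s, rights in matrix.items()}
```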

Scoping and Tailoring


Scoping ensures an adequate level of protection by identifying the security requirements based on
the organization’s mission and business processes as supported by the information system.
Scoping guidance is a method which provides an enterprise with specific terms and conditions on
the applicability and implementation of individual security controls.
Many considerations can potentially impact how baseline security controls are applied by the
enterprise. System security plans must clearly define which security controls employ scoping
guidance, and include a description of the considerations taken into account. For an information
system, the authorizing official must review and approve the applied scoping guidance.
The tailoring process involves the customization of the initial security control baseline. The baseline
is adjusted to align security control requirements more closely with the actual information system
and/or operating environment.
Tailoring uses the following mechanisms.
 Scoping guidance, which defines specific terms and conditions on the applicability and
implementation of specific security controls.
 Compensating security controls, which includes management, operational, and technical
controls implemented instead of the security controls identified in the initial baseline.
 Organization-defined parameters, which are applied to portions of security controls to
support specific organizational requirements and objectives.
The security practitioner must understand the impact of scoping and tailoring on information
security. Tailoring and scoping allow administrators to customize baselines to their needs.
 Scoping is the process of determining which portions of a standard an organization will use. If
there’s no wireless, wireless encryption standards are out of scope
 Tailoring is customizing the standard for an organization to address their specific technologies

Understand security capabilities of information systems

Security Modes of Operation

 Dedicated – system contains objects of only one classification level. All subjects are cleared for that
level or higher. All subjects have access approval and need to know for all info stored/processed on
that system
 System High – system contains objects of mixed labels (confidential, secret, top secret). All users
have appropriate clearances and access permissions for all info processed by a system, but they
don’t necessarily need to know all the info processed by that system. Provides the most granular
control over resources compared to the other security models.
 Compartmented – All subjects accessing the system have necessary clearance, but do not have
formal access approval or need to know for ALL info on the system. Technical controls enforce
need to know for access.
 Multilevel – also known as label-based. Allows you to classify objects and users with security
labels. A “reference monitor” controls access: if a top-secret subject attempts to access a
top-secret object, access is granted by the reference monitor.

Assess and mitigate the vulnerabilities of security architectures, designs and solution
elements

Cloud Computing

 Cloud deployment approaches:


o Private Cloud – organizations building their own cloud infrastructure
o Public – AWS, Azure
o Hybrid – any mixture of the two. Maybe sensitive data is placed on private cloud, and non-
sensitive is placed on public
o Community – provides cloud-based assets to two or more organizations. Maintenance
responsibilities are shared between the organizations.
Cloud Computing Services:

 SaaS – Software-as-a-Service. Provided through a browser, such as web mail.


 PaaS – Platform-as-a-Service. Provides customers with a pre-configured computing platform. The
vendor manages the underlying platform and all maintenance associated with it.
 IaaS – Infrastructure-as-a-Service. You’re just renting physical equipment. Client is responsible for
their data, patching and updating everything, etc.
 The hierarchy, rated from the least amount of customer responsibility to the most, is SaaS >
PaaS > IaaS
 Peer-to-peer – sharing between many systems, such as BitTorrent. It’s decentralized and so
integrity is questionable.

IoT
 Mitigating IoT Vulnerabilities:
 Keep IoT devices on another network
 Turn off network functionality when not needed
 Apply security patches whenever possible
 Protecting mobile devices:
 Encrypt devices
 Enable screen locks and GPS for remote wipe

Quality = Degree of excellence against a standard


Ability = Capacity to be flexible and go through change rapidly
Flexibility = Ease of modification in application/environment
Interoperability = Ability of two systems/components to exchange data
Maintainability = Capability of a system to go through repair
Performance = Responsiveness of a system
Reliability = Ability to keep operating over time
Reusability = Can be used in more than one software system
Scalability = Ability to improve performance while system demand increases
Supportability = Ease of operational maintenance.
Testability = Degree to which test criteria can be established
Usability = Measure of system effective utilization
Security = Measure of resistance to unauthorized attempts at usage
SCADA
 Used to control industrial equipment such as power plants, heating, prison doors, elevators, etc
 Terms:
 Supervisory control system – gathers data and sends commands
 Remote Terminal Unit (RTU) – connects devices to SCADA network. Converts analog to
digital
 Human Machine Interface – presents data to the operator
 Often older and have security issues. Typically, they are air gapped (no direct connection to outside
world).

Databases

 Polyinstantiation - Two rows may have the same primary key, but different data for each clearance
level. Top secret clearance subjects see all data. Secret clearance subjects see only the data they
are cleared for. This prevents unprivileged users from assuming data based on what they notice is
missing.
 Data mining – searching a large database for useful info. Can be good or bad. For example, a
credit card company may mine transaction records to find suspicious transactions and detect fraud.
 Data Analytics is understanding normal use cases to help detect insider threats or other
compromises.
 Data Diddling – altering existing data, usually as it’s being entered.
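Polyinstantiation can be sketched with SQLite by making the classification level part of the primary key, so the same logical row exists once per clearance level; the table layout and data are invented for illustration:

```python
# Polyinstantiation sketch: same logical key, one row per classification level.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE cargo (
    ship TEXT, level INTEGER, manifest TEXT,
    PRIMARY KEY (ship, level))""")          # logical key + classification level
db.executemany("INSERT INTO cargo VALUES (?,?,?)", [
    ("SS Example", 0, "grain"),             # cover story visible at low clearance
    ("SS Example", 2, "missiles"),          # real row, visible at secret clearance
])

def visible_manifest(ship, clearance):
    """Return the highest-classified row the reader is cleared to see."""
    row = db.execute(
        "SELECT manifest FROM cargo WHERE ship=? AND level<=? "
        "ORDER BY level DESC LIMIT 1", (ship, clearance)).fetchone()
    return row[0] if row else None
```

Because the low-clearance reader still sees a plausible row, they cannot infer that a more sensitive row exists.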
Assess and mitigate vulnerabilities in web-based systems

 Covert Channels – communications that violate security policies.


 Backdoors – system shortcut to bypass security checks. Could be planted by attackers, or included
by developers (called maintenance hooks).
 Java Applets run in web browsers to make a web session more functional and interactive. ActiveX
is similar, but only for Microsoft products, and implements digital certificates for security

Fire suppression
Not all fire is equal. There are four types of fire and each requires a certain type of suppression
device. Use the wrong one, and you could end up making the fire bigger/stronger rather than
extinguishing it. The table below summarizes the classes of fire and the appropriate suppression
method.

Fire Class | Type of Fire       | Elements of Fire                    | Suppression Method
A          | Common combustible | Wood products, paper, and laminates | Water, foam
B          | Liquid             | Petroleum products and coolants     | Gas, CO₂, foam, dry powders
C          | Electrical         | Electrical equipment and wires      | Gas, CO₂, dry powders
D          | Combustible metals | Magnesium, sodium, potassium        | Dry powder

Foams are mainly water-based and are designed to float on top of a fire and prevent any oxygen
from flowing to it. Gas, such as halon or FM-200, mixes with the fire to extinguish it and is not
harmful to computer equipment. Halon has been found to damage the atmosphere, however, and is
no longer produced. CO₂ gas removes the oxygen from the air to suppress the fire. It is important
that if CO₂ is used there be an adequate warning time before dispersal. Because it removes the
oxygen from the air it could potentially endanger people's lives. CO₂ is often used in unmanned
facilities for this reason. Dry powders include sodium or potassium bicarbonate, calcium carbonate,
or monoammonium phosphate. The first three interrupt the combustion of a fire, and
monoammonium phosphate melts at a low temperature and smothers the fire.

Water sprinkler systems are much simpler to install than any of the above systems but can cause
damage to computer and electrical systems. It's important that an organization recognize which
areas require which types of suppression systems. If water is to be used in an environment that
contains electrical components, it's important that the electricity be shut off first. Systems can be
configured to shut off all electrical equipment before water is released. There are four main types of
water systems:

Wet pipe - Wet pipe systems always contain water within the pipe and are usually controlled by a
temperature sensor. Disadvantages of these systems include that the water in the pipe may freeze,
and damage to the pipe or nozzle could result in extensive water damage.
Dry pipe - Dry pipe systems employ a reservoir to hold the water before deployment, leaving the
pipes empty. When a temperature sensor detects a fire, water will be released to fill the pipes. This
type of system is best for cold climates where freezing temperatures are an issue.
Preaction - Preaction systems operate similarly to dry pipe systems but add an extra step. When
empty, the pipes are filled with pressurized air. If pressure is lost, water will fill the pipes, but not be
dispersed. There is a thermal-fusible link on the nozzle that must first melt away before the water
can be released. The advantage of this system is it gives people more time to react to false alarms
and small fires. It is much more effective to put out a small fire with a hand-held extinguisher than a
full spray system.
Deluge - Deluge systems have their sprinkler heads turned all the way open so that greater amounts
of water can be released at once. These types of systems are not usually used in data centers
because of this.

 OWASP Top 10
Assess and mitigate vulnerabilities in mobile systems

Mobile systems include the operating systems and applications on smartphones, tablets, phablets,
smart watches, and wearables. The most popular operating system platforms for mobile systems are
Apple iOS, Android, and Windows 10.
The vulnerabilities that are found on mobile systems include
 Lack of robust resource access controls. History has shown us that some mobile OSs lack
robust controls that govern which apps are permitted to access resources on the mobile
device, including:
 Locally stored data
 Contact list
 Camera roll
 Email messages
 Location services
 Camera
 Microphone
 Insufficient security screening of applications. Some mobile platform environments are quite
good at screening out applications that contain security flaws or outright break the rules, but
other platforms have more of an “anything goes” policy, apparently. The result is buyer
beware: Your mobile app may be doing more than advertised.
 Security settings defaults too lax. Many mobile platforms lack enforcement of basic security
and, for example, don’t require devices to automatically lock or have lock codes.
In a managed corporate environment, the use of a mobile device management (MDM) system can
mitigate many or all of these risks. For individual users, mitigation is up to the users themselves
to do the right thing and use strong security settings.
Assess and mitigate vulnerabilities in embedded devices:

Embedded devices encompass the wide variety of systems and devices that are Internet connected.
Mainly, we’re talking about devices that are not human connected in the computing sense. Examples
of such devices include
 Automobiles and other vehicles.
 Home appliances, such as clothes washers and dryers, ranges and ovens, refrigerators,
thermostats, televisions, video games, video surveillance systems, and home automation
systems.
 Medical care devices, such as IV infusion pumps and patient monitoring.
 Heating, ventilation, and air conditioning (HVAC) systems.
 Commercial video surveillance and key card systems.
 Automated payment kiosks, fuel pumps, and automated teller machines (ATMs).
 Network devices such as routers, switches, modems, firewalls, and so on.
These devices often run embedded systems, which are specialized operating systems designed to
run on devices lacking computer-like human interaction through a keyboard or display. They still
have an operating system that is very similar to that found on endpoints like laptops and mobile
devices.
Some of the design defects in this class of device include:
 Lack of a security patching mechanism. Most of these devices utterly lack any means for
remediating security defects that are found after manufacture.
 Lack of anti-malware mechanisms. Most of these devices have no built-in defenses at all.
They’re completely defenseless against attack by an intruder.
 Lack of robust authentication. Many of these devices have simple, easily-guessed default
login credentials that cannot be changed (or, at best, are rarely changed by their owners).
 Lack of monitoring capabilities. Many of these devices lack any means for sending security
and event alerts.
Because the majority of these devices cannot be altered, mitigation of these defects typically
involves isolating the devices on separate, heavily guarded networks that have tools in place to
detect and block attacks.

Apply cryptography
 Cryptography is the science of encrypting information. The work function/factor of a cryptosystem is
the measure of its strength in terms of the cost and time needed to decrypt messages. The work function is
generally rated by how long it takes to brute-force the cryptosystem.
 Encryption is the act of rendering data unintelligible to unauthorized subjects

 Algorithms and keys work together to encrypt and decrypt information. In ROT13, for example, 13 is
the key and rotation (ROT) is the algorithm. In modern cryptography, algorithms rarely change, but keys
should change with every use
 Confusion – the relationship between the key and the ciphertext should be as complex and random as possible
 Substitution replaces one character for another
 Permutation (transposition) provides diffusion by rearranging the characters of the plaintext (e.g., a columnar transposition)
 Substitution and permutation are often combined.
 Monoalphabetic ciphers use one alphabet, meaning the letter E is always substituted with the same
letter every time. These ciphers are susceptible to frequency analysis
 Polyalphabetic ciphers use multiple alphabets.
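The frequency-analysis weakness of monoalphabetic ciphers is easy to demonstrate in a short Python sketch (a toy illustration, not from the source material): a Caesar shift preserves letter frequencies exactly, so the most common ciphertext letter points straight back at the most common plaintext letter.

```python
from collections import Counter

def caesar_encrypt(text: str, shift: int) -> str:
    # Monoalphabetic substitution: every letter shifts by the same amount.
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

plaintext = "ATTACK AT DAWN ATTACK AGAIN"
ciphertext = caesar_encrypt(plaintext, 3)

# The letter frequencies survive the substitution intact, which is
# exactly what frequency analysis exploits: 'A' dominates the plaintext,
# so its image 'D' dominates the ciphertext.
freq = Counter(c for c in ciphertext if c.isalpha())
most_common = freq.most_common(1)[0][0]
```

Given enough ciphertext, an attacker matches the observed frequencies against known language statistics (E, T, A are the most common English letters) and recovers the mapping.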
 Cryptosystem Development Concepts:
 Algorithms should be available for review, but the key should always be secret. This avoids
security by obscurity
 Kerckhoffs’s Principle – the idea that an algorithm should be available for public review
 Exclusive-OR (XOR) Operation - a logical, binary operation that adds two bits modulo 2. Plaintext is
XORed with a random keystream to generate ciphertext
 If values are same, result is 0
 If values are different, result is 1
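The XOR rules above can be sketched in a few lines of Python (a toy illustration with a hard-coded stand-in keystream, not a real cipher):

```python
def xor_bytes(data: bytes, keystream: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding keystream byte.
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"SECRET"
keystream = b"\x13\x37\x42\x55\xaa\x0f"  # stand-in for a random keystream

ciphertext = xor_bytes(plaintext, keystream)

# XOR is its own inverse: applying the same keystream again
# recovers the original plaintext.
recovered = xor_bytes(ciphertext, keystream)
```

This self-inverse property is why the same operation both encrypts and decrypts in stream ciphers and in the one-time pad.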
 Zero-Knowledge Proof is the concept that you can prove your knowledge of a fact to a third party
without revealing the fact itself, as with digital signatures and certificates. This is
illustrated by the “Magic Door”, where user A uses a secret password to open a door to get to user
B without having to tell user B the password:
 Cryptography History:
 Caesar Cipher – used simple substitution in which characters were shifted 3 spaces
 Scytale – used by Spartans. Wrapped tape around a rod. The message was on the tape and
the key was the diameter of the rod.
 Vigenere Cipher – polyalphabetic cipher. Alphabet is repeated 26 times to form a matrix
(Vigenere Square).

 One-Time Pad – if done correctly, it’s mathematically unbreakable. Sender and recipient must
have a pad with pages full of random letters. Each page is used only once. The only way to
break it is to steal or copy the pad. The key must be at least as long as the message to be
encrypted.
 Key distribution is burdensome, and truly random keys are very hard to generate correctly

Symmetric Encryption

The equation for determining how many keys are required for symmetric communications is:

n * (n - 1) / 2
 Uses the same key both to encrypt and decrypt a message. This makes it very fast compared to
asymmetric encryption. May be referred to as “secret key” or “shared key” cryptography.
 A session key is a temporary symmetric key used for a connection
 Problems:
 Difficult to securely exchange the keys
 Not scalable. The number of keys you need grows quadratically (with the square of the
number of participants) as you add people to the communication link
 Stream vs Block Ciphers - In a stream cipher, data is encrypted bit by bit. Stream ciphers are
usually implemented in hardware and require little memory, but weak keystreams allow
patterns to emerge. Block ciphers group communications into fixed-size blocks and encrypt
each block as a unit. This alleviates the pattern issue because an attacker would
never know if a block is one word or ten words.
 Block ciphers are usually implemented in software and require more memory.
 Uses substitution and transposition (rearranges order of plaintext symbols) ciphers
 Initialization Vector (IV) – a random value added to the plaintext before encryption. Used to ensure
that two identical plaintext messages don’t encrypt to the same ciphertext
 Chaining – uses the result of one block to “seed”, or add to, the next block
 Common Symmetric Algorithms:
 DES – Uses a single 56-bit key. Brute forced two decades ago. No longer safe to use.
 3DES – Three rounds of DES encryption using two or three different 56-bit keys. Key length is
112 bits or more depending on how many keys you use. Considered secure, but slower to
compute than AES
 Two keys are required at a minimum to achieve strong security
Modes of DES
 Electronic Code Book (ECB) – Each block encrypted independently. Decrypting starts at
beginning of ciphertext. Processes 64-bits at a time using the same key. A given message will
always produce the same ciphertext. Creates patterns and is susceptible to frequency analysis.
This is the weakest DES mode.
 Cipher Block Chaining (CBC) – most common mode. Uses an IV to seed the first block. Each
plaintext block is XORed with the previous ciphertext block before encryption. Problematic
because errors in earlier blocks propagate throughout the rest of the encrypted data.
 Cipher Feedback (CFB) - stream-oriented variant of CBC; the previous ciphertext block is
encrypted and XORed with the plaintext, so errors propagate to the next block
 Output Feedback (OFB) - generates a keystream independently of the plaintext and
ciphertext, so transmission errors do not propagate
 Counter Mode (CTR) – Works in parallel mode. Fixes the issue of propagating errors
Visual mnemonics for DES Modes:

 AES – Three key lengths: 128, 192 and 256 bits. Current US recommended standard. Open
algorithm, patent free (anyone can use it)
 RC4 – Key length of 8-2048 bits. Was used in SSL and WEP communication
 Blowfish – Developed by Bruce Schneier. Key sizes 32-448 bits. Faster than AES.
 Twofish - Developed by Bruce Schneier. Key sizes 128-256 bits. Better than Blowfish. Uses a
process known as prewhitening to XOR plaintext with a separate subkey before encryption
 IDEA – block algorithm using 128-bit key and 64-bit block size. Patented in many countries.
 “PGP is a good IDEA” - PGP is often used in conjunction with IDEA
 Be able to look at a list of algorithms and pick out the symmetric ones. Think of the word FISHES.
A fish is symmetric. Anything with FISH or ES in the name is symmetric, as are RC4, Skipjack, and
IDEA.

Asymmetric Encryption

The equation for determining how many keys are required for asymmetric communications is:

2n (each of the n users needs one key pair)
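Both key-count formulas are easy to sanity-check in code. This illustrative Python sketch shows how quickly pairwise symmetric keys outgrow asymmetric key pairs as the user population grows:

```python
def symmetric_keys(n: int) -> int:
    # Every pair of users needs its own shared secret: n * (n - 1) / 2.
    return n * (n - 1) // 2

def asymmetric_keys(n: int) -> int:
    # Each user needs exactly one key pair (public + private): 2n.
    return 2 * n

for users in (10, 100, 1000):
    print(users, symmetric_keys(users), asymmetric_keys(users))
# 10 users   -> 45 symmetric keys vs 20 asymmetric keys
# 100 users  -> 4950 vs 200
# 1000 users -> 499500 vs 2000
```

The quadratic growth on the symmetric side is the scalability problem that asymmetric cryptography was designed to solve.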

 Designed to solve the key exchange problem of symmetric encryption. Each user has a public and
a private key. The public is made available, but the private key is kept secret
 Far slower than symmetric encryption and is weaker per key bit. 512-bit public key is roughly
equivalent to a 64-bit symmetric key.
 One-Way Functions
 There must be a way to calculate a public key from a private key, but it must be infeasible to
deduce the private key from the public key.
 Works by multiplying large prime numbers: calculating the product of two primes is easy, but
determining which primes produced the product is much more difficult.
 Asymmetric Algorithms
 Diffie-Hellman – original asymmetric algorithm. Allowed two parties to agree on a symmetric
key via a public channel. Based on “difficulty of calculating discrete logarithms in a finite
field”.
 RSA – based on factoring large prime numbers. Key length and block size of 512-4096 bits. 100
times slower than symmetric encryption. Only requires two keys (one key pair) per user for any
given communication, so it is ideal for large environments because little time is required for key
management
 DSA – used for digital signatures
 El-Gamal – extension of Diffie-Hellman that depends on modular arithmetic
 Its biggest disadvantage is that it doubles the length of encrypted messages
 Elliptic Curve Cryptography (ECC) – faster than other asymmetric algorithms, so it’s used on
devices with less computing power, such as mobile devices. Some implementations are
patented, so using them costs money. A 256-bit ECC key is as strong as a 3,072-bit RSA key
 Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) is associated with providing perfect forward
secrecy, a property whereby past communications cannot be decrypted even if the server’s
long-term key is later compromised.
 PGP – originally used IDEA. Can now use PKI. Uses a web of trust model to authenticate
digital certificates

Hybrid

 SSL/TLS – the client requests a secure connection to a server. The server sends the client its public key in
the form of a certificate. The client takes the public key, generates a one-time session key
(temporary symmetric key), encrypts it with the server’s public key, and sends it back to the server.
The server uses its private key to decrypt the session key. That symmetric key is now
used to encrypt all data exchanged between client and server.

Hashing

 Considered one-way because there is no way to reverse a hash. Plaintext is “hashed” into a
fixed-length (i.e., not variable) value called a message digest or hash
 Hashing helps ensure integrity. If the content of a file changes, its hash will change.
 Hash Functions: MD5 is no longer used because it has been found to create collisions. There
is more data in the world than there are possible 128-bit combinations
 HMAC uses a secret key in combination with a hash algorithm to verify that a message has not
been tampered with. It computes the hash of the message plus a secret key
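A minimal sketch of HMAC generation and verification using Python's standard hmac and hashlib modules (the key and message values are made up for illustration):

```python
import hashlib
import hmac

key = b"shared-secret"                      # hypothetical shared secret
message = b"transfer $100 to account 42"    # hypothetical message

# Sender computes an HMAC over the message with the shared key.
mac = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, received_mac: str) -> bool:
    # Receiver recomputes the HMAC and compares in constant time;
    # any change to the message or the key changes the MAC.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_mac)
```

Because the MAC depends on the secret key, an attacker who alters the message in transit cannot forge a matching MAC, which is what gives HMAC its integrity and authenticity guarantees.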
Digital Signatures

 When a digital certificate is issued by a CA, it includes the subject’s public key
 If an outside individual receives someone else’s digital cert, they verify its authenticity
by checking it against the CA’s public key
 Provide authentication and integrity. Do not provide confidentiality.
 Signing calculates the hash of a document and encrypts that hash with your private key. Anyone can verify it
with your public key.
 Digital Signature Standard uses SHA-1, 2 and 3 message digest functions along with DSA, RSA
and Elliptic Curve algorithms.
 Digital Watermarks – encode data into a file. May be hidden using steganography.

Electronic signatures provide a receipt of the transaction, ensuring that the entities that
participated in the transaction cannot repudiate their commitments.

For your exam you should know the information below:

The digital signature is used to achieve integrity, authenticity and non-repudiation. In a digital
signature, the sender's private key is used to encrypt the message digest of the message.
Encrypting the message digest is the act of signing the message. The receiver uses the sender's
matching public key to decrypt the digital signature.

A digital signature (not to be confused with a digital certificate) is an electronic signature that can be
used to authenticate the identity of the sender of a message or the signer of a document, and
possibly to ensure that the original content of the message or document that has been sent is
unchanged. Digital signatures cannot be forged by someone who does not possess the private
key, and they can be automatically time-stamped. The ability to ensure that the original signed
message arrived means that the sender cannot easily repudiate it later.

A digital signature can be used with any kind of message, whether it is encrypted or not, simply so
that the receiver can be sure of the sender's identity and that the message arrived intact. A digital
certificate contains the digital signature of the certificate-issuing authority so that anyone can verify
that the certificate is real and has not been modified since the day it was issued.

How Digital Signature Works

Assume you were going to send the draft of a contract to your lawyer in another town. You want to
give your lawyer the assurance that it was unchanged from what you sent and that it is really from
you.

1. You copy-and-paste the contract (it's a short one!) into an e-mail note.
2. Using special software, you obtain a message hash (mathematical summary) of the contract.
3. You then use a private key that you have previously obtained from a public-private key
authority to encrypt the hash.
4. The encrypted hash becomes your digital signature of the message. (Note that it will be
different each time you send a message.)

At the other end, your lawyer receives the message.

1. To make sure it's intact and from you, your lawyer makes a hash of the received message.
2. Your lawyer then uses your public key to decrypt the message hash or summary.
3. If the hashes match, the received message is valid.
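The sign-then-verify flow above can be illustrated with a toy "textbook RSA" sketch in Python. The tiny primes and the reduction of the digest modulo n are illustrative assumptions only; real signatures use large keys and padding schemes such as PSS:

```python
import hashlib

# Toy "textbook RSA" parameters -- far too small for real use.
p, q = 61, 53
n = p * q          # 3233, the public modulus
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % lcm(p-1, q-1) == 1

def digest(message: bytes) -> int:
    # Reduce the SHA-256 hash mod n so it fits our tiny modulus
    # (an assumption made only so this toy example works).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # "Encrypting" the digest with the private key is the act of signing.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key (n, e) can decrypt the signature
    # and compare it against a freshly computed digest.
    return pow(signature, e, n) == digest(message)

sig = sign(b"contract draft v1")
```

Note how the roles are reversed relative to encryption: the private key signs, and the public key verifies, which is what makes the signature publicly checkable but unforgeable.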
Below are some common reasons for applying a digital signature to communications:

Authentication

Although messages may often include information about the entity sending a message, that
information may not be accurate. Digital signatures can be used to authenticate the source of
messages. The importance of high assurance in the sender authenticity is especially obvious in a
financial context. For example, suppose a bank's branch office sends instructions to the central
office requesting a change in the balance of an account. If the central office is not convinced that
such a message is truly sent from an authorized source, acting on such a request could be a serious
mistake.

Integrity

In many scenarios, the sender and receiver of a message may have a need for confidence that the
message has not been altered during transmission. Although encryption hides the contents of a
message, it may be possible to change an encrypted message without understanding it. (Some
encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if a
message is digitally signed, any change in the message after the signature has been applied
invalidates the signature. Furthermore, there is no efficient way to modify a message and its
signature to produce a new message with a valid signature, because this is still considered to be
computationally infeasible by most cryptographic hash functions (see collision resistance).

Non-repudiation

Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital
signatures. By this property, an entity that has signed some information cannot at a later time deny
having signed it. Similarly, access to the public key alone does not enable a fraudulent party to fake a
valid signature.

Note that authentication, non-repudiation, and other properties rely on the secret key not having
been revoked prior to its usage. Public revocation of a key pair is a required ability; otherwise leaked
secret keys would continue to implicate the claimed owner of the key pair. Checking revocation
status requires an "online" check, e.g. checking a "Certificate Revocation List" or using the "Online
Certificate Status Protocol". This is analogous to a vendor who receives credit cards first checking
online with the credit-card issuer to find out if a given card has been reported lost or stolen.

Tip for the exam

Digital signatures do not provide confidentiality. They provide only authenticity, integrity, and
non-repudiation. The sender's private key is used to encrypt the message digest to calculate the
digital signature

Encryption provides only confidentiality. The receiver's public key or symmetric key is used for
encryption
Please remember :

 Electronic signatures do not prevent messages from being erased.
 Electronic signatures do not prevent messages from being disclosed.
 Electronic signatures do not prevent messages from being forwarded.
Cryptography Attacks

 Brute force – tries every possible key. In theory it will always work with time, except against one-
time pads. If a key is long enough, however, it will take incredibly long amounts of time.
 Rainbow Tables – pre-computed tables of passwords and their hashes. Not practical against
modern salted hashes. Effective against Windows LANMAN hashes
 Salts are random values added to the end of a password before hashing it to help protect
against rainbow table attacks. Salts are stored in the same database as the hashed password.
Salting can be accomplished by PBKDF2, bcrypt and scrypt. Unique salts should be generated
for each user.
 Peppers are large constant numbers used to further increase the security of the hashed
password, and are stored OUTSIDE the database that houses the hashed passwords.
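A hedged sketch of salting plus a pepper using Python's standard hashlib.pbkdf2_hmac (the pepper value and iteration count are illustrative assumptions, not recommendations):

```python
import hashlib
import os

# Hypothetical app-wide pepper, stored OUTSIDE the credential database.
PEPPER = b"app-wide-secret-pepper"

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 applies the hash many times to slow down brute-force attempts;
    # the pepper is mixed into the input, the salt is passed separately.
    return hashlib.pbkdf2_hmac("sha256", password.encode() + PEPPER, salt, 100_000)

# A unique random salt per user defeats precomputed rainbow tables:
# the same password hashes to different values for different users.
salt_alice = os.urandom(16)
salt_bob = os.urandom(16)

h_alice = hash_password("hunter2", salt_alice)
h_bob = hash_password("hunter2", salt_bob)
```

An attacker who steals the database gets the salts and hashes but not the pepper, and must re-run the expensive derivation per guess, per user.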
 Known Plaintext
 Chosen Ciphertext
 Chosen Plaintext
 Collisions – when two different messages have the same hash value, such as the Birthday Attack.
If a room has 23 people in it, there is a significant chance that two people will have the same
birthday. Chances increase with the size of the environment.
 MD5 (128-bit) will have a collision after 2^64 calculations
 Known-Plaintext Attack – attempting to attain the key when you have the encrypted text and all or
some of the plain text.
 In WWII, the Germans and Japanese always started a transmission with a certain phrase. The
Allies knew this phrase and could record the encrypted message, and were able to break the code
 Chosen-Plaintext Attack – attacker has the ability to encrypt chosen portions of the plaintext and
compares it to the encrypted portion to discover the key
 Cipher-Text Only Attacks – Attacker collects many messages encrypted with the same key, and
uses statistical analysis to break the encryption
 Chosen Ciphertext Attack – cryptanalyst can choose the cipher text to be decrypted. Thus, they
have ciphertext and plaintext for messages that they choose.
 Known Key – attacker may have some knowledge about the key. This reduces the number of
variations they have to consider to guess the key (ex: Passwords must be 8-12 characters)
 Meet-in-the-middle Attack – Encrypts the plaintext using all possible keys and creates a table with
all possible results, then decrypts the ciphertext using all possible keys and looks for a match.
This is why 3DES is used over 2DES.
 Side-Channel Attack – uses physical data to break a cryptosystem, such as monitoring the CPU
cycles or power consumption used while encrypting or decrypting. Longer keys may require more
CPU cycles, which an attacker can measure. Meltdown and Spectre are examples
 Implementation Attacks -exploit a vulnerability with the actual system used to perform the math.
System may leave plaintext in RAM, or the key may be left on the hard drive.

PKI

 Public Key Infrastructure. Provides a way to manage digital certificates, each of which is a public
key signed with a digital signature. The standard digital certificate format is X.509.
 A digital certificate will contain:
 Serial number
 ID info
 Digital signature of signer
 X.509 version
 Validity period
 A certificate will NOT contain “Receiver’s name”.
 The Certificate Authority issues certs to verified users. Examples include Verisign, DigiCert,
Comodo, GoDaddy, etc.
 Certificate Policy – set of rules dictating the circumstances under which a cert can be used. Used to
protect CAs from claims of loss if the cert is misused.
 Certificate Revocation List (CRL) – identifies certs that have been revoked due to theft, fraud,
change in information associated with the CA, etc. Expired certs are not on the CRL. DOES NOT
UPDATE INFO IN REAL TIME
 Online Certificate Status Protocol (OCSP) – used to query the CA as to the status of a cert issued
by that CA. Useful in large environments. Responds to a query with a status of valid,
suspended or revoked
 There are several models:
 Hierarchical Trust – Root CA can delegate intermediate CAs to issue certs on its behalf.
 Web of Trust – All parties involved trust each other equally. No CA to certify certificate owners
 Hybrid-Cross Certification – Combination of hierarchical and mesh models. Common for when
two different organizations establish trust relationships. A trust is created between two Root
CAs, and each organization trusts the others’ certificates.
 Key Storage
 Placement of a copy of secret keys in a secure location. Two methods:
 Software-based (e.g., encrypted key files and key stores)
 Hardware-based (e.g., hardware security modules and smart cards)
 Key Escrow – keys needed to decrypt ciphertext are held in escrow so that, under certain
circumstances, an authorized third party may gain access to those keys.
 Recovery Agent – has authority to recover lost/destroyed keys from escrow. Requires at least
TWO agents (M of N control – a number (n) of agents must exist in an environment; of those
agents, a minimum number (m) must work together to recover a key). This is an example of
Split Knowledge, where the information or privilege required to perform an action is divided
among multiple users.

IPSec

 IPSec is a security architecture framework that supports secure communication over IP. It
establishes a secure channel in either transport or tunnel mode, and is used primarily for VPNs.

 IPSec – two primary protocols:
 Authentication Header (AH) – provides authentication and integrity for each packet. Acts as a
digital signature for data.
 Encapsulating Security Payload (ESP) – provides confidentiality with encryption
 Supporting protocols:
 Internet Key Exchange (IKE)
 IPSec has several modes:
 Tunnel mode – whole packet is encapsulated (encrypted), including the header, for gateway-
to-gateway communications. Connects networks to networks, or hosts to networks
 Transport mode – Only the payload is encapsulated, and is used primarily for peer-to-peer
communications. Connects hosts to hosts.
 Think of driving a car – when you drive through a tunnel, your entire car is encapsulated.
When outside of the tunnel, you are encapsulated by your car.
 Encapsulation is the addition of a header and footer to a packet. It enables multi-layer protocols.
 Security Association (SA) – a one-way connection. May be used to negotiate AH and/or ESP
parameters. If using ESP only, two SAs are required (one for each direction). If using AH and ESP,
four SAs are required
 The SA process is managed by the Internet Security Association and Key Management Protocol
(ISAKMP)
 Know the difference between tunnel and transport mode, AH and ESP, and how many security
associations you need.
Tunnel and Transport Modes

IPSec can be run in either tunnel mode or transport mode. Each of these modes has its own
particular uses and care should be taken to ensure that the correct one is selected for the solution:

 Tunnel mode is most commonly used between gateways, or at an end-station to a gateway,
the gateway acting as a proxy for the hosts behind it.
 Transport mode is used between end-stations or between an end-station and a gateway, if
the gateway is being treated as a host—for example, an encrypted Telnet session from a
workstation to a router, in which the router is the actual destination.

As you can see in the Figure 1 graphic below, basically transport mode should be used for end-to-
end sessions and tunnel mode should be used for everything else.

FIGURE 1: Tunnel and transport modes in IPSec.


Figure 1 above displays some examples of when to use tunnel versus transport mode:
 Tunnel mode is most commonly used to encrypt traffic between secure IPSec gateways,
such as between the Cisco router and PIX Firewall (as shown in example A in Figure 1). The
IPSec gateways proxy IPSec for the devices behind them, such as Alice's PC and the HR
servers in Figure 1. In example A, Alice connects to the HR servers securely through the
IPSec tunnel set up between the gateways.
 Tunnel mode is also used to connect an end-station running IPSec software, such as the
Cisco Secure VPN Client, to an IPSec gateway, as shown in example B.
 In example C, tunnel mode is used to set up an IPSec tunnel between the Cisco router and a
server running IPSec software. Note that Cisco IOS software and the PIX Firewall set
tunnel mode as the default IPSec mode.
 Transport mode is used between end-stations supporting IPSec, or between an end-station
and a gateway, if the gateway is being treated as a host. In example D, transport mode is
used to set up an encrypted Telnet session from Alice's PC running Cisco Secure VPN
Client software to terminate at the PIX Firewall, enabling Alice to remotely configure the PIX
Firewall securely.

AH Tunnel Versus Transport Mode


Figure 2 above shows the differences that the IPSec mode makes to AH. In transport mode, AH
services protect the external IP header along with the data payload. AH services protect all the fields
in the header that don't change in transit. The AH header goes after the IP header and before the
ESP header, if present, and other higher-layer protocols.
As you can see in Figure 2 above, In tunnel mode, the entire original header is authenticated, a new
IP header is built, and the new IP header is protected in the same way as the IP header in transport
mode.
AH is incompatible with Network Address Translation (NAT) because NAT changes the source IP
address, which breaks the AH header and causes the packets to be rejected by the IPSec peer.

ESP Tunnel Versus Transport Mode


Figure 3 above shows the differences that the IPSec mode makes to ESP. In transport mode, the IP
payload is encrypted and the original headers are left intact. The ESP header is inserted after the IP
header and before the upper-layer protocol header. The upper-layer protocols are encrypted and
authenticated along with the ESP header. ESP doesn't authenticate the IP header itself.
NOTE: Higher-layer information is not available because it's part of the encrypted payload.
When ESP is used in tunnel mode, the original IP header is well protected because the entire
original IP datagram is encrypted. With an ESP authentication mechanism, the original IP datagram
and the ESP header are included; however, the new IP header is not included in the authentication.
When both authentication and encryption are selected, encryption is performed first, before
authentication. One reason for this order of processing is that it facilitates rapid detection and
rejection of replayed or bogus packets by the receiving node. Prior to decrypting the packet, the
receiver can detect the problem and potentially reduce the impact of denial-of-service attacks.
ESP can also provide packet authentication with an optional field for authentication. Cisco IOS
software and the PIX Firewall refer to this service as ESP hashed message authentication
code (HMAC). Authentication is calculated after the encryption is done. The current IPSec standard
specifies which hashing algorithms have to be supported as the mandatory HMAC algorithms.
The main difference between the authentication provided by ESP and AH is the extent of the
coverage. Specifically, ESP doesn't protect any IP header fields unless those fields are
encapsulated by ESP (tunnel mode).
Exam Tip:-

 AH provides integrity and authentication and ESP provides integrity, authentication and
encryption.
 ESP provides authentication, integrity, and confidentiality, which protect against data
tampering and, most importantly, provide message content protection.
 ESP can be operated in either tunnel mode (where the original packet is encapsulated into a
new one) or transport mode (where only the data payload of each packet is encrypted,
leaving the header untouched).

Implement site and facility security controls

 Fault – a momentary loss of power
 Blackout – a prolonged, complete loss of power
 Sag – or dip; voltage briefly drops below the acceptable operating range
 Brownout – a prolonged sag
 Spike – or transients. Voltage briefly rises above acceptable operating range. Very brief, and
should be absorbed by surge protectors
 Noise – interfering signals emitted from cabling
 HVAC should keep rooms with electronics between 60-75 degrees Fahrenheit / 15-23 degrees
Celsius. Humidity should be maintained between 40-60%.
o Humidity too low can generate static discharges up to 20,000 volts. Only a fraction of that is
needed to damage electronic components.
 Locate server rooms and other critical components away from any water source.
 Know what mantraps and turnstiles look like
 A preaction system is the best water-based suppression system.
 Closed-head systems should be avoided as they actively contain water and present a risk in
the event of failure
 FEMA (Federal Emergency Management Agency) provides flood risk data for locations in the US
o 500-Year Flood Plain indicates there is a 1-in-500 (0.2%) chance a flood will occur in the given
location in any given year
 Types of fire extinguishers:
 Class A - Uses water to suppress common combustibles
 Class B – uses CO2, halon or soda acid to suppress liquid-based fires
 Class C – Use CO2 or other nonconductive suppressants to suppress electrical fires
 Class D – Use dry powder to suppress metal fires
Cryptographic Attacks
The basic intention of an attacker is to break a cryptosystem and to find the plaintext from the
ciphertext. To obtain the plaintext, the attacker only needs to find out the secret decryption key, as
the algorithm is already in public domain.
Hence, he applies maximum effort towards finding out the secret key used in the cryptosystem. Once
the attacker is able to determine the key, the attacked system is considered
broken or compromised.
Based on the methodology used, attacks on cryptosystems are categorized as follows −
 Ciphertext Only Attacks (COA) − In this method, the attacker has access to a set of
ciphertext(s). He does not have access to corresponding plaintext. COA is said to be
successful when the corresponding plaintext can be determined from a given set of ciphertext.
Occasionally, the encryption key can be determined from this attack. Modern cryptosystems
are guarded against ciphertext-only attacks.
 Known Plaintext Attack (KPA) − In this method, the attacker knows the plaintext for some
parts of the ciphertext. The task is to decrypt the rest of the ciphertext using this information.
This may be done by determining the key or via some other method. The best example of
this attack is linear cryptanalysis against block ciphers.
 Chosen Plaintext Attack (CPA) − In this method, the attacker has the text of his choice
encrypted. So he has the ciphertext-plaintext pair of his choice. This simplifies his task of
determining the encryption key. An example of this attack is differential cryptanalysis applied
against block ciphers as well as hash functions. A popular public key cryptosystem, RSA is
also vulnerable to chosen-plaintext attacks.
 Dictionary Attack − This attack has many variants, all of which involve compiling a ‘dictionary’.
In simplest method of this attack, attacker builds a dictionary of ciphertexts and corresponding
plaintexts that he has learnt over a period of time. In future, when an attacker gets the
ciphertext, he refers the dictionary to find the corresponding plaintext.
 Brute Force Attack (BFA) − In this method, the attacker tries to determine the key by
attempting all possible keys. If the key is 8 bits long, then the number of possible keys is 2^8 =
256. The attacker knows the ciphertext and the algorithm, and now attempts all 256 keys
one by one for decryption. The time to complete the attack would be very high if the key is
long.
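The 2^8 = 256 key space can be exhausted in a toy Python sketch (the single-byte XOR "cipher" here is an illustrative stand-in, not a real algorithm):

```python
def encrypt(data: bytes, key: int) -> bytes:
    # Toy cipher: XOR every byte with a single 8-bit key.
    return bytes(b ^ key for b in data)

secret_key = 0xA7
ciphertext = encrypt(b"HELLO WORLD", secret_key)

# Brute force: with an 8-bit key there are only 2**8 = 256 candidates,
# so trying every one and checking against known plaintext is trivial.
recovered_key = None
for candidate in range(256):
    if encrypt(ciphertext, candidate) == b"HELLO WORLD":
        recovered_key = candidate
        break
```

Each added key bit doubles the search space, which is why modern 128-bit and 256-bit keys put exhaustive search far beyond practical reach.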
 Birthday Attack − This attack is a variant of the brute-force technique. It is used against
cryptographic hash functions. When students in a class are asked about their birthdays, the
answer is one of 365 possible dates. For some pair of students to be likely to share a
birthday, we need only about 1.25 × √365 ≈ 24 students.
Similarly, if a hash function produces 64-bit hash values, there are about
1.8×10^19 possible hash values. By repeatedly evaluating the function for different inputs, the
same output is expected to be obtained after about 5.1×10^9 random inputs.
If the attacker is able to find two different inputs that give the same hash value, it is
a collision and that hash function is said to be broken.
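The √N estimate above is easy to check numerically. A minimal sketch (the 1.25 constant is an approximation; the exact value depends on the target collision probability):

```python
import math

def birthday_bound(space_size: float) -> float:
    # Expected number of random samples before some pair collides,
    # roughly 1.25 * sqrt(N) (the constant varies with the target probability)
    return 1.25 * math.sqrt(space_size)

print(round(birthday_bound(365)))       # ~24-25 students for a shared birthday
print(f"{birthday_bound(2**64):.1e}")   # ~5.4e9 inputs for a 64-bit hash
```

This is why a hash with an n-bit output offers only about n/2 bits of collision resistance.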
 Man-in-the-Middle Attack (MITM) − The targets of this attack are mostly public key cryptosystems
where key exchange is involved before communication takes place.
o Host A wants to communicate with host B, and hence requests B's public key.
o An attacker intercepts this request and sends his own public key to A instead.
o Thus, whatever host A sends to host B, the attacker is able to read.
o In order to maintain the communication, the attacker re-encrypts the data after
reading it, this time with B's real public key, and forwards it to B.
o Toward B, the attacker likewise presents his own public key as A's public key, so B
believes it is receiving messages directly from A.
 Side Channel Attack (SCA) − This type of attack is not against any particular type of
cryptosystem or algorithm. Instead, it is launched to exploit the weakness in physical
implementation of the cryptosystem.
 Timing Attacks − These exploit the fact that different computations take different amounts of
time on a processor. By measuring such timings, it may be possible to learn which particular
computation the processor is carrying out. For example, if the encryption takes a longer time,
it can indicate that the secret key is long.
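A classic software example of the timing leak described above is short-circuit string comparison, which is why constant-time comparison functions exist. A small sketch (names are illustrative):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns at the FIRST mismatching byte, so the time taken leaks
    # how many leading bytes of the guess were correct
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"s3cret-token"
guess = b"s3cret-guess"

print(naive_equal(secret, guess))          # False -- but timing depends on the matching prefix
print(hmac.compare_digest(secret, guess))  # False -- compared in constant time
```

`hmac.compare_digest` from the standard library examines every byte regardless of where the first mismatch occurs, removing the timing side channel.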
 Power Analysis Attacks − These attacks are similar to timing attacks except that the amount
of power consumption is used to obtain information about the nature of the underlying
computations.
 Fault analysis Attacks − In these attacks, errors are induced in the cryptosystem and the
attacker studies the resulting output for useful information.
Practicality of Attacks
The attacks on cryptosystems described here are largely academic, as the majority of them come from
the academic community. In fact, many academic attacks involve quite unrealistic assumptions about
the environment as well as the capabilities of the attacker. For example, in a chosen-ciphertext attack,
the attacker requires an impractical number of deliberately chosen plaintext-ciphertext pairs, which
may not be achievable at all.
Nonetheless, the fact that any attack exists should be a cause of concern, particularly if the attack
technique has the potential for improvement.
Domain 4: Communication and Network Security
Implement secure design principles in network architectures

 OSI Model – mnemonic: Please Do Not Teach Students Pointless Acronyms (Physical, Data Link,
Network, Transport, Session, Presentation, Application). Developed by ISO

 Encapsulation is when the payload has the headers and footers added as the message goes down
layers. Decapsulation is the unwinding of the message as it travels back up. This means data has
the most information at the physical layer
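Encapsulation can be sketched as nested wrapping, with strings standing in for real headers (the layer names here are real; the bracketed "headers" are placeholders, not actual wire formats):

```python
# Going DOWN the stack, each layer wraps the payload from the layer above
# with its own header (and, at the data link layer, a trailer/footer).
payload = "GET / HTTP/1.1"                    # application data

segment = f"[TCP hdr]{payload}"               # transport layer: segment
packet = f"[IP hdr]{segment}"                 # network layer: packet
frame = f"[Eth hdr]{packet}[Eth trailer]"     # data link layer: frame

print(frame)   # the bits at the physical layer carry all of this information
# Decapsulation strips each wrapper in reverse order on the way back up
```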
Functions and Protocols in the OSI Model
For the exam, you will need to know the functionality that takes place at the different layers of the
OSI model, along with specific protocols that work at each layer. The following is a quick overview of
each layer and its components.

Application

The protocols at the application layer handle file transfer, virtual terminals, network management,
and fulfilling networking requests of applications. A few of the protocols that work at this layer include

 File Transfer Protocol (FTP)
 Trivial File Transfer Protocol (TFTP)
 Simple Network Management Protocol (SNMP)
 Simple Mail Transfer Protocol (SMTP)
 Telnet
 Hypertext Transfer Protocol (HTTP)

Presentation

The services of the presentation layer handle translation into standard formats, data compression
and decompression, and data encryption and decryption. No protocols work at this layer, just
services. The following lists some of the presentation layer standards:
 American Standard Code for Information Interchange (ASCII)
 Extended Binary-Coded Decimal Interchange Code (EBCDIC)
 Tagged Image File Format (TIFF)
 Joint Photographic Experts Group (JPEG)
 Motion Picture Experts Group (MPEG)
 Musical Instrument Digital Interface (MIDI)
Session

The session layer protocols set up connections between applications; maintain dialog control; and
negotiate, establish, maintain, and tear down the communication channel. Some of the protocols that
work at this layer include

 Network File System (NFS)
 NetBIOS
 Structured Query Language (SQL)
 Remote procedure call (RPC)
Transport

The protocols at the transport layer handle end-to-end transmission and segmentation of a data
stream. The following protocols work at this layer:

 Transmission Control Protocol (TCP)
 User Datagram Protocol (UDP)
 Secure Sockets Layer (SSL)/Transport Layer Security (TLS)
 Sequenced Packet Exchange (SPX)

Network

The responsibilities of the network layer protocols include internetworking service, addressing, and
routing. The following lists some of the protocols that work at this layer:

 Internet Protocol (IP)
 Internet Control Message Protocol (ICMP)
 Internet Group Management Protocol (IGMP)
 Routing Information Protocol (RIP)
 Open Shortest Path First (OSPF)
 Internetwork Packet Exchange (IPX)
Data Link

The protocols at the data link layer convert data into LAN or WAN frames for transmission and
define how a computer accesses a network. This layer is divided into the Logical Link Control (LLC)
and the Media Access Control (MAC) sublayers. Some protocols that work at this layer include the
following:

 Address Resolution Protocol (ARP)
 Reverse Address Resolution Protocol (RARP)
 Point-to-Point Protocol (PPP)
 Serial Line Internet Protocol (SLIP)
 Ethernet
 Token Ring
 FDDI
 ATM
Physical

Network interface cards and drivers convert bits into electrical signals and control the physical
aspects of data transmission, including optical, electrical, and mechanical requirements. The
following are some of the standard interfaces at this layer:

 EIA-422, EIA-423, RS-449, RS-485
 10BASE-T, 10BASE2, 10BASE5, 100BASE-TX, 100BASE-FX, 100BASE-T, 1000BASE-T,
1000BASE-SX
 Integrated Services Digital Network (ISDN)
 Digital subscriber line (DSL)
 Synchronous Optical Networking (SONET)
For your exam you should know below information about Media Access Technologies:
 Carrier Sense Multiple Access (CSMA)
Carrier sense multiple access (CSMA) is a probabilistic media access control (MAC) protocol
in which a node verifies the absence of other traffic before transmitting on a shared
transmission medium, such as an electrical bus, or a band of the electromagnetic spectrum.

Carrier sense means that a transmitter uses feedback from a receiver to determine whether
another transmission is in progress before initiating a transmission. That is, it tries to detect
the presence of a carrier wave from another station before attempting to transmit. If a carrier
is sensed, the station waits for the transmission in progress to finish before initiating its own
transmission. In other words, CSMA is based on the principle "sense before transmit" or
"listen before talk".

Multiple access means that multiple stations send and receive on the medium.
Transmissions by one node are generally received by all other stations connected to the
medium.

CSMA with Collision Detection (CSMA/CD) (For Wired)

Carrier Sense Multiple Access With Collision Detection (CSMA/CD) is a media access
control method used most notably in local area networking using early Ethernet technology.
It uses a carrier sensing scheme in which a transmitting data station detects other signals
while transmitting a frame, and stops transmitting that frame, transmits a jam signal, and
then waits for a random time interval before trying to resend the frame.

CSMA/CD is a modification of pure carrier sense multiple access (CSMA). CSMA/CD is used
to improve CSMA performance by terminating transmission as soon as a collision is
detected, thus shortening the time required before a retry can be attempted.
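After detecting a collision, classic Ethernet stations pick their random retry delay using truncated binary exponential backoff. A minimal sketch of that slot-count calculation (the slot-time duration itself is hardware-dependent and omitted here):

```python
import random

def backoff_slots(collisions: int) -> int:
    # Truncated binary exponential backoff, as used by classic Ethernet:
    # after the n-th collision, wait a random number of slot times drawn
    # from 0 .. 2**min(n, 10) - 1 before retrying
    return random.randrange(2 ** min(collisions, 10))

for n in (1, 2, 3):
    print(f"collision {n}: wait {backoff_slots(n)} slot time(s)")
```

The doubling window is why repeated collisions spread retries out quickly instead of colliding again and again.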
Carrier sense multiple access with collision avoidance (CSMA/CA) (For Wireless)

Carrier sense multiple access with collision avoidance (CSMA/CA), in computer networking, is a
network multiple access method in which carrier sensing is used, but nodes attempt to avoid
collisions by transmitting only when the channel is sensed to be "idle". When they do transmit,
nodes transmit their packet data in its entirety.

It is particularly important for wireless networks, where the collision detection of the alternative
CSMA/CD is unreliable due to the hidden node problem.

CSMA/CA is a protocol that operates in the Data Link Layer (Layer 2) of the OSI model.
TCP/IP Model

 Developed originally by DoD. Only has 4 layers. Comparison chart given above.

Exam Tips

 Throughput - Throughput is the actual rate that information is transferred
 Latency - Latency is the delay between the sender and the receiver decoding it, this is mainly a
function of the signals travel time, and processing time at any nodes the information traverses
 Jitter - Jitter is the variation in the time of arrival at the receiver of the information
 Error Rate - Error rate is the number of corrupted bits expressed as a percentage or fraction of
the total sent bits.

VPN Protocols on OSI Model:

L2TP Protocol works at data link layer. L2TP and PPTP were both designed for individual client to
server connections; they enable only a single point-to-point connection per session. Dial-up VPNs
use L2TP often. Both L2TP and PPTP operate at the data link layer (layer 2) of the OSI model.
PPTP uses native PPP authentication and encryption services and L2TP is a combination of PPTP
and Layer 2 Forwarding protocol (L2F).
Summary of the Tunneling protocols:
Point-to-Point Tunneling Protocol (PPTP):

 Works in a client/server model
 Extends and protects PPP connections
 Works at the data link layer
 Transmits over IP networks only
Layer 2 Tunneling Protocol (L2TP):

 Hybrid of L2F and PPTP
 Extends and protects PPP connections
 Works at the data link layer
 Transmits over multiple types of networks, not just IP
 Combined with IPSec for security
IPSec:

 Handles multiple VPN connections at the same time
 Provides secure authentication and encryption
 Supports only IP networks
 Focuses on LAN-to-LAN communication rather than user-to-user
 Works at the network layer, and provides security on top of IP
Secure Sockets Layer (SSL):

 Works at the transport layer and protects mainly web-based traffic
 Granular access control and configuration are available
 Easy deployment since SSL is already embedded into web browsers
 Can only protect a small number of protocol types, thus is not an infrastructure-level VPN
solution

LAN Technologies, Protocols, and Network Topologies

 Ethernet – today its used in a physical star topology with twisted pair cables
 Bus – A straight line of devices. A is connected to B, which is connected to C. A single cable break
brings the network down.
 Ring – A is connected to Z and B, B is connected to A and C, and so on. Doesn’t really improve on
a Bus topology
 Star – Ethernet uses a star. Everything is connected to a central hub/switch/whatever. A cable
break only affects that single node. This provides fault tolerance.
 Mesh – everything is connected to everything.
 MAC Addresses – Media Access Control. 48 bits long. First 24 bits form the OUI, last 24 bit identify
the specific device.
 EUI-64 MAC Address – created for 64-bit MAC addresses. The OUI is still 24 bits, but the serial
number is the last 40 bits. Used for IPv6 interface identifiers
 ARP resolves IP addresses to MAC addresses
 ARP Cache Poisoning occurs when an attacker sends fake responses to ARP requests. This
can be countered by hardcoding ARP entries
 IPv4 – 32-bit address written as four bytes in decimal (x.x.x.x)

 CIDR – allows for many network sizes (ie, subnetting)
 Class A network is /8
 Class b is /16
 Class C is /24
 Single IP is /32
 NAT – hides private IP addresses behind a single public IP. A Pool NAT would be multiple public
IPs
 IPv6 – addresses are 128-bit instead of IPv4’s 32-bit addresses, providing 340 undecillion
addresses. Routing and address assignment are easier through autoconfiguration using a host’s
MAC address. This removes the need for DHCP.
 TCP 3-way handshake – SYN > SYN/ACK > ACK
 Ports:
 Well known – 0-1023
 Registered – 1024-49151
 Dynamic/Private/Ephemeral – 49152-65535
 Socket – IP and port: 10.10.10.100:443
 ICMP – ping, tracert, netstat, etc. Used to troubleshoot and report error conditions.
 FTP – many varieties. TCP port 21 (control connection) and TCP port 20 (data connection)
 SFTP – port 22 uses SSH to add security
 FTPS – uses TLS to add security
 TFTP – UDP 69. Used for bootstrapping
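The handshake and socket concepts above can be seen in a few lines using the standard library: `socket.create_connection()` performs the SYN > SYN/ACK > ACK exchange under the hood, and each endpoint is an IP:port socket. A self-contained loopback sketch (the printed port will vary; the OS picks it):

```python
import socket

# A loopback listener on an OS-assigned port; connect() performs the
# SYN > SYN/ACK > ACK three-way handshake for us under the hood.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.create_connection((host, port))   # handshake happens here
ip, client_port = client.getsockname()
print(f"client socket: {ip}:{client_port}")       # a socket is IP:port

client.close()
server.close()
```

Note the client side gets an ephemeral source port automatically, while well-known services listen on fixed ports like 443.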

Application Layer Protocols

 SMTP – TCP 25. Sends email between servers
 Secure SMTP (SMTPS) uses port 465
 POP – TCP 110, downloads email to a local client from server
 Secure POP uses port 995
 IMAP – TCP port 143, downloads email to local client like POP
 Secure IMAP is 993
 S/MIME- allows attachments and foreign character sets in email. Uses PKI to encrypt and
authenticate MIME-encoded email
 DNS – UDP 53 for queries; TCP 53 for zone transfers. Resolves domain names to IP addresses
 SOA – start of authority server. Contains the master record for the zone
 Weaknesses of DNS:
 Uses UDP
 No authentication
 DNSSEC – adds authentication and integrity to DNS responses and uses PKI, but offers no
confidentiality. Like a digital signature.
 SNMP – UDP 161. Used to monitor and control network devices. “Community string” is transmitted
in plain text in v1 and v2. SNMPv3 adds encryption
 HTTP and HTTPS
 DHCP – UDP 67 for servers and 68 for clients

TFTP (Trivial File Transfer Protocol) is sometimes used to transfer configuration files from equipment
such as routers, but the primary difference between FTP and TFTP is that TFTP does not require
authentication. Speed and the ability to automate are not the key differentiators.
Both of these protocols (FTP and TFTP) can be used for transferring files across the Internet. The
differences between the two protocols are explained below:
 FTP is a complete, session-oriented, general purpose file transfer protocol. TFTP is used as
a bare-bones special purpose file transfer protocol.
 FTP can be used interactively. TFTP allows only unidirectional transfer of files.
 FTP depends on TCP, is connection oriented, and provides reliable control. TFTP depends
on UDP, requires less overhead, and provides virtually no control.
 FTP provides user authentication. TFTP does not.
 FTP uses well-known TCP port numbers: 20 for data and 21 for connection dialog. TFTP
uses UDP port number 69 for its file transfer activity.
 The Windows NT FTP server service does not support TFTP because TFTP does not
support authentication.
 Windows 95 and TCP/IP-32 for Windows for Workgroups do not include a TFTP client
program.

4.2 Secure network components

Hardware
 Hub – layer 1 device. Provides no confidentiality or security because it does not isolate
traffic. Half duplex, meaning it cannot send and receive simultaneously.
 Repeater – has two ports. Receives traffic on one port and repeats it out the other
 Switches – uses a SPAN (cisco) or mirror port to mirror all traffic through this particular port,
normally to send it to an IDS/IPS. One issue here can be bandwidth overload.
 Routers – layer 3 device routes traffic from one network to another. Often times routers are default
gateways
VLANS
 Separate broadcast domains, segment traffic which provides defense in depth
Firewalls

 All firewalls are multi-homed, meaning they are connected to multiple networks (WAN and LAN)
 Allow/block traffic using:
 Ingress rules – traffic coming in
 Egress rules – traffic going out
 Generally deployed between a private network and a link to the internet.
 Use an “implicit deny” rule
 Rules at the top of an ACL take priority. Traffic that meets the first applicable rule will be used.
 Screened-Host Architecture -when a router forces traffic to only go to a Bastion Host, which alone
can access LAN. A Bastion Host is a heavily secured device, such as a firewall, that would then
allow traffic to LAN. Creates a SPOF.
 DMZ - “perimeter” or “edge” network. Two firewalls, public available resources sit in between them
to allow things like HTTPS and DNS through. The second firewall would stop anything from coming
into the internal network. DMZs can be accomplished with a single firewall, but creates
opportunities for misconfiguration
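The rule-ordering and implicit-deny behavior described above can be sketched as first-match evaluation over an ordered list. The rule set and field names below are invented for illustration; real firewalls match on many more attributes:

```python
import ipaddress

# First-match evaluation over an ordered rule list, ending in an
# implicit deny -- matching only on direction, source IP, and port,
# the way a simple packet filter would. Rule set is made up.
RULES = [
    {"dir": "ingress", "src": "any",        "port": 443, "action": "allow"},
    {"dir": "ingress", "src": "10.0.0.0/8", "port": 22,  "action": "allow"},
]

def evaluate(direction: str, src_ip: str, port: int) -> str:
    for rule in RULES:                      # rules at the top take priority
        if rule["dir"] != direction or rule["port"] != port:
            continue
        if rule["src"] != "any" and \
           ipaddress.ip_address(src_ip) not in ipaddress.ip_network(rule["src"]):
            continue
        return rule["action"]               # first applicable rule wins
    return "deny"                           # implicit deny: nothing matched

print(evaluate("ingress", "10.1.2.3", 22))   # allow (matches rule 2)
print(evaluate("ingress", "8.8.8.8", 22))    # deny  (implicit)
```

Because evaluation stops at the first match, putting a broad allow above a narrow deny silently disables the deny, which is why rule order matters on real ACLs.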
 SEVERAL TYPES OF FIREWALLS:
Packet Filtering – works at layer 3 where the PDU is the packet. Filtering decisions are made on
each individual packet. Just looks at IP addresses and port numbers (header)
Stateful – stores info about who initiates a session, and blocks unsolicited communications
(nothing from the outside that didn’t originate internally can get through). This information is
stored in the firewall’s “state table”, which can be circumvented by flooding it with connection
requests.
Application Level Firewall – act as an intermediary server and proxies connection between
client and application server. They can see the entire packet as the packet won’t be encrypted
until layer 6. Other firewalls can only inspect the packet but not the payload. Application
firewalls can then detect unwanted applications or services attempting to bypass the firewall
Next-Gen Firewalls – bundle many capabilities together, such as deep-packet inspection,
application-level inspection, IDS/IPS, integrated threat intel, etc.

FIREWALL GENERATIONS

Packet Filtering Firewall - First Generation
 Screening Router
 Operates at Network and Transport level
 Examines Source and Destination IP Address
 Can deny based on ACLs
 Can specify Port
Application Level Firewall - Second Generation
 Proxy Server
 Copies each packet from one network to the other
 Masks the origin of the data
 Operates at layer 7 (Application Layer)
 Reduces network performance since it has to analyze each packet and decide what
to do with it.
 Also Called Application Layer Gateway
Stateful Inspection Firewalls – Third Generation
 Packets Analyzed at all OSI layers
 Queued at the network level
 Faster than Application level Gateway
Dynamic Packet Filtering Firewalls – Fourth Generation
 Allows modification of security rules
 Mostly used for UDP
 Remembers all of the UDP packets that have crossed the network’s perimeter, and it
decides whether to enable packets to pass through the firewall.
Kernel Proxy – Fifth Generation
 Runs in NT Kernel
 Uses dynamic and custom TCP/IP-based stacks to inspect the network packets and
to enforce security policies.

Cabling
 Electromagnetic Interference – caused by electricity and causes unwanted signals, or noise.
 Crosstalk occurs when one wire leaks into another
 Attenuation is the weakening of a signal as it travels further from the source
 Twisted Pair cabling is the most common type of cabling. They are copper cables twisted with a
pair like ethernet
 Unshielded Twisted Pair are the twisted pairs inside the cable. The twists provide protection
against EMI.
 Shielded Twisted Pair has a sheath around each individual pair. This provides better
protection than UTP, but it is more expensive and more difficult to work with. Prices of fiber
are getting low enough that STP doesn’t make sense
 Coax cable – More resistant to EMI than UTP or STP, and provides a higher bandwidth
 Fiber Optic Cable – uses light pulses. Cable is made of glass so it is very fragile. It is immune to
EMI and much faster than coax. Several types:
 Multimode – many paths of light. Shorter distance and lower bandwidth. Uses LED as its light
source
 Single mode – one path of light; long haul, longer distances. Uses a laser
 Multiplexing – sends multiple signals over different wavelengths (colors) of light. Exceeds speeds of 10 Gbps

For your exam you should know below information about WAN Technologies:

Point-to-point protocol

PPP (Point-to-Point Protocol) is a protocol for communication between two computers using a serial
interface, typically a personal computer connected by phone line to a server. For example, your
Internet server provider may provide you with a PPP connection so that the provider's server can
respond to your requests, pass them on to the Internet, and forward your requested Internet
responses back to you.
PPP uses the Internet protocol (IP) (and is designed to handle other protocol as well). It is
sometimes considered a member of the TCP/IP suite of protocols. Relative to the Open Systems
Interconnection (OSI) reference model, PPP provides layer 2 (data-link layer) service. Essentially, it
packages your computer's TCP/IP packets and forwards them to the server where they can actually
be put on the Internet.
PPP is a full-duplex protocol that can be used on various physical media, including twisted pair or
fiber optic lines or satellite transmission. It uses a variation of High-Level Data Link Control (HDLC)
for packet encapsulation.
PPP is usually preferred over the earlier de facto standard Serial Line Internet Protocol (SLIP)
because it can handle synchronous as well as asynchronous communication. PPP can share a line
with other users and it has error detection that SLIP lacks. Where a choice is possible, PPP is
preferred.



X.25
 X.25 is an ITU-T standard protocol suite for packet switched wide area network (WAN)
communication.
 X.25 is a packet switching technology which uses carrier switch to provide connectivity for
many different networks.
 Subscribers are charged based on the amount of bandwidth they use. Data is divided into
128-byte packets and encapsulated in High-Level Data Link Control (HDLC) frames.
 X.25 works at the network and data link layers of the OSI model.


Frame Relay
 Works as packet switching
 Operates at data link layer of an OSI model
 Companies that pay more to ensure that a higher level of bandwidth will always be available,
pay a committed information rate or CIR
Two main types of equipment are used in Frame Relay:

1. Data Terminal Equipment (DTE) - Usually a customer-owned device that provides
connectivity between the company's own network and the frame relay network.
2. Data Circuit-Terminating Equipment (DCE) - Service provider device that does the actual data
transmission and switching in the frame relay cloud.

The frame relay cloud is the collection of DCEs that provides switching and data
communication functionality. Frame relay is an any-to-any service.


Integrated Services Digital Network (ISDN): Enables data, voice, and other types of traffic to travel
over a medium in a digital manner, over lines previously used only for analog voice transmission.
Runs on top of the Plain Old Telephone Service (POTS); the same copper telephone wire is used.
Provides a digital point-to-point circuit-switched medium.



Asynchronous Transfer Mode (ATM)
 Uses Cell switching method
 High speed network technology used for LAN, MAN and WAN
 Like frame relay it is connection oriented technology which creates and uses fixed channel
 Data are segmented into fixed size cell of 53 bytes
 Some companies have replaced FDDI back-ends with ATM

Multiprotocol Label Switching (MPLS)

Multiprotocol Label Switching (MPLS) is a standard-approved technology for speeding up network
traffic flow and making things easier to manage. MPLS involves setting up a specific path for a
given sequence of packets, identified by a label put in each packet, thus saving the time needed for
a router to look up the address to the next node to forward the packet to.
MPLS is called multiprotocol because it works with the Internet Protocol (IP), Asynchronous
Transfer Mode (ATM), and frame relay network protocols.
In reference to the Open Systems Interconnection, or OSI model, MPLS allows most packets to be
forwarded at Layer 2 (switching) level rather than at the Layer 3 (routing) level.
In addition to moving traffic faster overall, MPLS makes it easy to manage a network for quality of
service (QoS). For these reasons, the technique is expected to be readily adopted as networks
begin to carry more and different mixtures of traffic.

VoIP Threat Types & Impact: see the summary table in the GIAC paper “VoIP Security
Vulnerabilities”: https://www.giac.org/paper/gcia/1461/voip-security-vulnerabilities/112292

1. Class A, the addresses are 0.0.0.0 – 127.255.255.255.
2. For Class B networks, the addresses are 128.0.0.0 – 191.255.255.255.
3. For Class C, the addresses are 192.0.0.0 – 223.255.255.255.
4. For Class D, the addresses are 224.0.0.0 – 239.255.255.255.
5. For Class E, the addresses are 240.0.0.0 – 255.255.255.255.

Classless Inter-Domain Routing (CIDR). The class-defining high-order bits are the leading bits of each binary address shown below.

1. For Class A, the addresses are 0.0.0.0 - 127.255.255.255 The lowest Class A address is
represented in binary as 00000000.00000000.0000000.00000000
2. For Class B networks, the addresses are 128.0.0.0 - 191.255.255.255. The lowest Class B
address is represented in binary as 10000000.00000000.00000000.00000000
3. For Class C, the addresses are 192.0.0.0 - 223.255.255.255 The lowest Class C address is
represented in binary as 11000000.00000000.00000000.00000000
4. For Class D, the addresses are 224.0.0.0 - 239.255.255.255 (Multicast) The lowest Class D
address is represented in binary as 11100000.00000000.00000000.00000000
5. For Class E, the addresses are 240.0.0.0 - 255.255.255.255 (Reserved for future usage). The
lowest Class E address is represented in binary as
11110000.00000000.00000000.00000000

Classful IP Address Format
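The class boundaries and CIDR sizes above can be checked with the standard-library `ipaddress` module. A small sketch using a documentation-range address:

```python
import ipaddress

# The class of a legacy address can be read off its high-order bits;
# the ipaddress module handles the modern CIDR view directly.
addr = ipaddress.ip_address("192.0.2.1")
print(f"{int(addr):032b}"[:3])               # 110 -> historically Class C

net = ipaddress.ip_network("192.0.2.0/24")   # a /24, the old Class C size
print(net.num_addresses)                     # 256
print(net.netmask)                           # 255.255.255.0
```

Leading bits 0, 10, 110, 1110, and 1111 correspond to Classes A through E, matching the address ranges listed above.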
Domain 5: Identity and Access Management
IAAA Five elements:
Identification – claiming to be someone
Authentication – proving you are that person
Authorization – allows you to access resources
Auditing – records a log of what you do
Accounting – reviews log files to hold subjects accountable
 Non-repudiation – prevents entities from denying they took an action. This is
accomplished by auditing and digital signatures

Manage identification and authentication of people, devices and services

Authentication
 5 Types
 Type 1 – something you know
 Type 2 – something you have. Tokens, smart cards, ID badge, etc.
 Memory cards have a magnetic strip with info. They are easily copied
 Smartcards utilize microprocessors and cryptographic certificates. Often paired with a PIN
 Type 3 – something you are
 Type 4 – Somewhere you are (IP address/location)
 Type 5 – Something you do – signature, pattern lock
 Types of Biometric Authentication Errors:
 Type 1 – when a valid subject is not authenticated. Also known as False Rejection Rate (FRR)
 Type 2 – when an invalid subject is incorrectly authenticated. Also known as False
Acceptance Rate (FAR)
 The point where these intersect is called the Crossover Error Rate (CER) and is used as a metric
for evaluating biometric authentication solutions. This is discussed later in more detail
 Any combination of these is 2FA or multifactor authentication
 Types of passwords
 Static – just a normal password. Most common and weakest type
 Passphrases – long, static passwords combining multiple words
 One-time passwords – Very secure but can be hard to implement across the board
 Dynamic – tokens like FreeOTP and RSA
 Cognitive – like recovery questions

Password Attacks

 Passwords are located in the SAM on Windows and in /etc/passwd (hashes in /etc/shadow) on Linux
 Dictionary attacks
 Implement maximum attempts, lockout time, etc
 Brute Force
 Rainbow tables
 Password guessing
 Clipping levels are a subset of sampling, where alerts are created when behavior exceeds a
certain threshold.
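A clipping level can be sketched as a simple counter with an alert threshold. The threshold value and names here are invented for illustration:

```python
from collections import Counter

# A clipping level: alert only once failed logins exceed a threshold,
# rather than on every single failure. Threshold value is made up.
CLIPPING_LEVEL = 3
failures = Counter()

def record_failure(user: str) -> bool:
    """Return True when this failure pushes the user past the clipping level."""
    failures[user] += 1
    return failures[user] > CLIPPING_LEVEL

events = ["alice", "alice", "bob", "alice", "alice"]
alerts = [user for user in events if record_failure(user)]
print(alerts)    # ['alice'] -- only alice's 4th failure crosses the threshold
```

Ordinary mistyped passwords stay below the threshold, so only sustained abnormal behavior generates an alert.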

Type 2 Authentication

 Synchronous Dynamic Token:
 Syncs with a central server and uses time to change values. Examples include RSA SecurID,
Google Authenticator, etc. Relies on timing or clock mechanisms
 Asynchronous Dynamic Token
 Not synced with a central server. Typically uses a challenge-response mechanism: the
server issues a challenge and the token computes the matching response.
Type 3 Authentication

 Steps:
 Enrollment – initial registering of user with the biometric system, such as taking their fingerprints
 Throughput – time required for users to actually authenticate, such as swiping a badge to get in
each morning. Should not exceed 6-10 seconds
 Fingerprints are very common. They measure ridge endings, bifurcations and other details of the
finger, called minutiae. (Know that these terms are associated with fingerprinting.) The entire
fingerprint isn’t normally scanned; a scanner only needs a few points that match your
enrollment print exactly to authenticate you.
 Retina Scans look at the blood vessels in your eyes. This is the second most accurate biometric
system but is rarely used because of health risks and invasion of privacy issues by revealing health
information
 Iris scan – looks at the colored portion of your eye. Works through contact lenses and glasses.
Each person’s two irises are unique, even among twins. This is the most accurate biometric
authentication factor. The primary benefit of iris scanning is the fact that irises do not change as
often as other biometric factors
 Hand Geometry/Palm Scans - require a second form of authentication. They aren’t
reliable and can’t even determine if a person is alive
 Keyboard dynamics – rhythm of keypresses, how hard someone presses each key, speed of
typing. Cheap to implement, somewhat effective.
 Signature Dynamics – same thing, just a physical signature
 Voiceprint – not secure, vulnerable to recordings, voices may change due to illness and other
factors.
 Facial Scans – like iPhone face-unlock feature.
 All biometric factors can give incorrect results and are subject to:
 False Negatives – “False Rejection Rate (FRR)” Type 1 Error. Incorrectly rejects someone
 False Positive – “False Acceptance Rate (FAR)” Type 2 Error. Incorrectly allows access
Always remember: a Type 2 error (false acceptance) is worse than a Type 1 error, because it causes more damage to security.
 Adjust sensitivity until you reach an acceptable Crossover Error Rate (CER), which
is where FAR and FRR intersect. Lower is better, so use CER as a metric when comparing
vendor products
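The FRR/FAR crossover can be illustrated numerically. The rates below are invented for illustration; real devices publish measured curves:

```python
# FRR falls and FAR rises as the match threshold is relaxed; the CER is
# where the two curves cross. The rates below are invented for illustration.
thresholds = [1, 2, 3, 4, 5]
frr = [0.10, 0.07, 0.05, 0.03, 0.01]   # Type 1: false rejection rate
far = [0.01, 0.02, 0.05, 0.08, 0.12]   # Type 2: false acceptance rate

cer_idx = min(range(len(thresholds)), key=lambda i: abs(frr[i] - far[i]))
print(f"CER at threshold {thresholds[cer_idx]}: {frr[cer_idx]:.2f}")   # threshold 3, CER 0.05
```

A vendor whose curves cross at 0.05 is less accurate than one whose curves cross at 0.01, which is why a lower CER is better.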
 Reasons against biometrics:
 Many people feel it is intrusive and has health concerns
 Time for enrollment and verification can be excessive
 No way to revoke biometrics

Integrate identity as a third-party service
Identity Management

 Centralized Access Control – uses one logical point for access, like a domain controller. Can
provide SSO and AAA services (Authentication, Authorization and Accountability).
 SSO is more convenient because a user only has to authenticate once. Examples include
Kerberos and Sesame (EU version of Kerberos). A Federation refers to two or more
companies that share IAM systems for SSO.
 A Federated Identity is an identity that can be used across those different services
 Finding a common language is often a challenge with federations.
 SAML – security assertion markup language is commonly used to exchange authentication and
authorization info between federated organizations. Used to provide SSO capabilities for
browser access.
 OpenID is similar to SSO, but is an open standard for user authentication by third parties.
 OAuth is an open standard for authorization (not authentication) to third parties. Ex: if you have a
LinkedIn account, the system might ask you to let it have access to your Google contacts.
 OpenID Connect (OIDC), an authentication layer built on top of OAuth 2.0, combines
authentication and authorization and is quickly replacing the original OpenID.

Exam Tip :

XACML is the OASIS standard that is most commonly used by SDN systems. XACML is an
Extensible Markup Language (XML)-based open standard developed by OASIS that is typically used
to define access control policies. These policies can be either attribute-based or role-based.

SAML is an OASIS standard that is most commonly used by web applications to create a single
sign-on (SSO) experience. SAML is an XML-based open standard. SAML can be used to exchange
authentication and authorization information. Similar to OpenID and Windows Live ID, a SAML-
based system enables a user to gain access to multiple independent systems on the Internet after
having authenticated to only one of those systems.

SPML is an XML-based open standard developed by OASIS. SPML is used for federated identity
SSO. Unlike SAML, SPML is also based on Directory Services Markup Language (DSML). DSML is
an XML-based technology that can be used to present Lightweight Directory Access Protocol
(LDAP) information in XML format.

OAuth 2.0 is not an OASIS standard. Instead, OAuth 2.0 is an open standard defined in Request for
Comments (RFC) 6749. OAuth 2.0 is an authorization framework that provides a third-party
application with delegated access to resources without providing the owner's credentials to the
application. There are four roles in OAuth: the resource owner, the client, the resource server, and
the authorization server. The resource owner is typically an end user. The client is a third-party
application that the resource owner wants to use. The resource server hosts protected resources,
and the authorization server issues access tokens after successfully authenticating the resource
owner; the resource server and authorization server are often the same entity but can be separate.
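The four OAuth 2.0 roles described above can be sketched in a few lines of Python. This is a toy in-memory model, not a real OAuth library; the class and method names are my own:

```python
import secrets

# Toy in-memory sketch of the four OAuth 2.0 roles (RFC 6749).
# Class and method names are illustrative, not a real OAuth library API.
class AuthorizationServer:
    """Issues access tokens after authenticating the resource owner."""
    def __init__(self):
        self.tokens = {}                          # token -> (owner, scope)

    def issue_token(self, owner: str, scope: str) -> str:
        token = secrets.token_hex(16)             # opaque bearer token for the client
        self.tokens[token] = (owner, scope)
        return token

class ResourceServer:
    """Hosts protected resources; often the same entity as the authorization server."""
    def __init__(self, auth: AuthorizationServer):
        self.auth = auth

    def read_contacts(self, token: str) -> str:
        owner, scope = self.auth.tokens.get(token, (None, None))
        if scope != "contacts.read":
            raise PermissionError("token missing or lacks required scope")
        return f"{owner}'s contacts"

auth = AuthorizationServer()
rs = ResourceServer(auth)
# The resource owner ("alice") lets a client app read her contacts
# without ever handing the app her password:
tok = auth.issue_token("alice", "contacts.read")
```

The client holds only the token, so revoking its access means deleting the token; the owner's credentials never change hands.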
Domain 7: Security Operations

Forensics

 Digital Forensics – focuses on the recovery and investigation of material found in digital devices,
often related to computer crime. Closely related to incident response as it is based on gathering
and protecting evidence. Biggest difference is that it’s not what you know, it’s what you can prove in
court. Evidence is much more valuable.
International Organization of Computer Evidence’s 6 Principles for Computer Forensics:
1. All of the general forensic and procedural principles must be applied
2. Actions taken should not change evidence
3. Person investigating should be trained for the purpose
4. All activity must be fully documented, preserved and available for review
5. An individual is responsible for all actions taken with respect to digital evidence
6. Any agency, which is responsible for seizing/accessing/storing/transferring digital evidence is
responsible for compliance with these principles.
 Binary images are required for forensics work. You never work on the original media. A binary
image is exactly identical to the original, including deleted files.
 Certified forensic tools include Norton Ghost, FTK Imager and EnCase
 Four types of disk-based forensic data:
 Allocated space – normal files
 Unallocated space – deleted files
 Slack space – leftover space at the end of clusters. Contains fragments of old files
 Bad blocks – ignored by OS. May contain hidden data
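Slack space from the list above is easy to compute: it is whatever remains in a file's last allocated cluster. A small sketch, assuming a 4,096-byte cluster size (a common but not universal default):

```python
# Slack space sketch: bytes left over in a file's last allocated cluster,
# where fragments of older, deleted files may survive.
# Assumes a 4,096-byte cluster size, a common (but not universal) default.
def slack_bytes(file_size: int, cluster_size: int = 4096) -> int:
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder
```

For example, a 5,000-byte file occupies two 4,096-byte clusters, leaving 3,192 bytes of slack at the end of the second cluster.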

Understand requirements for investigation types

 Criminal vs. Civil Law – Criminal law has higher standards for evidence.
 Administrative Law
 Also called Regulatory Law. Consists of regulations like HIPAA
 Legal aspects of investigations
 Evidence –Real Evidence (physical objects), Direct Evidence (Witness testimony),
Circumstantial Evidence (Indirect evidence of guilt, can support other evidence but is inadequate
for a conviction alone.)
 Best evidence is going to be the original documents/hard drives etc
 Secondary would include copies of original evidence, log files, etc
 Evidence integrity contingent on hashes
 Hearsay is inadmissible in court. In order for evidence to be admissible, it must be relevant to a
fact at issue, the fact must be material to the case, and the evidence must have been legally
collected.
 Evidence can be surrendered, obtained through a subpoena which compels the owner to
surrender it, or forcefully obtained before a subject has an opportunity to alter it through a
warrant
 Chain of custody – documents entirely where evidence is at all times. If there is any lapse, the
evidence will be deemed inadmissible
 Reasonable Searches - 4th amendment protects against unreasonable search and seizure.
Searches require warrants
 Exceptions exist when an object is in plain sight, at public checkpoints, or under exigent
circumstances (an immediate threat to human life or of evidence being destroyed).
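The note above that evidence integrity is contingent on hashes can be sketched with Python's standard hashlib: hash the binary image at acquisition, record the digest in the chain-of-custody log, and re-hash the working copy before analysis to prove it was not altered.

```python
import hashlib

# Sketch of hash-based evidence integrity: the digest taken at acquisition is
# recorded in the chain-of-custody log; re-hashing later proves no alteration.
def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

acquired_image = b"\x00\x01 stand-in for a binary disk image"
baseline = sha256_of(acquired_image)          # logged at acquisition time

def still_intact(working_copy: bytes, baseline: str) -> bool:
    """True if the working copy still matches the acquisition hash."""
    return sha256_of(working_copy) == baseline
```

A single flipped byte changes the digest entirely, so any tampering between acquisition and analysis is detectable.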
Conduct logging and monitoring activities

Incident Response and Management

Involves the monitoring and detection of security events. You need clear processes and responses.
What is a computer security incident? An event that has a negative outcome with respect to the
confidentiality, integrity, or availability (CIA) of one or more assets as the result of a deliberate
attack or intentional malicious action on the part of one or more users.

NIST SP 800-61 R2 "Computer Security Incident Handling Guide" defines a computer security
incident as "a violation or imminent threat of violation of computer security policies, acceptable use
policies, or standard security practices."
Understand that for the exam incident = computer security incident
Incident Response Steps: (on-going & feedback loop between Lessons Learned & Detection)

Steps to incident response:

1. Detection/Identification –IDS | IPS | A/V software | Continuous Monitoring | End User Awareness.
Analyzing events to determine if a security incident has taken place through the examination of
log files.
2. Response/Containment – The use of a Computer Security Incident Response Team (CSIRT) or
Computer Incident Response Team (CIRT) is a FORMALIZED response. Remember that volatile
evidence exists in all systems; if they are turned off before a response team has been able to
examine them, data/evidence will be lost. Isolation is good; powering off is bad. Prevent further
damage by isolating traffic, taking the system offline, etc. You would often make binary images of
the systems involved.
3. Mitigation/Eradication – Contain the damage and clean the system.
4. Reporting – up the chain within the organization & perhaps externally to Law Enforcement/official
agencies. Notifying proper personnel. Two kinds
i. Technical – appropriate technical individuals
ii. Non-technical – stakeholders, business owners, regulators and auditors
5. Recovery – putting the system back into production. Monitoring to see if the attack resumes
6. Remediation – Root cause analysis is performed to determine and patch the vulnerability that
allowed the incident. New processes to prevent recurrence are created.
7. Lessons Learned - after action meeting to determine what went wrong, what went well and what
could be improved on. Final report delivered to management.

Forensics mnemonic - I Prefer Cheesy Eggs and Potatoes Diced

IdentificationPreservationCollectionExaminationAnalysisPresentationDiscussion

Domain Name System :


There are many threats to a user on his or her way to a website, one of which is a compromise of
the name resolution process. When a user wants to go to bank.com to check his balance, he cannot
be sure he will reach the real site if name resolution is compromised. If DNS settings are changed
on the user's computer or the DNS server itself is compromised, the user can land on a malicious
website.

People aren't good at remembering 32-bit IP addresses for the sites they wish to visit, so we created
DNS, the Domain Name System. It resolves domain.com to an IP address so that a web browser or
other application can find the server by domain name alone.
Older versions of Windows were particularly vulnerable to name resolution attacks
because a simple user account could change settings in the name resolution process. Since the user
can do it, malware can do it for the user, so now www.bank.com points to a malicious IP address
hosting a copy of the legitimate website, tricking users into logging in and giving up their usernames
and passwords.

CWBLHD - Can We Buy Large Hard Drives used to be the mnemonic for remembering the name
resolution process:
C = Cache: Has the name already been resolved and is it in my local cache?
W = WINS Server: Does the WINS server know the IP Address of the NetBIOS host name?
B = Broadcast: Does a nearby host respond to name resolution requests?
L = LMHosts File: Check the LMHosts file in the C:\Windows\System32\drivers\etc directory.
H = Hosts File: In C:\Windows\System32\drivers\etc, you will find a file called the Hosts file. Look at
it and you'll see some example name entries. It's locked while in use now but it can be edited and
malicious names can be entered so that www.bank.com now points to a malicious IP Address.
D = DNS: Domain Name Resolution is the last step in the name resolution process when it should
have been the first and most trusted.
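The CWBLHD lookup order above can be modeled as a toy resolver: earlier sources win, so a poisoned Hosts-file entry beats the real DNS answer. The tables and addresses here are made up for illustration:

```python
# Toy model of the legacy Windows lookup order above: cache first, Hosts-style
# files next, DNS last. Tables and addresses are made up for illustration.
cache = {"intranet.local": "10.0.0.5"}          # previously resolved names
hosts_file = {"www.bank.com": "203.0.113.9"}    # an attacker-edited Hosts entry

def dns_query(name: str) -> str:
    return "198.51.100.1"                       # stand-in for the real DNS answer

def resolve(name: str) -> str:
    for table in (cache, hosts_file):           # earlier sources win
        if name in table:
            return table[name]
    return dns_query(name)                      # DNS is only consulted last
```

Resolving www.bank.com returns the attacker's address even though DNS knows better, which is exactly the Hosts-file attack described above.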

Anyhow, name resolution security is a critical component so take it seriously and lock down your
DNS Server.

Here's a mnemonic for the stages of BCP/DRP:

Plain Brown Potatoes Raise Plain Thin Men


 Policy
 BIA (Business Impact Analysis)
 Preventative Measure
 Recovery Measures
 Plan
 Testing
 Maintenance
Domain 8: Software Development Security

Software Development Life Cycle

SDLC, mnemonic 1: Info Assurance Is Out Dated, I AM INTELLIGENT OPTIMISTIC DETERMINED


1. Initiate
2. Acquire - the 'what'
3. Implement
4. Operations
5. Disposal
SDLC mnemonic 2:
IDIOD, pronounced "IDIOT." The first I is initiation and the second I is implementation. The last thing
you do with anything is throw it away, so the final D is disposal.
1. Initiation
2. Development or Acquisition
3. Implementation ------ Certification and Accreditation here.
4. Operation
5. Disposal
Secure-SDLC, mnemonic 3: Re Do Damn Test Right
1. Req Gather
2. Design
3. Develop
4. Test
5. Release
Mnemonic 4 SDLC: “Please fry some dead animals to catch the right man.”
Please (Project initiation) idea, objective
Fry (functional requirements) what will software do? Formalize security requirements
Some (system design specs) architecture, data flow
Dead (develop & implement) coding
Animals (Acceptance) system must meet objective – create test data, test code
To (Testing & Evaluation of Controls) tests performed, expected & unexpected
Catch (Certification & Accreditation) or Authorization – evaluate against standard
The (Transition to Production/Implementation) Go live
Right (Revisions & System Replacement) change management if revisions needed, sec audits
Man (Maintenance & Operations) fully live, system being used – risk analysis, backup & recovery
tests, re-certifications.

 Project initiation - Feasibility, cost, risk analysis, Management approval, basic security objectives
 Functional analysis and planning - Define need, requirements, review proposed security controls
 System design specifications - Develop detailed design specs, Review support documentation,
Examine security controls
 Software development - Programmers develop code. Unit testing Check modules. Prototyping,
Verification, Validation
 Acceptance testing and implementation - Separation of duties, security testing, data validation,
bounds checking, certification, accreditation, part of release control
 System Life Cycle (SLC) (extends beyond SDLC)
 Operations and maintenance - release into production.
 Certification/accreditation
 Revisions/Disposal - remove. Sanitization and destruction of unneeded data
Change Management:

1. Request the change. Once personnel identify desired changes, they request the change.
Some organizations use internal websites, allowing individuals to submit change requests via
a web page. The website automatically logs the request in a database, which allows
personnel to track the changes. It also allows anyone to see the status of a change request.
2. Review the change. Experts within the organization review the change. Personnel reviewing a
change are typically from several different areas within the organization. In some cases, they
may quickly complete the review and approve or reject the change. In other cases, the change
may require approval at a formal change review board after extensive testing.
3. Approve/reject the change. Based on the review, these experts then approve or reject the
change. They also record the response in the change management documentation. For
example, if the organization uses an internal website, someone will document the results in
the website’s database. In some cases, the change review board might require the creation of
a rollback or back-out plan. This ensures that personnel can return the system to its original
condition if the change results in a failure.
4. Schedule and implement the change. The change is scheduled so that it can be implemented
with the least impact on the system and the system’s customer. This may require scheduling
the change during off-duty or nonpeak hours.
5. Document the change. The last step is the documentation of the change to ensure that all
interested parties are aware of it. This often requires a change in the configuration
management documentation. If an unrelated disaster requires administrators to rebuild the
system, the change management documentation provides them with the information on the
change. This ensures they can return the system to the state it was in before the change.
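The five change management steps above can be sketched as a minimal state machine. The status names are my own shorthand, not from any standard:

```python
# Minimal change-request state machine following the five steps above.
# Status names are illustrative shorthand, not from any standard.
STEPS = ["requested", "reviewed", "approved", "scheduled", "documented"]

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.status = STEPS[0]                  # step 1: request the change

    def advance(self) -> str:
        i = STEPS.index(self.status)
        if i + 1 == len(STEPS):
            raise ValueError("change is already fully documented")
        self.status = STEPS[i + 1]              # review -> approve -> schedule -> document
        return self.status

cr = ChangeRequest("patch the web server")
while cr.status != "documented":
    cr.advance()
```

Forcing every change through the same ordered states is what lets a tracking database show the status of any request at a glance.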

Together, change and configuration management techniques form an important part of the
software engineer’s arsenal and protect the organization from development-related security
issues.
The change management process has three basic components:
Request Control - provides an organized framework within which users can request
modifications, managers can conduct cost/ benefit analysis, and developers can prioritize
tasks.
Change Control - provides an organized framework within which multiple developers can
create and test a solution prior to rolling it out into a production environment. Change control
includes conforming to quality control restrictions, developing tools for update or change
deployment, properly documenting any coded changes, and restricting the effects of new
code to minimize diminishment of security.
Release Control - Once the changes are finalized, they must be approved for release
through the release control procedure.

Software Capability Maturity model (CMM):

Quality of software is a direct function of the quality of its development and maintenance processes.

Defined by Carnegie Mellon University's SEI (Software Engineering Institute), the CMM describes the
procedures, principles, and practices that underlie software development process maturity.
Levels 1-2 are REACTIVE; levels 3-5 are PROACTIVE.
Five levels:
1. Initial – competent people, informal processes, ad hoc; absence of formal process
2. repeatable – project management processes, basic life-cycle management processes
3. defined – engineering processes, presence of basic life-cycle management processes and reuse
of code, use of requirements management, software project planning, quality assurance,
configuration management practices
4. managed – product and process improvement, quantitatively controlled
5. Optimizing – continuous process improvement. Works with the IDEAL model.

Programming Concepts

 Machine Code is binary language built into a CPU. Just above that is assembly language, which
are low level commands. Humans use source-code and convert it into machine code with
compilers.
 Interpreters can translate each line of code into machine language on the fly while the program
runs
 Bytecode is an intermediary form between source and machine code ready to be executed in a
Java Virtual Machine

Procedural and Object-Oriented Languages

 Procedural – uses subroutines, procedures and functions, step-by-step. Examples include C and
FORTRAN
 Object-oriented – define abstract objects through the uses of classes, attributes and methods.
Examples include C++ and Java
 A class is a collection of common methods that define actions of objects

Computer-Aided Software Engineering (CASE)

 Programs that assist in creation and maintenance of other programs. Examples:


 compilers, assemblers, linkers, translators, loaders/debuggers, program editors, code
analyzers and version control mechanisms
Databases

 Databases are structured collections of data that allow queries (searches), insertions, deletions and
updates
 Database Management Systems are designed to manage the creation, querying, updating and
administration of databases. Examples include MySQL, PostgreSQL, Microsoft SQL Server, etc
Types of databases:

Relational (RDBMS)– most common. Uses tables which are made up of rows and columns.
 A row is a database record, called a tuple. The number of rows is referred to as a table’s
cardinality.
 A column is called an attribute. The number of columns is referred to as a table’s degree

 Entries in relational databases are linked by relationships:


• Candidate Keys – are a subset of attributes that can be used to uniquely ID any record in a
table. No two records will ever contain the same values composing a candidate key. A table will
normally have more than one candidate key, and there is no limit to how many candidate keys
can exist in a table
 Primary Keys – selected from the set of candidate keys for a table to be used to uniquely ID the
records of a table. Each table will only have one.

 There is no limit to how many candidate keys can be in a table, however there can only be one
Primary
 Polyinstantiation is the concept of allowing multiple records that seem to have the
same primary key values into a database at different classification levels. This
means it can be used to prevent unauthorized users from determining classified
info by noticing the absence of info normally available to them
 Foreign Keys – enforce relationships between two tables, also known as referential integrity.
This ensures that if one table contains a foreign key, it corresponds to a still-existing primary key
in the other table.
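Referential integrity as described above can be demonstrated with SQLite (note that SQLite only enforces foreign keys when the pragma is enabled). The table and column names are illustrative:

```python
import sqlite3

# Referential integrity demo; table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE employee ("
             "id INTEGER PRIMARY KEY, "
             "dept_id INTEGER REFERENCES dept(id))")   # dept_id is the foreign key
conn.execute("INSERT INTO dept VALUES (1, 'Security')")
conn.execute("INSERT INTO employee VALUES (10, 1)")    # OK: dept 1 exists

violation = None
try:
    conn.execute("INSERT INTO employee VALUES (11, 99)")  # dept 99 does not exist
except sqlite3.IntegrityError as err:
    violation = str(err)
```

The second insert fails because the foreign key points at a primary key that does not exist, which is exactly the guarantee referential integrity provides.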

 Hierarchical

 Object-oriented – combine data with functions in an object-oriented framework. Normal


databases just contain data
 Flat file

Database Normalization

 Rules that remove redundant data and improve the integrity and availability of the database
 Three rules:
 First Normal Form (1NF) – divide data into tables
 Second Normal Form (2NF) – move data that is partially dependent on the primary key to
another table (everything in the table has to be on the same topic)
 Third normal Form (3NF) – remove data that is not dependent on the primary key

8.2 Identify and apply security controls in development environments

8.3 Assess the effectiveness of software security

8.4 Assess security impact of acquired software

8.5 Define and apply secure coding guidelines and standards

 During software testing, APIs, User Interfaces (UIs) and physical interfaces are tested

Application Development Methods

 Waterfall - has a feedback loop that allows progress one step backwards or forwards.
o Emphasis on early documentation
o The Modified Waterfall Model adds validation/verification to the process

 Spiral – improves on the two previous models because each step of the process goes through the
entire development lifecycle
 Agile – highest priority is satisfying the customer through early and continuous delivery. It does not
prioritize security.

o Agile Manifesto:
 Individuals and interactions over processes and tools
 Working software over comprehensive documentation
 Customer collaboration over contract negotiation
 Responding to change over following a plan
Software Development Lifecycle Phases

1. Initiation – Define need and purpose of project


2. Development/Acquisition – determine security requirements and incorporate them into
specifications
3. Implementation – install controls, security testing, accreditation
4. Operation – backups, training, key management, audits and monitoring etc
5. Disposal – archiving and media sanitization

Software Escrow

 Third party archives source code


 Source code is revealed if product is abandoned
 Protects the purchaser should the vendor go out of business

DevOps

 Old system had strict separation of duties between devs, quality assurance and production
 DevOps is more agile with everyone working together in the entire service lifecycle

Maturity Models

 Software Capability Maturity Model (SW-CMM) – states all software development matures through
phases in a sequential fashion. Intends to improve maturity and quality of software by implementing
an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes.
1. Initial – developers are unorganized with no real plan. No defined software development process
2. Repeatable – lifecycle management processes are introduced. Code is reused.
3. Defined – developers operate according to a set of documented processes. All actions occur within
the constraints of those processes.
4. Managed – Quantitative measures are used to understand the development process.
5. Optimizing – Processes for defect prevention, change management, and process change are
used.

Object Oriented Programming

 Java, C++, etc. Objects contain data and methods. Objects provide data hiding.
 Object – account, employee, customer, whatever
 Method – actions on an object
 Class – think of a blueprint. Defines the data and methods the object will contain. Does not contain
data or methods itself, but instead defines those contained in objects
 Polymorphism – objects can take on different forms. This is common among malware that modifies
its code as it propagates to avoid detection.
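The class/object/method/polymorphism terms above map directly onto a few lines of Python. The account example is illustrative:

```python
# Class = blueprint; object = instance holding data; method = action on the object.
# Polymorphism: the same method call takes a different form depending on the class.
class Account:
    def __init__(self, balance: int):
        self.balance = balance          # data lives in the object, not the class

    def monthly_fee(self) -> int:
        return 10

class PremiumAccount(Account):
    def monthly_fee(self) -> int:       # same call, different behavior
        return 0

accounts = [Account(100), PremiumAccount(100)]
fees = [a.monthly_fee() for a in accounts]
```

The caller invokes `monthly_fee()` identically on both objects; each object supplies its own form of the method.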

Coupling and Cohesion


 Coupling – how much modules depend on each other
 Cohesion – refers to how the elements of a model belong together. High cohesion reduces
duplication of data
 You want low coupling and high cohesion

The description of the database is called a schema, and the schema is defined by a Data Definition
Language (DDL).

A data definition language (DDL) or data description language (DDL) is a syntax similar to a
computer programming language for defining data structures, especially database schemas.

The data definition language concept and name was first introduced in relation to
the Codasyl database model, where the schema of the database was written in a language
syntax describing the records, fields, and sets of the user data model. Later it was used to refer to a
subset of Structured Query Language (SQL) for creating tables and constraints. SQL-
92 introduced a schema manipulation language and schema information tables to query schemas.
These information tables were specified as SQL/Schemata in SQL:2003. The term DDL is also used
in a generic sense to refer to any formal language for describing data or information structures.

Data Definition Language (DDL) statements are used to define the database structure or schema.
● CREATE - to create objects in the database
● ALTER - alters the structure of the database
● DROP - delete objects from the database
● TRUNCATE - remove all records from a table, including all space allocated for the records
● COMMENT - add comments to the data dictionary
● RENAME - rename an object

DCL - Data Control Language:


The Data Control Language (DCL) is a subset of the Structured Query Language (SQL) that allows
database administrators to configure security access to relational databases. It complements
the Data Definition Language (DDL), which is used to add and delete database objects, and the Data
Manipulation Language (DML), which is used to retrieve, insert and modify the contents of a
database. DCL is the simplest of the SQL subsets, as it consists of only three commands: GRANT,
REVOKE, and DENY. Combined, these three commands provide administrators with the flexibility to
set and remove database permissions in an extremely granular fashion.

DML - The Data Manipulation Language (DML) is used to retrieve, insert and modify database
information. These commands will be used by all database users during the routine operation of the
database. Some of the commands are:
INSERT - Allow addition of data
SELECT - Used to query data from the DB, one of the most commonly used commands.
UPDATE - Allow update to existing Data
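The DDL/DML split can be seen in a short SQLite session: one DDL statement builds the schema, then DML statements work with the rows. The table and column names are illustrative:

```python
import sqlite3

# DDL builds the schema; DML (INSERT, SELECT, UPDATE) works with the rows.
# Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")             # DDL: CREATE
conn.execute("INSERT INTO users VALUES ('alice', 'analyst')")         # DML: INSERT
conn.execute("UPDATE users SET role = 'admin' WHERE name = 'alice'")  # DML: UPDATE
role = conn.execute(
    "SELECT role FROM users WHERE name = 'alice'"                     # DML: SELECT
).fetchone()[0]
```

Granting or revoking the right to run these statements would be DCL's job (GRANT/REVOKE), completing the three SQL subsets.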
Quick Review of Policies, Standards, Procedures, baselines & Guidelines:
A standards document would include a list of software that must be installed on each computer. A
comprehensive security program includes the following components:
● Policies
● Procedures
● Standards
● Baselines
● Guidelines
Standards define the technical aspects of a security program, including any hardware and software
that is required. Because standards are mandatory, they must be followed by employees. Standards
should be detailed so that there are no questions as to what technologies should be used. For
example, a standard might state that employees must use a Windows 7 Professional desktop
computer with a multicore processor, 4 gigabytes of memory, and 1 terabyte of disk storage.
Standards help to ensure consistency, which can be beneficial while troubleshooting or when
implementing disaster recovery (DR) procedures.
Policies provide a high-level overview of the company's security posture, creating the basic
framework upon which a company's security program is based. A policy contains mandatory
directives that employees must follow. A well-formed policy should include the following four
elements:
● Purpose – the reason the policy exists
● Scope – the entities that are covered by the policy
● Responsibilities – the things that covered individuals must do to comply with the policy
● Compliance – how to measure the effectiveness of the policy and the consequences for violating
the policy
When creating a policy, you should not be too specific, because the methods for implementing the
policy will change over time. In addition, the policy should not be too technical, because the
technologies that will be covered under the policy will also change over time. By avoiding specific
and technical terms, you will ensure that the policy can survive without updates for a longer period of
time.
Procedures are low-level guides that explain how to accomplish a task. Like policies and standards,
procedures are mandatory. However, unlike policies, procedures are specific, providing as much
detail as possible. A procedure should also be as clear as possible so that there are no questions as
to how each step of the procedure should be followed. For example, a company might have a
procedure for terminating employees. If step 4 of the procedure is to take the terminated employee's
security badge and step 5 is to escort the employee from the building, the employee should not be
escorted from the building until the employee's security badge is surrendered. By following each step
of a procedure, company employees can mitigate social engineering attacks.
Baselines provide a minimum level of security that a company's employees
and systems must meet. Like standards, baselines help to ensure consistency. However, unlike
standards, baselines are somewhat discretionary. For example, a baseline might state that
employees' computers must run Ubuntu 11.10 or a later version. Employees are not required to use
Ubuntu 11.10; they can use any version of Ubuntu as long as it is version 11.10 or higher. If
employees must use a particular version of software, you should create a standard instead.
Guidelines provide helpful bits of advice to employees. Unlike policies, procedures, and standards,
guidelines are discretionary. As a result, employees are not required to follow guidelines, even
though they probably should. For example, a guideline might contain recommendations on how to
avoid malware. If you want to require that employees follow certain steps to avoid malware, you
should create a procedure; if you want to require that employees use a certain brand or version of
antivirus software, you should create a standard.
SDLC mnemonic >> A Dance in the Dark Every Monday, representing Analysis, Design,
Implementation, Testing, Documentation and Execution, and Maintenance
SOC Reports Quick review:
SOC 1 – Internal use; covers controls relevant to financial reporting. Available to the company's auditors and controller's office – not public

SOC 2 – An audit against the five Trust Services criteria: the security, availability, or processing
integrity of a service organization’s system, or the privacy or confidentiality of the information the
system processes. Available to management and all others under strict NDA – not widely public

SOC 3 – A summary of SOC 2 that describes the security, availability, or processing integrity of a
service organization’s system, or the privacy or confidentiality of the information it processes – but it
is sanitized to protect sensitive information, publishing only information that is acceptable to be
public.

SOC 1 Type 1 - This is a SOC 1 report scoped to a particular date (a point in time). Because it covers
a single date, it is not as telling as a Type 2.

SOC 1 Type 2 - This is a SOC 1 report but scoped to a particular date range. Because it covers a
date range instead of an individual date, it is a more thorough and telling report.

Password Types: A dynamic password changes automatically at consistent intervals. Some
dynamic passwords change at regular intervals of time. Other dynamic passwords change
immediately after a user authenticates. The content of dynamic passwords depends upon the
password complexity requirements of the authenticating system. Some dynamic passwords use a
static user-generated password in combination with a dynamically generated code or key.

A passphrase does not typically change automatically at consistent intervals. However, you can
manually change a passphrase. Passphrases are typically long and contain few random characters.
The sentence-like structure of passphrases makes them easy to remember. The length of a
passphrase can help offset a lack of randomness. However, a passphrase can be easily cracked if it
is a string of text that can be found in a book, song, or other written work. Because some systems
limit the number of characters that can be used in a password or require a certain amount of
complexity and randomness in the string, the creation of passphrases is not always an option.

A one-time password does not typically change automatically at consistent intervals. You cannot
typically change a one-time password. A one-time password is a short string of characters that can
be used only once. One-time passwords can be simple words from a dictionary or, depending on the
password complexity requirements of the authenticating system, a combination of characters that
does not form a dictionary word.

A static password does not change automatically at consistent intervals. However, you can manually
change a static password. A static password is a short string of characters that can be used more
than once. Static passwords can be simple words from a dictionary or, depending upon the
password complexity requirements of the authenticating system, a combination of characters that
does not form a dictionary word.
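A dynamic one-time password of the kind described above can be sketched with an HMAC-based counter scheme in the style of HOTP (RFC 4226), using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP-style one-time password: HMAC-SHA1 over a counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation offset
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Each increment of the counter yields a fresh password, so no value is ever reused.
first = hotp(b"12345678901234567890", 0)
second = hotp(b"12345678901234567890", 1)
```

TOTP-style tokens that change at regular time intervals derive the counter from the current time instead of an event count.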
BCP sequence is
1. Develop a BCP policy statement
2. Conduct a BIA
3. Identify preventive controls
4. Develop recovery strategies
5. Develop an IT contingency plan.
6. Perform DRP training and testing
7. Perform BCP/DRP maintenance.
Wireless Configuration Modes: The wireless clients are configured to use infrastructure mode.
Infrastructure mode, which is also called master mode, is the most commonly used 802.11 wireless
mode. In infrastructure mode, wireless clients use an AP to communicate with other clients.

The wireless clients are not configured to use ad hoc mode. Ad hoc mode, which is sometimes
called peer-to-peer mode, enables two 802.11 wireless clients to communicate directly with each
other without an AP. This mode is useful for sharing files from one computer to another. Some
wireless printers are designed to communicate with print clients by using ad hoc mode.

The wireless clients are not configured to use client mode. Client mode, which is also called
managed mode, is used to allow wireless clients to communicate only with the AP. In client mode,
wireless clients cannot communicate with other clients. This mode is useful for centralized
configuration of wireless devices through the AP.

The wireless clients are not configured to use monitor mode. Monitor mode is used to sniff WLAN
traffic. This is a read-only mode; data can be received but not sent by a wireless client in monitor
mode.
BCP Test: A structured walk-through is the method of DR plan testing that is also known as a table-top exercise. DR planning involves storing and maintaining data and hardware in an offsite location
so that the alternate assets can be used in the event that a disaster damages hardware and data at
the primary facility. Properly planning for a disaster can help a company recover as quickly as
possible from the disaster. However, plans must be tested to ensure viability before a disaster
occurs. There are five primary methods of testing a DR plan:
● A read-through test
● A structured walk-through / table-top exercise
● A simulation test
● A parallel test
● A full-interruption test
A read-through test involves the distribution of DR plan documents to other members of the DR
team. Each member of the team reviews the DR plan documents to ensure familiarity with the plan,
to identify obsolete or erroneous material, and to identify any DR plan roles that are missing
personnel assignments.
A structured walk-through test, which is also known as a table-top exercise, is a step beyond a read-through test in that the DR team gathers around a table to role-play the DR plan in person given a
disaster scenario known only to the gathering's moderator. Following the structured walk-through
test, team members review the DR plan to determine the appropriate actions to take given the
scenario.
A simulation test is a step beyond a structured walk-through in that members of the DR team are
asked to develop a response to a given disaster. The input from the DR team is then considered for
testing against a simulated disaster to determine viability.
A parallel test is a step beyond a simulation test in that employees are relocated to the DR plan's
recovery location. At the location, employees are expected to activate the recovery site just as they
might when faced with a genuine disaster. However, operations at the primary site are never shut
down.
A full-interruption test is a step beyond a parallel test in that employees are relocated to the DR
plan's recovery location and a full shutdown of operations occurs at the primary location. Of all the
DR plan testing methods, a full-interruption test is the least likely to occur because it results in
significant business interruption that might be deemed unnecessary by management.

Smoke Sensors: An electrical charge is a method of fire detection that is typically used by smoke
sensors. Both ionization and photoelectric smoke sensors create an electrical charge that can be
interrupted by the presence of smoke. Ionization smoke sensors use a radioactive emission to
create the charge. Photoelectric sensors create the charge by using a light-emitting diode (LED) that
sends a signal to a photoelectric sensor. When smoke interrupts the electrical charge, the sensor will
trigger an alarm. Smoke sensors can generate false positives by detecting dust or other airborne
contaminants as smoke.
Infrared light and ultraviolet light are methods of fire detection that are used by flame sensors, not
smoke detectors. Flame sensors work by detecting either infrared or ultraviolet light from fire.
Therefore, a flame sensor typically requires a line of sight with the source of a fire. By themselves,
flame sensors are appropriate for detecting fire only from specific sources.
Temperature is a method of fire detection that is used by heat sensors, not smoke sensors. Heat
sensors work by measuring the ambient temperature of an area. If the temperature exceeds a
predetermined threshold or if the temperature begins to rise faster than a predetermined rate, the
sensor will trigger an alarm.
SDLC Phases: The system will be tested by an independent third party in the acceptance phase.
The SDLC is a series of steps for managing a systems development project. The phases of the
SDLC include all of the following:
1. Initiation and planning is the development of the idea, which includes documenting the system's
objectives and answering questions that naturally flow from the idea.
2. Functional requirements definition is the process of evaluating how the system must function to fit
end-user needs, including any future functionality that might not exist in the system's first release.
3. System design specification defines how data enters the system, flows through the system, and
is output from the system.
4. Development and implementation involves the creation of the system's source code, including the
selection of a development methodology. In addition, source code is tested, analyzed for efficiency
and security, and documented during this phase.
5. Documentation and common program controls is the process of documenting how data is edited
within the system, the types of logs the system generates, and the system revision process. The
number of controls that are required for a system can vary depending on the system's size, function,
and robustness.
6. Acceptance is the phase at which the system is tested by an independent third party.
The testing process includes functionality tests and security tests, which should verify that the
system meets all the functional and security specifications that were documented in previous
phases.
7. Testing and evaluation controls include guidelines that determine how testing is to
be conducted. The controls ensure that the system is thoroughly tested and does not interfere with
production code or data.
8. Certification is the process in which a certifying officer compares the system against a
set of functional and security standards to ensure that the system complies with those standards.
9. Accreditation is the process by which management approves the system for implementation. Even
if a system is certified, it might not be accredited by management; conversely, even if a system is not
certified, it might still be accredited by management.
10. Implementation is the phase at which the system is transferred from a development environment
to a production environment. The phases of the SDLC are part of a larger process that is known as
the system life cycle (SLC). The SLC includes two additional phases after the implementation phase
of the SDLC:
11. Operations and maintenance support is a post-installation phase in which the system is in use in
a live production environment. During this phase, the system is monitored for weaknesses that were
not discovered during development. In addition, the system's data backup and restore methods are
implemented during this phase.
12. Revisions and system replacement is the process of evaluating the live production system for
any new functionality or features that end users might require from the system. If changes to the
system are required, the revisions should step through the same phases of the SDLC that the
original version followed.
Security Incident Response: This is a seven-stage methodology that is used to mitigate the effects of a
security breach. A proper security incident response contains all of the following phases:
1. Detection involves the discovery of a security incident by using log reviews, detective
access controls, or automated analysis of network traffic.
2. Response is the process of activating the incident response team.
3. Mitigation is the process of containing the incident and preventing further damage.
4. Reporting is the process of documenting the security incident so that management and
law enforcement can be fully informed.
5. Recovery is the process of returning the system to the production environment.
Recovered systems should be carefully monitored to ensure that no traces of the agent that caused
the security breach remain on the system.
6. Remediation is the process of understanding the cause of the security breach and
preventing the breach from occurring again. The remediation phase is valuable because it can
identify flaws in a security system or process as well as ways of preventing similar security
incidents.
7. Lessons Learned is the process of reviewing the incident to determine whether any
improvements in response can be made.
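The seven phases above follow a strict order. As an illustrative sketch only (the enum and function names below are invented for this example, not taken from any standard), the sequence can be modeled so that a response cannot skip or revisit phases out of order:

```python
from enum import IntEnum

class IRPhase(IntEnum):
    """The seven incident response phases, in their prescribed order."""
    DETECTION = 1
    RESPONSE = 2
    MITIGATION = 3
    REPORTING = 4
    RECOVERY = 5
    REMEDIATION = 6
    LESSONS_LEARNED = 7

def next_phase(current: IRPhase) -> IRPhase:
    # Advance exactly one phase; Lessons Learned is the terminal phase
    if current is IRPhase.LESSONS_LEARNED:
        raise ValueError("incident response is already complete")
    return IRPhase(current + 1)

print(next_phase(IRPhase.MITIGATION).name)  # REPORTING
```

Modeling the phases as an ordered type makes the exam point concrete: Mitigation (containment) always precedes Reporting, and Lessons Learned always comes last.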
SCAP Components:
 CVE, Common Vulnerabilities and Exposures, which defines the naming system for describing
vulnerabilities
 CVSS, Common Vulnerability Scoring System, which defines a standard scoring system for
vulnerability severity
 CCE, Common Configuration Enumeration, which defines a naming system for system
configuration problems
 CPE, Common Platform Enumeration, which defines an operating system, application, and
device naming system
 XCCDF, Extensible Configuration Checklist Description Format, which defines a language format
for security checklists
 OVAL, Open Vulnerability and Assessment Language, which defines a language format for
security testing procedures
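CVSS severity is commonly exchanged as a vector string such as CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H. As a minimal sketch (the function name is ours, and this only splits the metrics; it does not compute the numeric score), the metric pairs can be pulled apart like so:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.x vector string into a {metric: value} dict."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError("not a CVSS v3.x vector string")
    # Each remaining segment is a Metric:Value pair, e.g. "AV:N"
    return dict(pair.split(":", 1) for pair in metrics.split("/"))

v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(v["AV"], v["C"])  # N H
```

Reading the vector this way shows how CVSS complements CVE: CVE names the vulnerability, while the CVSS vector records how it was scored.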
Here are the steps of an IS audit:

1. Determine the goals - because everything else hinges on this.
2. Involve the right business unit leaders - to ensure the needs of the business are identified and
addressed.
3. Determine the scope - because not everything can be tested.
4. Choose the audit team - which may consist of internal or external personnel, depending on the
goals, scope, budget, and available expertise.
5. Plan the audit - to ensure all goals are met on time and on budget.
6. Conduct the audit - while sticking to the plan and documenting any deviations therefrom.
7. Document the results - because the wealth of information generated is both valuable and volatile.
8. Communicate the results - to the right leaders in order to achieve and sustain a strong security
posture.

Please feel free to send suggestions to me - sonu2hcl@hotmail.com
