Fundamentals of Cyber Security 2
An operating system is a program that controls the execution of application programs and
acts as an interface between the user of a computer and the computer hardware.
A more common definition is that the operating system is the one program running at all
times on the computer (usually called the kernel), with all else being application programs.
An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices, and information. The operating system correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler, a
memory management module, I/O programs, and a file system.
Functions of an Operating System – An operating system performs the following functions:
1. Provides facilities to create and modify programs and data files using an editor.
2. Provides access to the compiler for translating the user program from a high-level language to machine language.
3. Provides a loader program to move the compiled program code into the computer's memory for execution.
4. Provides routines that handle the details of I/O programming.
I/O System Management –
The module that keeps track of the status of devices is called the I/O traffic controller. Each I/O device has a device handler that resides in a separate process associated with that device.
The I/O subsystem consists of a memory-management component (including buffering, caching, and spooling), a general device-driver interface, and drivers for specific hardware devices.
Time-Sharing (Multitasking) Operating Systems –
Each task is given some time to execute so that all tasks run smoothly. Each user gets a share of CPU time, since many users share a single system; these systems are therefore also known as multitasking systems. The tasks may come from a single user or from different users. The time each task gets to execute is called a quantum; once this time interval is over, the OS switches to the next task.
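To make the quantum idea concrete, here is a minimal round-robin sketch in Python; the task names and burst times are made up for illustration, and no real OS scheduler is this simple.

```python
# A minimal sketch of round-robin time sharing: each task runs for at most
# one quantum, then the OS switches to the next task in the ready queue.
from collections import deque

def round_robin(tasks, quantum):
    """tasks: dict of task name -> remaining burst time; quantum: time slice."""
    ready = deque(tasks.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)        # run for at most one quantum
        timeline.append((name, run))
        remaining -= run
        if remaining > 0:                    # unfinished tasks rejoin the queue
            ready.append((name, remaining))
    return timeline

# Example: three tasks with different burst times and a quantum of 2 units.
print(round_robin({"T1": 5, "T2": 3, "T3": 1}, quantum=2))
```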
Advantages of RTOS:
Maximum Consumption: Maximum utilization of devices and the system, so more output is obtained from all resources.
Task Shifting: The time taken to switch between tasks is very small. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.
Focus on Application: The focus is on running applications, with less importance given to applications waiting in the queue.
Real-time operating systems in embedded systems: Since program sizes are small, an RTOS can also be used in embedded systems, such as those in transport and other domains.
Error Free: These types of systems are designed to be error-free.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
Limited Tasks: Very few tasks run at the same time, and the system concentrates on only a few applications in order to avoid errors.
Use of heavy system resources: An RTOS sometimes requires heavy system resources, which are also expensive.
Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.
Thread Priority: Setting thread priorities is difficult, as these systems are less suited to frequent task switching.
Applications of real-time operating systems include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
The main task an operating system carries out is the allocation of resources and services, such as
the allocation of memory, devices, processors, and information. The operating system also
includes programs to manage these resources, such as a traffic controller, a scheduler, memory
management module, I/O programs, and a file system.
Important functions of an operating system:
1. Security –
The operating system uses password protection and similar techniques to protect user data; it also prevents unauthorized access to programs and user data.
2. Job accounting –
The operating system keeps track of the time and resources used by various tasks and users; this information can be used to track resource usage for a particular user or group of users.
3. Memory Management –
The operating system manages the primary memory (main memory). Main memory is made up of a large array of bytes or words, where each byte or word is assigned a certain address. Main memory is fast storage and can be accessed directly by the CPU. For a program to be executed, it must first be loaded into main memory. An Operating System performs the following activities for memory management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program, which memory addresses have already been allocated, and which have not yet been used. In multiprogramming, the OS decides the order in which processes are granted access to memory, and for how long. It allocates memory to a process when the process requests it and deallocates the memory when the process has terminated or is performing an I/O operation.
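As an illustration of the bookkeeping described above, the following sketch (hypothetical and first-fit only; real allocators are far more elaborate) tracks which blocks of a fixed memory area belong to which process and reclaims them when the process terminates.

```python
# A minimal sketch of how an OS might track which regions of primary
# memory are allocated to which process and which are free.
class MemoryManager:
    def __init__(self, size):
        # Each entry: (start, length, owner); owner is None for free blocks.
        self.blocks = [(0, size, None)]

    def allocate(self, pid, length):
        for i, (start, blen, owner) in enumerate(self.blocks):
            if owner is None and blen >= length:            # first-fit policy
                self.blocks[i] = (start, length, pid)
                if blen > length:                           # keep leftover space free
                    self.blocks.insert(i + 1, (start + length, blen - length, None))
                return start
        raise MemoryError("no free block large enough")

    def free(self, pid):
        # Deallocate every block owned by the terminating process.
        self.blocks = [(s, l, None) if o == pid else (s, l, o)
                       for s, l, o in self.blocks]

mm = MemoryManager(1024)
addr = mm.allocate("P1", 256)   # P1 requests 256 bytes
mm.free("P1")                   # memory reclaimed when P1 terminates
```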
4. Processor Management –
In a multi-programming environment, the OS decides the order in which processes have
access to the processor, and how much processing time each process has. This function of OS
is called process scheduling. An Operating System performs the following activities for
processor management.
It keeps track of the status of processes; the program that performs this task is known as the traffic controller. It allocates the CPU (processor) to a process and de-allocates the processor when the process no longer requires it.
5. Device Management –
An OS manages device communication via the respective device drivers. It performs the following activities for device management: it keeps track of all devices connected to the system; it designates a program responsible for every device, known as the Input/Output controller; it decides which process gets access to a certain device and for how long; it allocates devices in an effective and efficient way; and it deallocates devices when they are no longer required.
6. File Management –
A file system is organized into directories for efficient or easy navigation and usage. These
directories may contain other directories and other files. An Operating System carries out the
following file management activities. It keeps track of where information is stored, user
access settings and status of every file, and more… These facilities are collectively known as
the file system.
Moreover, Operating System also provides certain services to the computer system in one form
or the other.
The Operating System provides certain services to the users which can be listed in the following
manner:
1. Program Execution: The Operating System is responsible for the execution of all types of
programs whether it be user programs or system programs. The Operating System utilizes
various resources available for the efficient running of all types of functionalities.
2. Handling Input/Output Operations: The Operating System is responsible for handling all sorts of inputs, i.e., from the keyboard, mouse, desktop, etc. The Operating System does all interfacing in the most appropriate manner regarding all kinds of inputs and outputs.
For example, peripheral devices such as mice and keyboards differ in nature, and the Operating System is responsible for handling data transfers between them.
3. Manipulation of File System: The Operating System is responsible for making decisions regarding the storage of all types of data or files, i.e., on a floppy disk, hard disk, pen drive, etc. The Operating System decides how the data should be manipulated and stored.
4. Error Detection and Handling: The Operating System is responsible for detecting any type of error or bug that can occur during any task. A well-secured OS also acts as a countermeasure, preventing breaches of the computer system from external sources and handling them appropriately.
5. Resource Allocation: The Operating System ensures the proper use of all the resources
available by deciding which resource to be used by whom for how much time. All the
decisions are taken by the Operating System.
6. Accounting: The Operating System keeps an account of all the functionalities taking place in the computer system at any time. All details, such as the types of errors that occurred, are recorded by the Operating System.
7. Information and Resource Protection: The Operating System is responsible for using all the information and resources available on the machine in the most protected way. The Operating System must foil any attempt from an external source to compromise any data or information.
All these services are ensured by the Operating System for the convenience of the users to make
the programming task easier. All different kinds of Operating systems more or less provide the
same services.
Protection plays a very crucial role in a multiuser environment, where several users will be
making concurrent use of the computer resources such as CPU, memory etc. It is the duty of the
operating system to provide a mechanism that protects each process from others.
All the items that require protection in a multiuser environment are referred to as objects, and those that want to access these objects are known as subjects. The operating system grants different
'access rights' to different subjects.
These rights may include read, write, execute, append, delete etc.
1. Domain
A domain is a combination of different objects and a set of different 'access rights' that can be
granted to different subjects to operate on each of these objects. An operating system maintains
several such domains with different combinations of access rights. The user processes can
execute in one of those domains and can access the objects in that domain according to the
access rights given to those objects.
Protection domain
A user process executing in domain 0 has access to read from, write into, and execute file 0, and can write to printer P0. Similarly, a process executing in domain 1 has access to read from file 1. Printer P1 is common to both domain 1 and domain 2; processes executing in domain 1 and domain 2 can both access printer P1.
2. Access Matrix
In matrix form, the protection domains described above can be represented as an access matrix, with domains as rows, objects as columns, and the access rights as the entries.
During the execution of a process, it may become necessary for it to access an object, which is in
another domain. If it has a right to access that object it switches to the new domain and accesses
that file. This process is known as domain switching.
The access matrix can be implemented by using either access control lists or capability lists.
Protection matrix
In an ACL, the access matrix is stored column by column by the operating system: for each file, it maintains the list of users and their access rights. The empty entries are discarded.
In capability lists, the access control matrix is sliced horizontally, row by row. This implies that the operating system maintains, for each user, a list of all the objects that the user can access and the ways in which they can be accessed. A combination of ACL and capability list techniques may also be used to design protection mechanisms.
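The following sketch, using the hypothetical domains and objects from the example above, shows the two slicings in code: to_acl stores the matrix by column (one list per object) and to_capabilities stores it by row (one list per domain).

```python
# A minimal sketch of the two ways an access matrix can be stored:
# by column (access control lists) and by row (capability lists).
access_matrix = {
    ("domain0", "file0"):    {"read", "write", "execute"},
    ("domain0", "printer0"): {"write"},
    ("domain1", "file1"):    {"read"},
    ("domain1", "printer1"): {"write"},
    ("domain2", "printer1"): {"write"},
}

def to_acl(matrix):
    """Slice by column: for each object, who may access it and how."""
    acl = {}
    for (domain, obj), rights in matrix.items():
        acl.setdefault(obj, {})[domain] = rights
    return acl

def to_capabilities(matrix):
    """Slice by row: for each domain, which objects it may access and how."""
    caps = {}
    for (domain, obj), rights in matrix.items():
        caps.setdefault(domain, {})[obj] = rights
    return caps

print(to_acl(access_matrix)["printer1"])          # ACL for printer1
print(to_capabilities(access_matrix)["domain0"])  # capabilities of domain0
```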
3. Encryption
It is one of the most powerful and important tools of protection. The process of encryption
involves two steps: encryption of the original data into some other form about which nothing is
known to the third person and decryption of the data into the original form from the encrypted
form.
The most commonly used methods to achieve encryption are: transposition ciphers and
substitution ciphers.
In transposition ciphers, the letters in the original message are not changed; only the order in which they appear in the message is changed. For example, suppose the message 'it is raining' needs to be encrypted. Using one particular transposition cipher, it becomes 'gniniar si ti' in encrypted form.
The set of characters in the encrypted form will be different from the original ones if we use
substitution ciphers. Every letter may be replaced by its previous alphabet, for instance. Now the
message 'it is raining' would become, after encryption, 'hs hr qzhmhmf'.
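The two examples above can be reproduced with a short sketch; the reversal used here is just one possible transposition, and the shift-by-one substitution matches the "previous letter" rule described in the text.

```python
# A minimal sketch of the two classical cipher families described above.
def transposition_encrypt(message):
    # One simple transposition: reverse the order of the characters.
    return message[::-1]

def substitution_encrypt(message):
    # Replace every letter with the previous letter of the alphabet.
    out = []
    for ch in message:
        if ch.isalpha():
            out.append(chr(ord('a') + (ord(ch) - ord('a') - 1) % 26))
        else:
            out.append(ch)                     # leave spaces unchanged
    return "".join(out)

print(transposition_encrypt("it is raining"))  # 'gniniar si ti'
print(substitution_encrypt("it is raining"))   # 'hs hr qzhmhmf'
```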
It is very easy to implement these ciphers for characters. Varied forms of these algorithms can also be used to encrypt bit streams. For instance, a predetermined bit stream may be added to the bits of the original stream at particular positions to obtain the encrypted message. The same bit stream is subtracted at the destination so that the original stream is recovered. This addition and subtraction may be accomplished with simple adder and subtractor circuits.
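A minimal sketch of that idea, using byte-wise modular addition of a made-up key stream in place of hardware adder and subtractor circuits:

```python
# A predetermined key stream is added to the original bytes to encrypt
# and subtracted at the destination to recover them (mod 256 keeps the
# values in byte range).
def encrypt_stream(data: bytes, key: bytes) -> bytes:
    return bytes((d + key[i % len(key)]) % 256 for i, d in enumerate(data))

def decrypt_stream(data: bytes, key: bytes) -> bytes:
    return bytes((d - key[i % len(key)]) % 256 for i, d in enumerate(data))

key = b"\x13\x37"                      # predetermined bit stream (the key)
cipher = encrypt_stream(b"it is raining", key)
assert decrypt_stream(cipher, key) == b"it is raining"
```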
The key idea behind these encryption schemes is that the encryption process must be reversible: once we encrypt the original message into a different form, there must be a way to restore it to its original form.
Note that security systems only provide the frameworks for implementing policies and maintaining stable systems. It is up to administrators and users to enforce such policies successfully.
System Protection in the Operating System
The function of security is to provide a mechanism that implements policies that determine the
computer system’s use of resources. At the time of system creation, some policies are defined,
some are designed by system management and some are defined by system users to protect their
own files and programs. A threat is a program that is malicious in nature and causes the device to
experience adverse effects. Some of the prevalent threats that happen in a system are –
Virus: Viruses are typically small fragments of code inserted into a device. They are very risky and can corrupt data, delete information, crash systems, etc. By replicating themselves, they can also spread further.
Trojan Horse: A Trojan horse is able to secretly access a system’s login data. These can
then be used by a malicious user to access the system as a harmless being and wreak
havoc.
Trap Door: A trap door is a violation of security that may be present in a device without
the users’ knowledge. It can be abused by malicious people to damage the data or files in
a system.
Worm: A worm can bring a machine down by consuming its resources to extreme levels. It can create several copies of itself that claim all available resources and prevent any other process from accessing them. In this way, a worm can shut down a whole network.
Denial of Service: These attacks prevent legitimate users from accessing a device. The attacker overwhelms the device with requests so that it becomes overloaded and other users cannot operate properly.
Each application has different resource-use policies, and these can change over time, so device security is not just a concern of the operating system (OS) designer. Application programmers should also build security mechanisms to protect their systems from misuse. The concept of least privilege dictates that programs, users, and systems are given only enough rights to perform their tasks, so that errors cause the least possible amount of damage.
Each user is typically granted their own account and has only enough rights to edit their own files. The root account should not be used for regular day-to-day activities; the system administrator should also have an ordinary account, and the root account should be reserved only for tasks that require root privileges. Policy is distinct from mechanism: mechanisms decide how to do something, while policies determine what to do. Policies change over time and from place to place. For the flexibility of the system, the separation of mechanism and policy is essential.
The various techniques that can provide protection and security for various computer systems
are:
Authentication: This deals with identifying every user in the system and ensuring that they are who they claim to be. The operating system ensures that all users are authenticated before they enter the system.
One-Time Passwords: For authentication purposes, these passwords provide a great deal of protection. Each time a user wants to access the system, a one-time password is generated exclusively for that login; it cannot be used more than once.
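A toy sketch of the single-use property follows (not a real OTP scheme such as TOTP; the six-digit format and in-memory store are assumptions for illustration).

```python
# A minimal sketch of a one-time password: the system issues a random
# password for a single login and invalidates it once it has been used.
import secrets

class OneTimePasswords:
    def __init__(self):
        self.active = {}                         # user -> outstanding OTP

    def issue(self, user):
        otp = f"{secrets.randbelow(10**6):06d}"  # 6-digit random code
        self.active[user] = otp
        return otp

    def verify(self, user, otp):
        ok = secrets.compare_digest(self.active.get(user, ""), otp)
        self.active.pop(user, None)              # consumed after one attempt
        return ok

otps = OneTimePasswords()
code = otps.issue("alice")
print(otps.verify("alice", code))   # True
print(otps.verify("alice", code))   # False: already consumed
```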
A machine can be viewed as a set of processes and objects. The need-to-know principle states that a process should have access only to those objects it requires to accomplish its task, only in the modes for which it needs access, and only during the time frame when it needs access.
6. Wireless intrusion prevention system (WIPS): It monitors a wireless network for suspicious
traffic by analyzing wireless networking protocols.
7. Network behavior analysis (NBA): It examines network traffic to identify threats that
generate unusual traffic flows, such as distributed denial of service attacks, specific forms of
malware and policy violations.
1. Signature-based detection:
A signature-based IDS inspects packets on the network and compares them against pre-built, predefined attack patterns known as signatures.
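A minimal sketch of this matching step, with two made-up signatures standing in for a real rule set:

```python
# Each packet payload is compared against a set of pre-built attack patterns.
SIGNATURES = {
    "sql-injection": b"' OR 1=1 --",
    "path-traversal": b"../../etc/passwd",
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packet = b"GET /index.php?id=' OR 1=1 -- HTTP/1.1"
print(match_signatures(packet))   # ['sql-injection']
```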
To keep data safe from potential violations and cyber-attacks, implementing a security model is an important phase. A security model can be designed using two approaches:
1. Bottom-Up Approach:
The company’s security model is applied by system administrators or people who are working in
network security or as cyber-engineers. The main idea behind this approach is for individuals
working in this field of information systems to use their knowledge and experience in
cybersecurity to guarantee the design of a highly secure information security model.
Key Advantages –
An individual’s technical expertise in their field ensures that every system vulnerability is
addressed and that the security model is able to counter any potential threats possible.
Disadvantage –
Due to the lack of cooperation between senior managers and relevant directives, it is often
not suitable for the requirements and strategies of the organisation.
2. Top-Down Approach:
This type of approach is initiated by the executives of the organization. They:
formulate policies and outline the procedures to be followed;
determine the project's priorities and expected results;
determine liability for every action needed.
This approach is more likely to succeed. It usually provides strong support from top management by committing resources, a consistent preparation and execution mechanism, and opportunities to influence corporate culture.
Security management issues have been handled by organizations in various ways. Traditionally, companies adopted a bottom-up approach, in which the process is initiated by operational employees and their results are subsequently propagated to upper management in the form of proposed policies. Because management has little visibility into the threats, their effects, the resources required, the possible returns, and the security methods involved, this approach has occasionally collapsed suddenly.
On the contrary, the top-down approach is a highly successful reverse view of the whole issue.
Management understands the gravity and starts the process, which is subsequently collected
systematically from cyber engineers and operating personnel.
Restarting a system will usually fix an attack that crashes a server, but flooding attacks are more
difficult to recover from. Recovering from a distributed DoS (DDoS) attack in which attack
traffic comes from a large number of sources is even more difficult.
DoS and DDoS attacks often take advantage of vulnerabilities in networking protocols and how
they handle network traffic. For example, an attacker might overwhelm the service by
transmitting many packets to a vulnerable network service from different Internet Protocol (IP)
addresses.
Malicious actors have different ways of attacking the OSI layers. Using User Datagram Protocol (UDP) packets is one common way; UDP speeds up transmission by sending data before the receiving party has agreed to the exchange. Another common attack method is SYN (synchronization) packet attacks, in which packets are sent to all open ports on a server using spoofed, or fake, IP addresses. UDP and SYN attacks typically target OSI Layers 3 and 4.
Protocol handshakes launched from internet of things (IoT) devices are now commonly used to
launch attacks on Layers 6 and 7. These attacks can be difficult to identify and preempt because
IoT devices are everywhere and each is a discrete intelligent client.
An enterprise that suspects a DoS attack is underway should contact its internet service provider
(ISP) to determine whether slow performance or other indications are from an attack or some
other factor. The ISP can reroute the malicious traffic to counter the attack. It can also use load
balancers to mitigate the severity of the attack.
ISPs also have products that detect DoS attacks, as do some intrusion detection systems (IDSes),
intrusion prevention systems (IPSes) and firewalls. Other strategies include contracting with a
backup ISP and using cloud-based anti-DoS measures.
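One building block behind such mitigations is per-source rate limiting. The sketch below is illustrative only (real ISP and load-balancer equipment is far more sophisticated); it drops requests from a client IP that exceeds its token-bucket allowance.

```python
# A per-client token bucket: requests from sources exceeding their allowed
# rate are dropped, as a firewall or load balancer might do.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity      # tokens/sec, burst size
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                                   # request dropped

buckets = {}                                           # one bucket per client IP
def handle(ip):
    bucket = buckets.setdefault(ip, TokenBucket(rate=5, capacity=10))
    return "served" if bucket.allow() else "rate limited"

results = [handle("203.0.113.7") for _ in range(12)]   # burst from one source
print(results.count("rate limited"), "requests dropped")
```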
There have been instances where attackers have demanded payment from victims to end DoS or
DDoS attacks, but financial profit is not usually the motive behind these attacks. In many cases,
the attackers wish to harm the business or reputation of the organization or individual targeted in
the attack.
Application layer. These attacks generate fake traffic to internet application servers,
especially domain name system (DNS) servers or Hypertext Transfer Protocol (HTTP)
servers. Some application layer DoS attacks flood the target servers with network data;
others target the victim's application server or protocol, looking for vulnerabilities.
Buffer overflow. This type of attack is one that sends more traffic to a network resource than
it was designed to handle.
DNS amplification. In a DNS DoS attack, the attacker generates DNS requests that appear
to have originated from an IP address in the targeted network and sends them to
misconfigured DNS servers managed by third parties. The amplification occurs as the
intermediate DNS servers respond to the fake DNS requests. The responses from
intermediate DNS servers to the requests may contain more data than ordinary DNS
responses, which requires more resources to process. This can result in legitimate users being
denied access to the service.
Ping of death. These attacks abuse the ping protocol by sending request messages with
oversized payloads, causing the target systems to become overwhelmed, to stop responding
to legitimate requests for service and to possibly crash the victim's systems.
State exhaustion. These attacks -- also known as Transmission Control Protocol (TCP)
attacks -- occur when an attacker targets the state tables held in firewalls, routers and other
network devices and fills them with attack data. When these devices incorporate stateful
inspection of network circuits, attackers may be able to fill the state tables by opening more
TCP circuits than the victim's system can handle at once, preventing legitimate users from
accessing the network resource.
SYN flood. This attack abuses the TCP handshake protocol by which a client establishes a
TCP connection with a server. In a SYN flood attack, the attacker directs a high-volume
stream of requests to open TCP connections with the victim server with no intention of
completing the circuits. A successful attack can deny legitimate users access to the targeted
server.
Teardrop. These attacks exploit flaws in how older operating systems (OSes) handled
fragmented IP packets. The IP specification enables packet fragmentation when the packets
are too large to be handled by intermediary routers, and it requires packet fragments to
specify fragment offsets. In teardrop attacks, the fragment offsets are set to overlap each
other. Hosts running affected OSes are then unable to reassemble the fragments, and the
attack can crash the system.
Volumetric. These DoS attacks use all the bandwidth available to reach network resources.
To do this, attackers must direct a high volume of network traffic at the victim's systems.
Volumetric DoS attacks flood a victim's devices with network packets using UDP or Internet
Control Message Protocol (ICMP). These protocols require relatively little overhead to
generate large volumes of traffic, while, at the same time, the victim's network devices are
overwhelmed with network packets, trying to process the incoming malicious datagrams.
What is DDoS and how does it compare to DoS?
Many high-profile DoS attacks are actually distributed attacks, where the attack traffic comes
from multiple attack systems. DoS attacks originating from one source or IP address can be
easier to counter because defenders can block network traffic from the offending source. Attacks
from multiple attacking systems are far more difficult to detect and defend against. It can be
difficult to differentiate legitimate traffic from malicious traffic and filter out malicious packets
when they are being sent from IP addresses seemingly located all over the internet.
In a distributed denial-of-service attack, the attacker may use computers or other network-
connected devices that have been infected by malware and made part of a botnet. DDoS attacks
use command-and-control servers (C&C servers) to control the botnets that are part of the attack.
The C&C servers dictate what kind of attack to launch, what types of data to transmit, and what
systems or network connectivity resources to target with the attack.
One of the earliest documented incidents of this kind was the 1988 Morris worm, written by Robert Tappan Morris, which effectively caused a denial of service across much of the early internet. Those connected to the internet at the time were mostly research and academic institutions, but it was estimated that as many as 10% of the 60,000 systems in the U.S. were affected. Damage was
estimated to be as high as $10 million, according to the U.S. General Accounting Office (GAO),
now known as the Government Accountability Office. Prosecuted under the 1986 Computer
Fraud and Abuse Act (CFAA), Morris was sentenced to 400 community service hours and three
years' probation. He was also fined $10,000.
DoS and DDoS attacks have become common since then. Some recent attacks include the
following:
GitHub. On Feb. 28, 2018, GitHub.com was unavailable because of a DDoS attack. GitHub
said it was offline for under 10 minutes. The attack came "across tens of thousands of
endpoints … that peaked at 1.35 terabits per second (Tbps) via 126.9 million packets per
second," according to GitHub.
Imperva. On April 30, 2019, network security vendor Imperva said it recorded a large DDoS
attack against one of its clients. The attack peaked at 580 million packets per second but was
mitigated by its DDoS protection software, the company said.
Amazon Web Services (AWS). In the AWS Shield Threat Landscape Report Q1 2020, the
cloud service provider (CSP) said it mitigated one of the largest DDoS attacks it had ever seen
in February 2020. It was 44% larger than anything AWS had encountered. The volume of the
attack was 2.3 Tbps and used a type of UDP vector known as a Connection-less Lightweight
Directory Access Protocol (CLDAP) reflection. Amazon said it used its AWS Shield to
counter the attack.
Viruses, worms, Trojans, and bots are all part of a class of software called "malware." Malware
is short for "malicious software," also known as malicious code or "malcode." It is code or
software that is specifically designed to damage, disrupt, steal, or in general inflict some other
"bad" or illegitimate action on data, hosts, or networks.
There are many different classes of malware that have varying ways of infecting systems and
propagating themselves. Malware can infect systems by being bundled with other programs or
attached as macros to files. Others are installed by exploiting a known vulnerability in an
operating system (OS), network device, or other software, such as a hole in a browser that only
requires users to visit a website to infect their computers. The vast majority, however, are
installed by some action from a user, such as clicking an email attachment or downloading a file
from the Internet.
Some of the more commonly known types of malware are viruses, worms, Trojans, bots,
ransomware, backdoors, spyware, and adware. Damage from malware varies from causing minor
irritation (such as browser popup ads), to stealing confidential information or money, destroying
data, and compromising and/or entirely disabling systems and networks.
In addition to damaging data and software residing on equipment, malware has evolved to target
the physical hardware of those systems. Malware should also not be confused with defective
software, which is intended for legitimate purposes but contains errors or "bugs."
Classes of Malicious Software
Two of the most common types of malware are viruses and worms. These types of programs are
able to self-replicate and can spread copies of themselves, which might even be modified copies.
To be classified as a virus or worm, malware must have the ability to propagate. The difference
is that a worm operates more or less independently of other files, whereas a virus depends on a
host program to spread itself. These and other classes of malicious software are described below.
Ransomware
Ransomware is a type of malicious software that threatens to publish the victim's data or
perpetually block access to it unless a ransom is paid. While some simple ransomware may lock
the system in a way that is not difficult for a knowledgeable person to reverse, more advanced
malware uses a technique called cryptoviral extortion, which encrypts the victim's files, making
them inaccessible, and demands a ransom payment to decrypt them.
Viruses
A computer virus is a type of malware that propagates by inserting a copy of itself into and
becoming part of another program. It spreads from one computer to another, leaving infections as
it travels. Viruses can range in severity from causing mildly annoying effects to damaging data
or software and causing denial-of-service (DoS) conditions. Almost all viruses are attached to
an executable file, which means the virus may exist on a system but will not be active or able to
spread until a user runs or opens the malicious host file or program. When the host code is
executed, the viral code is executed as well. Normally, the host program keeps functioning after
it is infected by the virus. However, some viruses overwrite other programs with copies of
themselves, which destroys the host program altogether. Viruses spread when the software or
document they are attached to is transferred from one computer to another using the network, a
disk, file sharing, or infected email attachments.
Worms
Computer worms are similar to viruses in that they replicate functional copies of themselves and
can cause the same type of damage. In contrast to viruses, which require the spreading of an
infected host file, worms are standalone software and do not require a host program or human
help to propagate. To spread, worms either exploit a vulnerability on the target system or use
some kind of social engineering to trick users into executing them. A worm enters a computer
through a vulnerability in the system and takes advantage of file-transport or information-
transport features on the system, allowing it to travel unaided. More advanced worms leverage
encryption, wipers, and ransomware technologies to harm their targets.
Trojans
A Trojan is another type of malware named after the wooden horse that the Greeks used to
infiltrate Troy. It is a harmful piece of software that looks legitimate. Users are typically tricked
into loading and executing it on their systems. After it is activated, it can achieve any number of
attacks on the host, from irritating the user (popping up windows or changing desktops) to
damaging the host (deleting files, stealing data, or activating and spreading other malware, such
as viruses). Trojans are also known to create backdoors to give malicious users access to the
system. Unlike viruses and worms, Trojans do not reproduce by infecting other files nor do they
self-replicate. Trojans must spread through user interaction such as opening an email attachment
or downloading and running a file from the Internet.
Bots
"Bot" is derived from the word "robot" and is an automated process that interacts with other
network services. Bots often automate tasks and provide information or services that would
otherwise be conducted by a human being. A typical use of bots is to gather information, such
as web crawlers, or interact automatically with Instant Messaging (IM), Internet Relay Chat
(IRC), or other web interfaces. They may also be used to interact dynamically with websites.
Bots can be used for either good or malicious intent. A malicious bot is self-propagating malware
designed to infect a host and connect back to a central server or servers that act as a command
and control (C&C) center for an entire network of compromised devices, or "botnet." With a
botnet, attackers can launch broad-based, "remote-control," flood-type attacks against their
target(s).
In addition to the worm-like ability to self-propagate, bots can include the ability to log
keystrokes, gather passwords, capture and analyze packets, gather financial information,
launch denial-of-service (DoS) attacks, relay spam, and open backdoors on the infected host.
Bots have all the advantages of worms, but are generally much more versatile in their infection
vector and are often modified within hours of publication of a new exploit. They have been
known to exploit backdoors opened by worms and viruses, which allows them to access
networks that have good perimeter control. Bots rarely announce their presence with high scan
rates that damage network infrastructure; instead, they infect networks in a way that escapes
immediate notice.
Advanced botnets may take advantage of common internet of things (IoT) devices such as home electronics or appliances to increase automated attacks. Crypto mining is a common nefarious use of these bots.
The following practices can help defend a computer or network against advanced malware:
1. Implementing first-line-of-defense tools that can scale, such as cloud security platforms
2. Adhering to policies and practices for application, system, and appliance patching
3. Employing network segmentation to help reduce outbreak exposures
4. Adopting next-generation endpoint process monitoring tools
5. Accessing timely, accurate threat intelligence data and processes that allow that data to be
incorporated into security monitoring and eventing
6. Performing deeper and more advanced analytics
7. Reviewing and practicing security response procedures
8. Backing up data often and testing restoration procedures—processes that are critical in a
world of fast-moving, network-based ransomware worms and destructive cyber weapons
9. Conducting security scanning of microservice, cloud service, and application
administration systems
10. Reviewing security systems and exploring the use of SSL analytics and, if possible, SSL
decryption
Wire Transfers
The sender pays for the transaction upfront at their bank. This party must provide their bank with
the following information:
the recipient's name, address, contact number, along with any other personal information
required to facilitate the transaction
the recipient's banking information, including their account number and branch number
the receiving bank's information, which includes the institution's name, address, and bank
identifier (routing number or SWIFT code)
the reason for the transfer
Once the information is documented, the wire transfer can begin. The initiating firm sends a
message to the recipient's institution with payment instructions through a secure system, such
as Fedwire or SWIFT. The recipient's bank receives the information from the initiating bank and
deposits its own reserve funds into the correct account. The two banking institutions then settle
the payment on the back end after the money has been deposited.
Wire transfers are important tools for anyone who needs to send money quickly and securely—
especially when they aren't in the same location. They also allow entities to transfer a large
amount of money. Firms do limit the amount that can be transferred, but these caps tend to be
fairly high. For instance, one company may use a wire transfer to pay for a large purchase from
an international supplier.
Non-bank wire transfers do not require bank account numbers. One popular non-bank wire
transfer company is Western Union, whose international money transfer service is available in
more than 200 countries.
Types of Wire Transfers
There are two types of wire transfers: domestic and international. Both can be intra-bank or inter-bank; the former refers to transfers within the same bank, while the latter involves transactions that take place between two different institutions.
Domestic Wire Transfers
These transactions are generally processed on the same day they are initiated and can be received within a few hours. That's because a domestic wire transfer only has to go through a domestic Automated Clearing House (ACH) and can be delivered within a day.
International Wire Transfers
International wire transfers are initiated in one country and settle in another. Senders must
initiate international transfers even when they send money to someone in another country who
has an account at the same bank. These payments require a routing or SWIFT code.
These wire transfers are normally delivered within two business days. This extra day is required
because international wires must clear a domestic ACH and also its foreign equivalent.
Domestic wire transfers can cost between $25 and $35 per transaction or more. International
wire transfers often cost much more. Some receiving institutions also charge a fee, which is
deducted from the total amount received by their customer.
International wire transfers that originate in the United States are monitored by the Office of
Foreign Assets Control, an agency of the U.S. Treasury. The agency makes sure the money sent
overseas is not being used to fund terrorist activities or for money laundering purposes. In
addition, they are also tasked with preventing money from going to countries that are the subject
of sanctions by the U.S. government.
If the agency suspects that any of these scenarios are true, the sending bank has the authority
to freeze the funds and stop the wire transfer from going through.
Wire transfers may be flagged for several reasons, alerting officials to possible wrongdoing by either the recipient or the sender.
Electronic voting is a form of computer-mediated voting in which voters make their selections with the aid of a computer. The voter usually chooses via a touch-screen display,
although audio interfaces can be made available for voters with visual disabilities. To understand
electronic voting, it is convenient to consider four basic steps in an election process:
ballot composition, in which voters make choices; ballot casting, in which voters submit their
ballots; ballot recording, in which a system records the submitted ballots; and tabulation, in
which votes are counted. Ballot casting, recording, and tabulation are routinely done with
computers even in voting systems that are not, strictly speaking, electronic. Electronic voting in
the strict sense is a system where the first step, ballot composition (or choosing), is done with the
aid of a computer.
There are two quite different types of electronic voting technologies: those that use
the Internet (I-voting) and those that do not (e-voting).
I-voting
As use of the Internet spread rapidly in the 1990s and early 21st century, it seemed that the
voting process would naturally migrate there. In this scenario, voters would cast their choices
from any computer connected to the Internet—including from their home. This type of voting
mechanism is sometimes referred to as I-voting. Beyond voting in regularly scheduled elections,
many saw in the emergence of these new technologies an opportunity to transform democracy,
enabling citizens to participate directly in the decision-making process. However, many
countries decided that the Internet was not secure enough for voting purposes. Limited I-voting
trials have been undertaken in some countries, including Estonia, Switzerland, France, and the
Philippines. The case of Estonia is especially enlightening: although the
country’s infrastructure for digital democracy is highly developed, use of the Internet has been at
times massively disrupted by denial-of-service attacks. This has forced the country to maintain
its traditional voting infrastructure alongside the I-voting option.
Besides denial-of-service attacks on the Internet, security experts worry that many personal computers are vulnerable to penetration by various types of malware (malicious software). Such
attacks can be used to block or substitute legitimate votes, thereby subverting the electoral
process in a possibly undetected way.
A third concern about I-voting relates to the possibility of voter coercion and vote selling, which
in principle can more easily occur when voting does not take place in a controlled environment.
However, there is no consensus about the seriousness of this problem in stable democracies.
Furthermore, this objection also applies to absentee ballots, which have been broadly used in the
past, as well as vote-by-mail.
E-voting
Because of security and access concerns, most large-scale electronic voting is currently held in
designated precincts using special-purpose machines. This type of voting mechanism is referred
to as e-voting. There are two major types of e-voting equipment: direct recording electronic
(DRE) machines and optical scanning machines.
A typical DRE is composed of a touch screen connected to a computer. Ballots are presented to
the voters on the touch screen, where they make their choices and cast their ballot. The touch-
screen display can be used to assist the voter in a variety of ways, which include displaying large
fonts and high contrast for those with limited vision, alerting the voter to undervotes, and
preventing overvotes.
A DRE directly records the cast ballots and stores the data in its memory. Thus, a single machine
is used for composition, casting, and recording of votes. The third step, recording of the cast
ballot in a memory device, is invisible to the voter. Assurance that the vote is recorded as cast
relies on testing of the machine’s hardware and software before the election and confidence that
the software running during the election is the same software as the one tested before the
election. Both of these are subjects of much controversy.
Whereas testing for faults in hardware or unintentional errors in software can be highly reliable,
the same is not true for malicious software. Most security professionals believe that an insider
attack at the software development stage could make it to the final product without being
detected (although there is disagreement about the likelihood of such an attack). This problem
is compounded by the fact that source code is usually not made available for public scrutiny.
Cryptographic techniques can partially solve the problem of software authentication. When the
software is evaluated and certified, a cryptographic hash (a short string of bits that serves as a
type of “signature” for the computer code) can be computed and stored. Just before running the
election, the hash is recomputed. Any change in the certified software will cause the two hashes
to be distinct. This technique, however, may fall short of preventing all attacks on
software integrity.
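A minimal sketch of that hash comparison follows, with a hypothetical image file name and a placeholder for the hash stored at certification time.

```python
# Compute a cryptographic hash of the software image and compare it with
# the hash recorded when the software was certified.
import hashlib, os

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# 'voting_software.bin' and CERTIFIED_HASH are hypothetical placeholders.
CERTIFIED_HASH = "hash recorded at certification time goes here"
IMAGE = "voting_software.bin"

if os.path.exists(IMAGE) and sha256_of(IMAGE) != CERTIFIED_HASH:
    print("WARNING: software image differs from the certified version")
```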
Computer viruses can infect a machine during an election. For this to happen, the machine must
somehow interact with another electronic device. Thus, connection to the Internet or to wireless
devices is usually disallowed. However, a voting session is typically initiated through the use of
an activation card. A poll worker, upon verification of eligibility, sets the card to enable one
voting session. After the session the voter returns the card to the poll worker for reuse. At least
one DRE system has been shown to be vulnerable to infection using the activation card. An
infected machine can be made to record votes not as they were cast.
The threat of DREs not recording the votes as cast has led some individuals and organizations to
argue that a paper audit record must be produced for each cast ballot. DRE manufacturers
responded by adding a printer capability to their DREs. The resulting systems produce both an
electronic record and a paper record. However, problems in handling and monitoring the paper
record, both by voters and by election officials, have led to much criticism of these hybrid
systems. Many jurisdictions have discarded them in favour of optical scanning technology.
In some optical scanning systems the voter fills out a paper ballot and inserts it into an electronic
scanning device. Scanners can reject improperly marked ballots, allowing the voter to start over,
thereby reducing discarded votes.
In other optical scanning systems voters compose their votes on a computer screen. Once a ballot
is completed, the computer prints an optical scanning ballot. The voter verifies the ballot and
then inserts it in another device that scans and tabulates the vote. Both these systems are
considered electronic voting systems.
None of the above electronic voting systems is completely secure. Opinions differ widely on
whether the posited threats are realistic enough to warrant forgoing the added functionalities of
electronic voting in favour of the perceived security of nonelectronic voting systems.
Cryptographers, on the other hand, have devised systems that allow voters to verify that their
votes are counted as cast. Additionally, these systems do not enable the voter to prove to a third
party how they voted (thus reducing the risks of vote selling and coercion). These cryptographic
systems, called end-to-end (E2E) secure, are the preferred systems from a security point of view.
Thus, there is considerable academic interest in fully developing these systems. On the other
hand, some people argue against E2E systems on the grounds that their mathematical
underpinnings are not comprehensible to the average voter.
The OWASP Top Ten contains the most critical web application security vulnerabilities, as identified and agreed upon by security experts from around the world.
By being aware of them, how they work, and how to code in a secure way, the applications that we build stand a far better chance of not being breached. Doing so also helps you avoid appearing on any end-of-year hack list or featuring in a list of recent top breaches.
The list, surprisingly, doesn’t change all that often. Sadly, many of the same issues seem to
remain year after year, despite an ever growing security awareness within the developer
community.
This is both a blessing and a curse. As they don’t change often, you can continue to review the
preparedness of your application in dealing with them. Here’s the latest list of the top ten web
application security vulnerabilities.
Let’s assume that you take the OWASP Top Ten seriously and your developers have a security
mindset. Let’s also assume that they self-test regularly to ensure that your applications are not
vulnerable to any of the listed breaches. You may even have a security evangelist on staff.
While these are all excellent, foundational steps, often they’re not enough. This is because of
preconceived biases and filters. Your team lives and breathes the code which they maintain each
and every day. Because of that, over time, they’ll not be able to critique it objectively.
Increasingly, your team will be subjective in their analysis of it.
It’s for this reason that it’s important to get an independent set of eyes on the applications. By
doing so, they can be reviewed by people who’ve never seen them before, by people who won’t
make any assumptions about why the code does what it does, or be biased by anything or anyone
within your organization either.
Additionally, they will be people with specific, professional application security experience, who
know what to look for, including the obvious and the subtle, as well as the hidden things. They’ll
also be abreast of current security issues and be knowledgeable about issues which aren’t
common knowledge yet.
This can be potentially daunting if you’re a young organization, one recently embarking on a
security-first approach. But, setting concerns aside, security audits can help you build secure
applications quicker than you otherwise might.
Now that you’ve gotten a security audit done, you have a security baseline for your application
and have refactored your code, based on the findings of the security audit, let’s step back from
the application.
Invariably something will go wrong at some stage. There’ll be a bug that no one saw (or
considered severe enough to warrant particular attention) — one that will eventually be
exploited.
When that happens, to be able to respond as quickly as possible — before the situation gets out
of hand — you need to have proper logging implemented.
Doing so provides you with information about what occurred, what led to the situation in the first place, and what else was going on at the time. As the saying goes: proper preparation prevents poor performance.
To do so, first, ensure that you’ve sufficiently instrumented your application. Depending on your
software language(s), there is a range of tools and services available,
including Tideways, Blackfire, and New Relic.
Secondly, store the information so that it can be parsed rapidly and efficiently when the time
comes. There is a range of ways to do this. From simple solutions such as the Linux syslog, to
open source solutions such as the ELK stack (Elasticsearch, Logstash, and Kibana), to SaaS
services such as Loggly, Splunk, and PaperTrail.
Regardless of what you use, make sure that the information is being stored and that it’s able to
be parsed quickly and efficiently when the time comes to use it.
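As a minimal, standard-library-only sketch (the commercial services above are alternatives), the following emits one JSON line per event so the logs can be parsed later:

```python
# Structured, parseable application logging with the standard library.
import json, logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()          # could also be a syslog or file handler
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("user login succeeded")           # one parseable line per event
log.warning("payment request rejected by upstream gateway")
```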
Any consideration of application security would be incomplete without taking classic firewalls
and web application firewalls (WAFs) into consideration.
WAFs fall short for a number of reasons, including that they can generate a large number of false
positives and negatives, and can be costly to maintain. However, they do afford some level of
protection to your application.
So, if you want to use a WAF, it is suggested that you either use it in addition to a Runtime Application Self-Protection (RASP) tool, or use an Application Security Management platform such as Sqreen that can provide RASP and in-app WAF modules tuned to your needs, to provide real-time security monitoring and protection.
That way, you can protect your application from a range of perspectives, both internal and
external.
5. Encrypt everything
Now that your application’s been instrumented and has a firewall solution to help protect it, let’s
talk about encryption. And when I say encryption, I don’t just mean using HTTPS and HSTS.
I’m talking about encrypting all the things.
I believe it’s important to always use encryption holistically to protect an application. This might
seem a little Orwellian, but it’s important to consider encryption from every angle, not just the
obvious or the status quo.
It’s great that services such as Let’s Encrypt are making HTTPS much more accessible than it
ever was before. And it’s excellent that such influential companies as Google are rewarding
websites for using HTTPS, but this type of encryption isn’t enough.
It’s important to also make sure that data at rest is encrypted as well. HTTPS makes it next to
impossible for Man In The Middle (MITM) attacks to occur.
But if someone can get to your server (such as a belligerent ex-staffer, dubious systems
administrator, or a government operative) and either clone or remove the drives, then all the
other security is moot.
So, please don’t look at security in isolation, or one part of it. Look at it holistically and consider
data at rest, as well as data in transit.
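As one concrete sketch of encrypting data at rest, the example below assumes the third-party Python cryptography package is installed; key storage and rotation are deliberately out of scope here.

```python
# Encrypt data before it is written to disk so cloned or removed drives
# do not expose plaintext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice, load from a key management system
f = Fernet(key)

plaintext = b"customer records that must never sit on disk in the clear"
token = f.encrypt(plaintext)           # this is what actually gets written to storage

with open("records.enc", "wb") as out:
    out.write(token)

assert f.decrypt(token) == plaintext   # only holders of the key can read it back
```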
6. Harden everything
Now that all traffic and data is encrypted, what about hardening everything? From operating
systems to software development frameworks you need to ensure that they’re sufficiently
hardened.
This is too complex a topic to cover in the amount of space I have available in this article. So
let’s instead consider a concise list of suggestions for both operating systems and frameworks.
Is your web server using modules or extensions that your application doesn’t need?
Is your software language using modules or extensions that it doesn’t need?
Does your software language allow remote code execution, for example via exec or proc calls?
What’s the maximum script execution time set to?
What access does your software language have to the filesystem?
Where is session information being stored?
Are the servers, services (such as MySQL, PostgreSQL, and Redis) and software language configuration files write protected? (A small check for this is sketched after this list.)
Are your servers using security extensions such as SELinux or AppArmor?
Is incoming and outgoing traffic restricted?
What users are allowed to access the server and how is that access managed?
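Here is a minimal sketch automating the write-protection question from the list above; the file paths are examples only, not a definitive inventory.

```python
# Report configuration files that are writable by group or other users.
import os, stat

CONFIG_FILES = ["/etc/mysql/my.cnf", "/etc/redis/redis.conf"]  # example paths

for path in CONFIG_FILES:
    if not os.path.exists(path):
        continue
    mode = os.stat(path).st_mode
    if mode & (stat.S_IWGRP | stat.S_IWOTH):       # writable by group or others
        print(f"{path}: NOT write-protected ({stat.filemode(mode)})")
    else:
        print(f"{path}: OK ({stat.filemode(mode)})")
```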
How do your servers, services, and software language configurations fare? This is a complex
topic. So, here is a short list of best practice guides to refer to:
Ruby on Rails Security Guide
PHP Security Checklist
Ruby Security Handbook
Python Security
Node.js Security Handbook
Hardening the Linux server
In addition to ensuring that your operating system is hardened, is it up to date? It could very well
be hardened against the current version, but if the packages are out of date (and as a result
contain vulnerabilities), then there’s still a problem.
Make sure that your servers are set to update to the latest security releases as they become
available. I’m not suggesting updating each and every package, but at least the security-specific
ones.
Depending on your organization's perspective, you can elect to automate this process. Alternatively, you can review and approve updates individually.
If you're not sure how to configure this, refer to the documentation for your operating system or distribution.
As well as keeping the operating system up to date, you need to keep your application
framework and third party libraries up to date as well.
Some people may scoff at the thought of using a framework. That’s not a debate that I’m going
to engage in today, suffice to say that they both have their place, and when used well, can save
inordinate amounts of time and effort.
Frameworks and third-party software libraries, just like operating systems, have vulnerabilities.
If they’re properly supported, then they will also be rapidly patched and improved. Given that,
it’s important to ensure that you’re using the latest stable version — if at all possible.
Most languages, whether dynamic ones such as PHP, Python, and Ruby, or static ones such
as Go, have package managers. These tools make the process of managing and maintaining
external dependencies relatively painless, as well as being automated during deployment.
Ensure that you take advantage of them and stay with as recent a release as is possible.
This is strongly tied to the previous point. Given the number of attack vectors in play today, such as cross-site scripting, code injection, SQL injection, insecure direct object references, and cross-site request forgery, it's hard both to stay abreast of them and to know what the new ones are.
But such is life. Given the world in which we live and the times in which we operate, if we want
to build secure applications, we need to know this information. Fortunately, there is a range of
ways to get that information in a distilled, readily consumable form.
Sqreen publishes a bi-weekly newsletter that rounds up interesting security articles; you can subscribe to it.
Here is a list of blogs and podcasts you can also refer to regularly to stay up to date:
Blogs
Troy Hunt: The Australian Microsoft Regional Director and MVP. He also tweets
at @troyhunt.
Krebs on Security by Brian Krebs. Brian is an independent investigative journalist,
specializing in cybercrime. He also tweets at @briankrebs.
Dark Reading: one of the most widely read cybersecurity news sites. It reports on attacks
and the key ways to defend yourself against them.
The Guardian’s Data and Computer Security section. An excellent source of the latest
information on what’s happening around the world with respect to security.
Schneier on Security by Bruce Schneier. Bruce’s been writing about security since 2004 and
is the Chief Technology Officer of Resilient and a board member of the EFF (Electronic
Frontier Foundation).
Podcasts
OWASP Podcast
Crypto-Gram Security Podcast
Risky Business
Down the Security Rabbithole
Defensive Security
Finally, perhaps this is a cliché, but never stop learning. You may be all over the current threats
facing our industry. But that doesn’t mean that new threats aren’t either coming or being
discovered.
Given that, make sure that you use the links in this article to keep you and your team up to date
on what’s out there. Then, continue to engender a culture of security-first application
development within your organization.
That way, you’ll always have it as a key consideration, and be far less likely to fall victim to
security or data breaches.
Application security describes security measures at the application level that aim to prevent
data or code within the app from being stolen or hijacked. It encompasses the security
considerations that happen during application development and design, but it also involves
systems and approaches to protect apps after they get deployed.
Application security may include hardware, software, and procedures that identify or minimize
security vulnerabilities. A router that prevents anyone from viewing a computer’s IP address
from the Internet is a form of hardware application security. But security measures at the
application level are also typically built into the software, such as an application firewall that
strictly defines what activities are allowed and prohibited. Procedures can entail things like an
application security routine that includes protocols such as regular testing.
Authentication: Software developers build procedures into an application to ensure that
only authorized users gain access to it. Authentication procedures ensure that a user is who they
say they are. This can be accomplished by requiring the user to provide a user name and
password when logging in to an application. Multi-factor authentication requires more than one
form of authentication—the factors might include something you know (a password), something
you have (a mobile device), and something you are (a thumb print or facial recognition).
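As a minimal sketch of the "something you know" factor, using only the Python standard library, a password can be stored as a salted hash and verified in constant time (a production application would more likely rely on a vetted library such as bcrypt or argon2, and would add the other factors on top):

```python
import hashlib
import os
import secrets

def hash_password(password):
    """Derive a salted hash from the password using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash and compare it to the stored value in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return secrets.compare_digest(candidate, expected)

if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("not the password", salt, stored))              # False
```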
Authorization: After a user has been authenticated, the user may be authorized to access and use
the application. The system can validate that a user has permission to access the application by
comparing the user’s identity with a list of authorized users. Authentication must happen before
authorization so that the application matches only validated user credentials to the authorized
user list.
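Continuing the same illustration, authorization can be as simple as checking the authenticated identity against a permission list; the names used here (ALLOWED_USERS, can_access) are hypothetical:

```python
# Hypothetical permission store mapping authenticated usernames to permitted actions.
ALLOWED_USERS = {
    "alice": {"read", "write"},
    "bob": {"read"},
}

def can_access(username, action):
    """Grant access only if the authenticated user has been given this action."""
    return action in ALLOWED_USERS.get(username, set())

print(can_access("alice", "write"))   # True
print(can_access("bob", "write"))     # False
print(can_access("mallory", "read"))  # False: unknown users receive no access
```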
Encryption: After a user has been authenticated and is using the application, other security
measures can protect sensitive data from being seen or even used by a cybercriminal. In cloud-
based applications, where traffic containing sensitive data travels between the end user and the
cloud, that traffic can be encrypted to keep the data safe.
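In practice, encrypting that traffic usually means TLS. The sketch below (the URL is a placeholder) uses Python’s standard library to open an HTTPS connection with certificate and hostname verification enabled, which is what a default SSL context provides:

```python
import ssl
import urllib.request

# A default context verifies the server certificate chain and hostname.
context = ssl.create_default_context()

# Placeholder endpoint; any HTTPS service you control works the same way.
with urllib.request.urlopen("https://example.com/", context=context) as response:
    print("HTTP status:", response.status)
    print(response.read(200).decode("utf-8", errors="replace"))
```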
Logging: If there is a security breach in an application, logging can help identify who got access
to the data and how. Application log files provide a time-stamped record of which aspects of the
application were accessed and by whom.
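A minimal sketch of such time-stamped access logging with Python’s standard logging module (the logger name and events are made up for illustration):

```python
import logging

# Time-stamped log records; in production these would go to a file or a log service.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    level=logging.INFO,
)
access_log = logging.getLogger("app.access")

def record_access(user, resource, allowed):
    """Write a time-stamped entry noting who accessed what, and whether it was permitted."""
    if allowed:
        access_log.info("user=%s accessed resource=%s", user, resource)
    else:
        access_log.warning("user=%s was denied resource=%s", user, resource)

record_access("alice", "/reports/q3", allowed=True)
record_access("mallory", "/admin", allowed=False)
```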
Application security testing: A necessary process to ensure that all of these security controls
work properly.
With an anticipated 20 billion devices connected to the internet by 2020, cybersecurity has become
a core component of homeland security.
Complicating the threat picture, nation-states have begun to use proxies, and malicious actors
with apparent criminal and nation-state affiliations now engage in online criminal activity. In
2015, an intrusion into a federal agency resulted in the compromise of over 4 million federal
employees’ personnel records, affecting nearly 22 million people. The proliferation of internet-
of-things devices increases the chances that cyberactivity and ransomware incidents—such
as WannaCry and NotPetya—will have serious kinetic consequences.
Central to Homeland Security’s strategy is a better understanding of global cyberthreats and how
they affect the United States. The department plans to work with sector-specific agencies, such
as the Department of Defense and the General Services Administration, and cybersecurity firms
that are not affiliated with the federal government. DHS will develop plans both to address gaps
in its preparedness to handle existing threats and to predict future risks.
DHS will work to reduce organizational and systemic vulnerabilities across the federal
government and empower its stakeholders to better manage their cybersecurity risks. DHS works
with the Office of Management and Budget (OMB) to address risks across agencies. In leading
the effort to secure the federal government, as well as protecting its own information systems,
DHS intends to triage the risks the government faces. Additionally, DHS will continue close
collaboration with the General Services Administration, the National Institute of Standards and
Technology, and those entities responsible for protecting military and intelligence networks.
In order to reduce federal agencies’ vulnerabilities, DHS plans to improve the governance model
for federal cybersecurity, information-security policies, and oversight. DHS will continuously
provide feedback on federal information-technology policies and government-wide policies and
programs that affect cybersecurity. It will further clarify the distribution of responsibilities
between OMB, DHS and other agencies, with the goal of developing and implementing a clear
governance model for federal cybersecurity. DHS will also try to increase compliance with
information-security policies and accountability for missteps, and assess federal government and
individual-agency risks.
Additionally, the department plans to preempt cyberthreats to itself and other government
agencies. DHS plans to centralize protective capabilities and offer additional cybersecurity tools
and services to agencies in response to emerging or identified threats. In addition, DHS will
create performance metrics to measure the effectiveness of its cybersecurity capabilities, tools,
and services. Last, as it increasingly leverages cloud and shared services, DHS will continue to
explore new ways to protect DHS systems that may be scalable across the federal government.
To address significant national risks to critical infrastructure, DHS plans to evaluate its current
cybersecurity risk-management offerings, identify and prioritize gaps in those offerings and in
personnel engagement, and address the gaps by providing tools and services to critical-
infrastructure owners and operators. To leverage field personnel effectively in promoting the
adoption of cybersecurity risk-management best practices, including the National Institute of
Standards and Technology’s Framework for Improving Critical Infrastructure Cybersecurity, DHS is
prepared to engage with officials at the appropriate levels.
To improve the sharing of cyberthreat indicators, defensive measures, and other cybersecurity
information, DHS intends to expand automated mechanisms that receive, analyze, and share
threat information. The department also plans to improve its own ability to analyze, correlate and
enrich cybersecurity information, and improve its information-sharing mechanisms, including
those that allow access to U.S. government information.
DHS intends to maintain relevant expertise, mature existing partnerships, and continue to
integrate resources for the ten critical-infrastructure sectors for which it is responsible. It will
assess and update DHS policies and regulations to address cybersecurity risk, and it will support
each sector in integrating cyber and physical resources.
In the past, DHS has been a leader in integrating traditional law-enforcement methods to
strengthen cybersecurity, as demonstrated through its electronic crimes task forces. DHS further
plans to prevent, disrupt, and counter cybersecurity threats to persons, events, and infrastructure
through strengthening its ability to apply its full range of authorities and implementing detection
and protection measures to appropriately secure key systems and assets.
DHS plans to collaborate with other law enforcement agencies, strengthen its collaboration with
private industry and academia, and bolster its international law enforcement partnerships and
their capabilities for cyber crime investigations and digital forensics.
DHS will invest in cutting-edge technical resources and advanced law enforcement capabilities
for both itself and its partners.
DHS will limit the impact of cyber incidents through coordinated, community-wide response
efforts. When cyber incidents occur, DHS currently assists through both asset response—
technical assistance to affected entities and other at-risk assets—and threat response—
investigating the underlying crimes. DHS plans to implement information-sharing mechanisms
to ensure that asset and threat responders communicate with each other, sector-specific agencies,
and the private sector; in the case of significant cyber incidents, DHS will ensure preparedness
for a coordinated government-wide response.
To better assist victims after cyber incidents, DHS plans to encourage voluntary reporting of
cyber incidents and improve victim notification. As the lead agency for asset response, part of a
Cyber Unified Coordination Group, and a support to the White House-led Cyber Response
Group, DHS provides critical asset-response assistance following cyber incidents. To expand
asset response capabilities and mitigate cyber incidents, DHS plans to establish a common
operating picture across the department and with other stakeholders, and to support emergency
management efforts under the National Response Framework.
To increase coordination between incident responders, DHS will leverage both DHS and non-
DHS investigative resources to provide incident and threat attribution information to federal
incident responders and sector-specific agencies. DHS will also develop holistic assessments of
adversaries, threats, and incidents, increase field-level collaboration, and coordinate federal
response assistance where appropriate.
DHS will support policy and operational efforts that make the “cyber ecosystem” more secure
and reliable. DHS describes the cyber ecosystem as including not only cyberspace—the
interdependent network of information technology infrastructure—but also the people,
environment, norms, and conditions that influence that space. DHS plans to invest in research
and development efforts that support its mission, and to more quickly expand its cyber personnel
programs.
To strengthen the security and reliability of the ecosystem, DHS aims to foster improved
cybersecurity in software, hardware, services, and technologies, and to build more resilient
networks. DHS will support the development of technical, operational, and policy innovations,
and develop solutions to identify and manage supply chain risks for stakeholders. DHS further
plans to engage with stakeholders to enhance the cybersecurity of cloud infrastructure, internet-
of-things products, and other emerging technologies.
Additionally, DHS plans to prioritize research, development, and technology transition activities
that support incident response, information sharing, and other cybersecurity objectives. It will
identify, develop, and transition new capabilities that will enable DHS to protect critical systems,
investigate cyber crimes, and respond to cyber incidents.
DHS also plans to expand international collaboration to advance its objectives and promote an
open, interoperable, secure, and reliable internet. DHS aims to improve international cooperation
and build capacity by sharing best practices, cybersecurity information, expertise, and technical
assistance. It anticipates that the expansion of this international collaboration will result in
shared global approaches to cybersecurity and increased risk management capabilities.
With a critical shortage of cybersecurity talent globally, DHS also endeavors to improve
recruitment, education, training, and retention to develop a world-class cyber workforce. DHS
will continue to support efforts to increase the supply of cybersecurity talent through cyber
education programs and the National Initiative for Cybersecurity Education. It will also continue
to develop and promote cybersecurity training programs, working in particular to drive
approaches to recruitment and retention. DHS plans to develop a cutting-edge network protection
and cyber investigative workforce.
DHS aims to prioritize and evaluate the effectiveness of its cybersecurity programs and activities
in accordance with its Cybersecurity Strategy. It will then identify and address gaps within the
strategy, ultimately ensuring that the cybersecurity programs address the department’s goals and
objectives.