Operating System Learning Outcome 2

The document outlines the concepts of process management, emphasizing the importance of understanding processes in operating systems. It covers various aspects such as process states, concurrency control, process scheduling, and the role of the Process Control Block (PCB) in managing processes. Additionally, it discusses different types of schedulers and scheduling algorithms used to optimize CPU utilization and manage multiple processes efficiently.

7.1.1 Learning Outcome 2: Identifying concepts of process management.

7.1.1.1 Introduction to the learning outcome


A computer technician needs to understand how an operating system and application programs work. To do so, one needs to grasp the concept of a process, since there are a number of user and system processes to be managed in a computer system.

A process can be viewed in terms of two separate and potentially independent concepts: one relating to resource ownership and the other relating to execution (Stallings, 2014). Resource ownership means that a process includes a virtual address space to hold the process image (a collection of data, program, stack and attributes defined in the PCB). A process controls or owns resources such as main memory, I/O channels, I/O devices and data for a while during execution.

Execution of a process entails following an execution path through one or more programs, and this execution can be interleaved with that of other processes. Since a process requires resources during execution, and a computer system executes more than one process at a time, process management must be understood.

7.1.1.2 Performance Standard


7.1.1.2.1 Concepts of processing are identified and explained.
7.1.1.2.2 Process states are described.
7.1.1.2.3 Definition of Concurrency control and types is done.
7.1.1.2.4 Explanation of Process scheduling and types of schedulers is done.
7.1.1.2.5 Definition of Deadlocks.

7.1.1.3 Information Sheet


Identify and explain concepts of processing
Process management entails creating and deleting processes and also providing mechanisms for processes to communicate and synchronize with each other in a computer system. A process can therefore be referred to as a job, or the fundamental unit of work in an operating system. A job is simply a sequence of single programs; interestingly, the terms 'program' and 'job' are often used interchangeably. According to Stallings (2014), a thread is the unit of dispatch, or a lightweight process, while a process or task is the unit of resource ownership.
When an operating system supports multiple concurrent paths of execution within a single process, this is referred to as multithreading. MS-DOS supports a single user process and a single thread. Other operating systems support multiple user processes but only one thread per process.

Figure 4: Threads and processes

Source: Stallings (2014)


Process control block
For an OS to implement and control processes, it requires some attributes associated with each process. These attributes are stored in a data structure called the Process Control Block (PCB), also referred to as the process descriptor. The collection of user program, stack, data section and these attributes forms the process image.
Whenever a process is created in the computer system, a PCB is created too, since it holds all the information about the process needed for its control. The PCB contains:

Process state

When a program is loaded into memory and becomes a process, it can be divided into four sections, namely stack, heap, text and data:

 Stack contains temporary data such as method/function parameters and return addresses.
 Heap is memory dynamically allocated to a process during its runtime.
 Text contains the current activity, represented by the value of the PC (Program Counter) and the contents of the processor's registers.
 Data contains global and static variables.

A process exists in various states such as New, Ready, Running, Waiting and Halted.

Program counter

It indicates the address of the next instruction to be executed for that particular process.

CPU registers

Accumulators, index registers, general-purpose registers and stack pointers make up the CPU registers. When an interrupt occurs, the CPU registers and program counter information must be saved to allow the process to be continued correctly.

CPU scheduling information

It contains scheduling parameters such as process priority, pointers to scheduling queues etc.

Memory management information

It contains information such as the values of the base and limit registers, and the page tables or segment tables.
Accounting information
It contains the amount of CPU and real time used, account numbers, time limits, job or process
numbers etc.

I/O status information


It contains the list of I/O devices allocated to a particular process and a list of open files etc.

Figure 5: Process Control Block. Fields include: process state, process ID, program counter, registers, memory limits, list of open files, etc.
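To make the PCB concrete, here is a minimal sketch of it as a Python dataclass. The field names mirror the attributes listed above but are illustrative only, not the layout of any real kernel's PCB.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PCB:
    """Illustrative Process Control Block; field names are hypothetical."""
    pid: int                       # unique process ID
    state: str = "New"             # New, Ready, Running, Waiting or Terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)        # saved CPU registers
    priority: int = 0              # CPU scheduling information
    memory_limits: tuple = (0, 0)  # base and limit register values
    open_files: List[str] = field(default_factory=list)  # I/O status information
    cpu_time_used: int = 0         # accounting information (ms)

# A PCB is created whenever a process is created:
pcb = PCB(pid=42)
print(pcb.state)   # -> New
```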

Process state
A process/task is created for a program that needs to be executed. The CPU executes instructions in the sequence dictated by the changing values in the program counter register. A trace is a listing of the sequence of instructions that execute for a particular process. The life of a process is bounded by its creation and termination. As a process executes, it changes state, as defined by the current activity of that process. The changing states of a typical process from creation to termination are shown in Figure 6. Typically, processes can be removed from memory when they are in the waiting, stopped or terminated states.
It is important to note that only one process can be running on any processor at any instant, even though many processes may be ready and waiting. A process may exist in one of the following states:
New: This is the state in which a process is created. It is the transient state when the activity has just begun.
Running: This state entails execution of instructions. A process executes until it is interrupted by another process or a user command. It is also referred to as the Resumed state.
Waiting: The process is waiting for some event to occur, for example I/O completion, or for the user to start another application. It is a stop for an activity that is interrupted and ready to go into background mode.
Ready: The process is waiting to be assigned a processor.
Terminated: The process has completed execution. Ideally, processes in this state disappear from the user's view.

Figure 6: Diagram of process states. Transitions: admitted (New → Ready), scheduler dispatch (Ready → Running), interrupt (Running → Ready), I/O or event wait (Running → Waiting), I/O or event completion (Waiting → Ready), exit (Running → Terminated).
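The five-state model in Figure 6 can be captured as a small transition table. This is an illustrative sketch; the event names simply follow the labels in the figure.

```python
# Legal transitions of the five-state model in Figure 6.
TRANSITIONS = {
    ("New", "admitted"): "Ready",
    ("Ready", "scheduler dispatch"): "Running",
    ("Running", "interrupt"): "Ready",
    ("Running", "I/O or event wait"): "Waiting",
    ("Waiting", "I/O or event completion"): "Ready",
    ("Running", "exit"): "Terminated",
}

def next_state(state: str, event: str) -> str:
    """Return the new state, or raise if the transition is illegal."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

s = "New"
for e in ["admitted", "scheduler dispatch", "I/O or event wait",
          "I/O or event completion", "scheduler dispatch", "exit"]:
    s = next_state(s, e)
print(s)   # -> Terminated
```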
Concurrency control and types
Inter-process communication
Inter-process communication (IPC) facilities provide a mechanism to allow processes to communicate and to synchronize their actions so as to prevent race conditions (a race condition arises when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place).

To guard against this, certain mechanisms are necessary to ensure that only one process at a time can manipulate the data. The mechanisms:

1. Critical Section Problem

A critical section is a segment of code in which a process may be changing common variables, writing a file, updating a table, etc. Each process has its critical section, while the remaining portion of its code is referred to as the remainder section.

The important feature of the system is that when one process is executing in its critical section, no other process is allowed to execute in its critical section. Each process must request permission to enter its critical section. A solution to the critical section problem must satisfy the following:

 Mutual Exclusion ensures that if a process is executing in its critical section, then no other process can be executing in its critical section.
 Progress ensures that if no process is executing in its critical section and some processes wish to enter their critical sections, then only processes that are not executing in their remainder sections can participate in deciding which will enter next.
 Bounded Waiting ensures there is a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. This helps prevent busy waiting.
NB: Busy waiting occurs when one process is executing in its critical section and a second process repeatedly tests whether the first is through. It is an acceptable technique only when the anticipated waits are brief; otherwise it wastes CPU cycles. A minimal sketch of such a race, and its repair, follows.
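The sketch below reproduces a race condition with Python threads and then repairs it with a lock acting as the mutual-exclusion mechanism. The function and variable names are made up for the example.

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times: int, use_lock: bool) -> None:
    global counter
    for _ in range(times):
        if use_lock:
            with lock:            # critical section: one thread at a time
                counter += 1
        else:
            counter += 1          # unprotected read-modify-write: a race

for use_lock in (False, True):
    counter = 0
    threads = [threading.Thread(target=deposit, args=(100_000, use_lock))
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("with lock" if use_lock else "no lock", "->", counter)
# Without the lock the final count is often less than 400000 because
# updates are lost; with the lock it is always 400000.
```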
Techniques used to handle Critical section problem (Synchronization)

1. Semaphores

The main principle behind semaphores is that two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specified place until it has received a specific signal. Any complex coordination requirement can be satisfied by an appropriate structure of signals. For signalling, special variables called semaphores are used.

To receive a signal via a semaphore, a process executes the primitive wait; if the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place.

A semaphore can be viewed as a variable that has an integer value upon which three main operations are defined:

1. A semaphore may be initialized to a non-negative value.
2. The wait operation decrements the semaphore value. If the value becomes negative, the process executing the wait is blocked.
3. The signal operation increments the semaphore value. If the value is not positive, a process blocked by a wait operation is unblocked.

A binary semaphore accepts only two values, 0 or 1. For both counting and binary semaphores, a queue is used to hold processes waiting on the semaphore. A strong semaphore specifies the order in which processes are removed from the queue, e.g. using a First In First Out (FIFO) policy; a weak semaphore doesn't specify the order in which processes are removed from the queue. Example of implementation:
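A minimal sketch of such an implementation, using Python's threading primitives. Note that, unlike the definition above (where the value may go negative to count waiters), this version simply blocks while the value is zero, which is the behaviour of common library semaphores such as threading.Semaphore.

```python
import threading

class Semaphore:
    """Counting semaphore with wait() and signal(), as defined above."""
    def __init__(self, value: int = 1):
        assert value >= 0            # 1. initialized to a non-negative value
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:
            while self._value == 0:  # 2. block until a signal is available
                self._cond.wait()
            self._value -= 1

    def signal(self):
        with self._cond:
            self._value += 1         # 3. increment and unblock one waiter
            self._cond.notify()

mutex = Semaphore(1)   # a binary semaphore protecting a critical section
mutex.wait()
# ... critical section ...
mutex.signal()
```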

2. Message passing

This is a technique that provides a means for cooperating processes to communicate when they are not using a shared-memory environment. Processes generally use system calls to send and receive messages, such as: send(receiver process, message) and receive(sender process, message).
A blocking send must wait for the receiver to receive the message. A non-blocking send enables the sender to continue with other processing even if the receiver has not yet received the message; this requires buffering mechanisms to hold messages until the receiver receives them.
Acknowledgement protocols are used in distributed systems to pass messages, since communication can sometimes fail; on a single personal computer, message passing can be assumed flawless.

One complication in distributed systems with send/receive message passing is naming processes unambiguously so that send and receive calls reference the proper processes. Process creation and destruction can be coordinated through some centralized naming mechanism, but this can introduce considerable transmission overhead as individual machines request permission to use new names.
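A minimal sketch of blocking send/receive between two cooperating threads. Python's queue.Queue stands in for the OS-provided send/receive system calls and supplies the buffering mechanism described above; the mailbox name is illustrative.

```python
import queue
import threading

mailbox: "queue.Queue[str]" = queue.Queue(maxsize=4)   # the buffering mechanism

def sender():
    for i in range(3):
        mailbox.put(f"message {i}")   # blocks only if the buffer is full
    mailbox.put("STOP")               # sentinel ending the conversation

def receiver():
    while True:
        msg = mailbox.get()           # receive(sender, message): blocks if empty
        if msg == "STOP":
            break
        print("received:", msg)

threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in threads: t.start()
for t in threads: t.join()
```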

3. Monitors

A monitor is a high-level synchronization construct characterized by a collection of operations specified by the programmer. It is made up of declarations of variables and the bodies of procedures or functions that implement operations on a given type. Its internal variables cannot be used directly by the various processes; the monitor ensures that only one process at a time can be active within it.

Figure 7: Schematic view of a monitor

Processes desiring to enter the monitor when it is already in use must wait. This waiting is automatically managed by the monitor. Data inside the monitor is accessible only to the process inside it; there is no way for processes outside the monitor to access monitor data. This is called information hiding.
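A monitor can be approximated in Python with a class whose every method runs under one internal condition variable, as in this sketch (the BoundedCounter class is invented for illustration):

```python
import threading

class BoundedCounter:
    """Monitor-style object: the internal condition variable ensures only
    one thread is active inside any method at a time, and _count is only
    touched inside the monitor (information hiding)."""
    def __init__(self, limit: int):
        self._cond = threading.Condition()
        self._count = 0
        self._limit = limit

    def increment(self):
        with self._cond:                 # entering the monitor
            while self._count >= self._limit:
                self._cond.wait()        # wait inside the monitor
            self._count += 1
            self._cond.notify_all()

    def decrement(self):
        with self._cond:
            while self._count <= 0:
                self._cond.wait()
            self._count -= 1
            self._cond.notify_all()

c = BoundedCounter(limit=2)
c.increment(); c.increment(); c.decrement()
```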

4. Event Counters

An event counter is an integer counter that does not decrease. It was introduced to enable process synchronization without the use of mutual exclusion. It keeps track of the number of occurrences of events in a particular class of related events. The operations that allow processes to reference event counts are Advance, Read and Await.

Advance(E) signals the occurrence of an event of the class represented by E by incrementing the event count by 1.

Read(E) obtains the value of E; because Advance(E) operations may be occurring during Read(E), it is only guaranteed that the returned value will be at least as great as E was before the read started.

Await(E, V) blocks the process until the value of E becomes at least V; this avoids the need for busy waiting.
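A sketch of an event counter with the three operations described above, built on a Python condition variable. The method is named await_value because await is a reserved word in Python; otherwise the names follow the text.

```python
import threading

class EventCount:
    """Event counter: a non-decreasing integer with advance/read/await."""
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def advance(self):
        with self._cond:
            self._value += 1          # signal one more occurrence
            self._cond.notify_all()

    def read(self) -> int:
        with self._cond:
            return self._value        # at least as great as before the read

    def await_value(self, v: int):
        with self._cond:
            while self._value < v:    # block until E >= V, no busy waiting
                self._cond.wait()

ec = EventCount()
ec.advance()
print(ec.read())   # -> 1
```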

Other techniques used include the following (a sketch of the first appears below):

 Peterson's algorithm,
 Dekker's algorithm and
 Lamport's Bakery algorithm.
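As referenced above, here is a sketch of Peterson's algorithm for two processes. It assumes sequentially consistent memory, so it is illustrative only (it behaves under CPython's GIL, but production code should use locks or semaphores):

```python
import threading

# Peterson's algorithm for two processes (numbered 0 and 1).
flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to defer

def enter_critical(i: int) -> None:
    global turn
    other = 1 - i
    flag[i] = True        # announce intent to enter
    turn = other          # give the other process priority
    while flag[other] and turn == other:
        pass              # busy-wait until it is safe to enter

def leave_critical(i: int) -> None:
    flag[i] = False       # exit protocol: retract intent

counter = 0

def worker(i: int) -> None:
    global counter
    for _ in range(50_000):
        enter_critical(i)
        counter += 1      # critical section
        leave_critical(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)            # -> 100000 (mutual exclusion preserved)
```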

Process scheduling
Process scheduling is required because the CPU can execute only one process at a time. There are various scheduler modules in the OS that execute at their appropriate times. Process scheduling is the activity of switching the CPU among processes, consequently making the computer more productive.

It is the basis of a multiprogrammed operating system. Normally several processes are kept in memory, and when one process has to wait, the OS takes the CPU away from that process and gives the CPU to another process. The process scheduling modules of the OS include:

 CPU scheduler
 CPU dispatcher

CPU Scheduler
The scheduler selects a process from the Ready queue to be executed. Scheduling decisions may take place under the following conditions:

a) When a process terminates.
b) When a process switches from the Running state to the Ready state (e.g. when an interrupt occurs).
c) When a process switches from the Waiting state to the Ready state (e.g. on completion of I/O).
d) When a process switches from the Running state to the Waiting state (e.g. on an Input/Output request).

A scheduling algorithm can be either:

(i) Non-preemptive, where a process keeps the CPU until it releases it.
(ii) Preemptive, where the CPU can be taken away from a process, even when it is in the middle of updating or processing data, and given to another process.

Types of schedulers
Schedulers are classified according to the frequency of their use in the system. A scheduler that is invoked frequently is referred to as a short-term scheduler, while one invoked after longer intervals is referred to as long-term. An operating system has many schedulers, but the three main ones are:

SHORT-TERM SCHEDULER
The main objective of this type of scheduler is to maximize CPU utilization. It allocates processes in the Ready queue to the CPU for immediate processing. It works more frequently and faster because it selects a process quite often, and the selected process executes only for a short time before it goes for input/output operations.

MEDIUM-TERM SCHEDULER
Occasionally the OS needs to remove a process from main memory, for example because the process is waiting for an I/O operation. Such processes may be removed (or suspended) from main memory to the hard disk. Later on, these processes can be reloaded into main memory and continued from where they left off earlier. A suspended process saved in this way is said to be rolled out, and this swapping (in and out) is done by the medium-term scheduler.

LONG-TERM SCHEDULER
This scheduler is responsible for selecting processes from a secondary storage device, such as a disk, and loading them into main memory for execution. It is also known as the job scheduler. The primary objective of the long-term scheduler is to provide a balanced mix of jobs so as to control the number of processes in the ready queue. The long-term scheduler provides good performance by selecting a combination of I/O-bound and CPU-bound processes. It selects jobs from the batch queue and loads them into memory; in memory, these processes belong to the ready queue.

CPU Dispatcher
The CPU dispatcher gives control of the CPU to the process selected by the scheduler. Its functions include:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program.

Scheduling levels

Three important levels of scheduling are considered:

a) High-level scheduling (job scheduling) – determines which jobs should be allowed to compete actively for the resources of the system. It is also called admission scheduling. Once admitted, jobs become processes or groups of processes.
b) Intermediate-level scheduling – determines which processes shall be allowed to compete for the CPU.
c) Low-level scheduling – determines which ready process will be assigned the CPU when it next becomes available, and actually assigns the CPU to this process.

Various CPU scheduling algorithms are available; thus, to determine which is best for a particular situation, the following criteria have been suggested:

(i) CPU utilization – keep the CPU as busy as possible.
(ii) Throughput – the measure of work done, i.e. the number of processes that are completed per time unit.
(iii) Turnaround time – the interval from the time of submission to the time of completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O.
(iv) Waiting time – the sum of the periods spent waiting in the ready queue.
(v) Response time – the measure of time from the submission of a request by the user until the first response is produced. It is the amount of time it takes to start responding, not the time it takes to output the response.
(vi) I/O-boundedness of a process – when a process gets the CPU, does it use the CPU only briefly before generating an I/O request?
(vii) CPU-boundedness of a process – when a process gets the CPU, does it tend to use the CPU until its time quantum expires?
Scheduling Algorithms
Scheduling algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its CPU burst or blocks. Under preemptive algorithms, resources such as CPU cycles are allocated to a process for a limited amount of time and then taken away, and the process is placed back in the ready queue if it still has CPU burst time remaining.

Preemptive scheduling is often based on priority, where the scheduler may preempt a low-priority running process any time a high-priority process enters the ready state.

Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state; the preempted process stays in the ready queue until it gets its next chance to execute. Algorithms based on preemptive scheduling include: Shortest Remaining Time First (SRTF), Priority (preemptive version) and Round Robin (RR).

Non-preemptive scheduling is used when a process terminates or switches from the running to the waiting state. Once the CPU has been allocated to a process, the process holds the CPU until it terminates or reaches a waiting state. Algorithms based on this scheduling technique include: Priority (non-preemptive version) and Shortest Job First (SJF).

1. First Come First Served (FCFS) / First In First Out (FIFO) Scheduling
The process that requests the CPU first is allocated the CPU first. Code for FCFS is simple to write and understand, but under FCFS the average waiting time is often quite long. FCFS is a non-preemptive discipline (once a process has the CPU it runs to completion).

Example

Processes arrive at time 0, with the length of the CPU burst given in milliseconds.

Table 2: FCFS
Process   Burst Time (time of CPU use)
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3, the Gantt chart is:

| P1 | P2 | P3 |
0    24   27   30

Average waiting time = (0 + 24 + 27) / 3 = 17 ms

If the processes arrive in the order P2, P3, P1:

| P2 | P3 | P1 |
0    3    6    30

Average waiting time = (0 + 3 + 6) / 3 = 3 ms

As the example shows, the average waiting time under the FCFS policy is generally not minimal and may vary substantially if processes' CPU-burst times vary greatly. The FCFS algorithm is not ideal for time-sharing systems.
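Both orderings can be checked with a few lines of code. This sketch assumes, as in the example, that all processes arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS (all arrive at time 0)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

for order in ([24, 3, 3], [3, 3, 24]):          # P1,P2,P3 versus P2,P3,P1
    w = fcfs_waiting_times(order)
    print(order, "waits:", w, "average:", sum(w) / len(w))
# -> [24, 3, 3] waits: [0, 24, 27] average: 17.0
# -> [3, 3, 24] waits: [0, 3, 6] average: 3.0
```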

2. Shortest Job First (SJF) Scheduling

When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie. SJF reduces the average waiting time compared with FCFS.

Example

Table 3: SJF
Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3

Gantt chart for SJF:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

Average waiting time = (0 + 3 + 9 + 16) / 4 = 7 ms

The main disadvantage of SJF is that it requires knowledge of the length of the next CPU request, and this information is not usually available. NB: SJF is non-preemptive in nature.

3. Shortest Remaining Time (SRT) Scheduling

SRT is the preemptive counterpart of SJF and is useful in time-sharing. The currently executing process is preempted when a process with a shorter remaining CPU burst joins the queue.

Example

Table 4: SRT
Process   Arrival Time   Burst Time (ms)
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

Process   Waiting Time
P1        0 + (10 – 1) = 9
P2        (1 – 1) = 0
P3        17 – 2 = 15
P4        5 – 3 = 2

Average waiting time = (9 + 0 + 15 + 2) / 4 = 6.5 ms

P1 is started at time 0 since it is the only process in the queue. P2 arrives at time 1; the remaining time for P1 (7 ms) is larger than the burst time of P2 (4 ms), so P1 is preempted and P2 is scheduled. The average waiting time is less than under non-preemptive SJF.

SRT has higher overhead than SJF since it must keep track of the elapsed service time of the running job and must handle occasional preemptions.
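A sketch of an SRT simulator that reproduces Table 4, stepping the clock one millisecond at a time so that every arrival is a potential preemption point (the tie-breaking rule, earlier arrival first, is an assumption):

```python
import heapq

def srt(processes):
    """Shortest Remaining Time (preemptive SJF), simulated 1 ms at a time.
    processes: list of (name, arrival, burst) -> {name: waiting time}."""
    bursts = {n: b for n, a, b in processes}
    remaining = dict(bursts)
    arrivals = sorted(processes, key=lambda p: p[1])
    ready, waits, time, i = [], {}, 0, 0
    while len(waits) < len(processes):
        while i < len(arrivals) and arrivals[i][1] <= time:   # admit arrivals
            n, a, b = arrivals[i]
            heapq.heappush(ready, (remaining[n], a, n))
            i += 1
        if not ready:                      # CPU idle until next arrival
            time = arrivals[i][1]
            continue
        r, a, n = heapq.heappop(ready)     # shortest remaining time first
        remaining[n] -= 1                  # run for one millisecond
        time += 1
        if remaining[n] == 0:
            waits[n] = time - a - bursts[n]   # wait = finish - arrival - burst
        else:
            heapq.heappush(ready, (remaining[n], a, n))
    return waits

waits = srt([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(waits, "average:", sum(waits.values()) / 4)
# -> {'P2': 0, 'P4': 2, 'P1': 9, 'P3': 15} average: 6.5
```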

4. Round Robin (RR) Scheduling

RR is similar to FCFS scheduling, but processes are given a limited amount of CPU time called a time slice or quantum, generally from 10 to 100 ms. If a process has a CPU burst of less than one time quantum, the process releases the CPU voluntarily. The ready queue is treated as a circular queue, and the CPU scheduler goes around the queue, allocating the CPU to each process for up to one time quantum. Using the processes in the example above and a time quantum of 4 ms:

Table 5: Gantt chart

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26

Process   Waiting Time
P1        0 + (16 – 4) = 12
P2        4 – 1 = 3
P3        (8 – 2) + (20 – 12) + (25 – 24) = 15
P4        (12 – 3) + (24 – 16) = 17

Average waiting time = (12 + 3 + 15 + 17) / 4 = 11.75 ms

RR is effective in time-sharing environments. The preemption overheads are kept low by efficient context-switching mechanisms and by providing adequate memory for the processes to reside in main storage.
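A sketch of the RR simulation for the same four processes, confirming the Gantt chart and waiting times above. It assumes, as the chart implies, that a newly arrived process joins the queue ahead of the process just preempted:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin. processes: list of (name, arrival, burst), sorted by
    arrival time. Returns {name: waiting time}."""
    remaining = {n: b for n, a, b in processes}
    waits = {n: -b for n, a, b in processes}   # wait = finish - arrival - burst
    ready, time, i = deque(), 0, 0
    while i < len(processes) or ready:
        if not ready:                          # CPU idle until next arrival
            time = max(time, processes[i][1])
        while i < len(processes) and processes[i][1] <= time:
            ready.append(processes[i]); i += 1
        n, a, b = ready.popleft()
        run = min(quantum, remaining[n])       # run for up to one quantum
        time += run
        remaining[n] -= run
        while i < len(processes) and processes[i][1] <= time:  # new arrivals
            ready.append(processes[i]); i += 1
        if remaining[n] > 0:
            ready.append((n, a, b))            # back of the circular queue
        else:
            waits[n] += time - a               # finished: record waiting time
    return waits

print(round_robin([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)], 4))
# -> {'P1': 12, 'P2': 3, 'P3': 15, 'P4': 17}; average 11.75 ms
```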

5. Priority Scheduling
SJF is a special case of the general priority scheduling algorithm. Priority scheduling is one of the most common scheduling algorithms in batch systems. A priority is associated with each process, and the CPU is allocated to the process with the highest priority; equal-priority processes are scheduled in FCFS order. Priorities are generally drawn from some fixed range of numbers, e.g. 0 – 7 or 0 – 4095. Some systems use low numbers to represent low priority; others use low numbers for high priority. Priorities can be defined either internally or externally.

a) Internally defined priorities use measurable quantities such as time limits, memory requirements, the number of open files, and the ratio of average I/O burst to CPU burst.
b) Externally defined priorities are set by criteria that are external to the OS, e.g. the importance of the process, the type and amount of funds being paid for computer use, the department sponsoring the work, and other political factors.

Priority scheduling can be either preemptive or non-preemptive. A major problem is indefinite blocking or starvation: low-priority jobs may wait indefinitely for the CPU. The aging technique is used to gradually increase the priority of a process that waits in the system for a long time.

For example, consider processes Pa, Pb, Pc and Pd with the following arrival times, execution times and priorities (here a higher number means a higher priority, and the start time is the moment a process first gets the CPU):

Process   Arrival Time   Execute Time   Priority   Start Time
Pa        0              5              1          9
Pb        1              3              2          6
Pc        2              8              1          14
Pd        0              6              3          0

| Pd | Pb | Pa | Pc |
0    6    9    14   22

The wait time for each process is as follows:

Process   Wait Time (Start Time – Arrival Time)
Pa        9 – 0 = 9
Pb        6 – 1 = 5
Pc        14 – 2 = 12
Pd        0 – 0 = 0

The average wait time is (9 + 5 + 12 + 0) / 4 = 6.5. A sketch that reproduces this schedule appears below.
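As mentioned, a sketch that reproduces this schedule with non-preemptive priority scheduling (higher number = higher priority, ties broken FCFS):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling (higher number = higher priority).
    processes: list of (name, arrival, burst, priority).
    Returns {name: (start time, waiting time)}."""
    pending = sorted(processes, key=lambda p: p[1])
    time, result = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                      # CPU idle until next arrival
            time = pending[0][1]
            continue
        # highest priority first; earlier arrival (FCFS) breaks ties
        name, arrival, burst, prio = max(ready, key=lambda p: (p[3], -p[1]))
        result[name] = (time, time - arrival)   # start and waiting time
        time += burst                      # runs to completion (non-preemptive)
        pending.remove((name, arrival, burst, prio))
    return result

print(priority_schedule([("Pa", 0, 5, 1), ("Pb", 1, 3, 2),
                         ("Pc", 2, 8, 1), ("Pd", 0, 6, 3)]))
# -> Pd starts at 0, Pb at 6, Pa at 9, Pc at 14; waits 0, 5, 9, 12 (avg 6.5)
```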

6. Multi-level Queue Scheduling

This is used in situations where processes are easily classified into different groups, e.g. foreground (interactive) processes and background (batch) processes. The ready queue is divided into these groups, and processes are permanently assigned to one queue, generally based on some property of the process such as memory size, process priority or process type.
Each queue has its own scheduling algorithm, e.g. foreground → RR, background → FCFS. In addition, there must be scheduling between the queues, which is commonly implemented as fixed-priority scheduling, e.g. the foreground queue may have priority over the background queue.

7. Multi-level Feedback Queue Scheduling

This allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the highest-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

DEADLOCKS

A deadlock occurs when several processes compete for a finite number of resources such that a process waits for a particular event that will not occur. The event here may be resource acquisition and release.

Example of Deadlocks

Figure 8: A traffic deadlock – vehicles in a busy section of the city.

Figure 9: A simple resource Deadlock

R1
Held by Request

P
P

R2 Held by
Request
This system is deadlocked because each process holds a resource being requested by the other
process and neither process is willing to release the resource it holds. This leads to deadlock.

1. Deadlock in spooling systems

A spooling system is used to improve system throughput by dissociating a program from the slow operating speeds of devices such as printers; e.g. lines of text are sent to a disk before printing starts. If disk space is small and jobs are many, a deadlock may occur.

Deadlock Characterization

Four conditions are necessary for deadlock to exist:

1. Mutual exclusion condition – processes claim exclusive control of the resources they require.
2. Wait-for condition (hold and wait) – processes hold resources already allocated to them while waiting for additional resources.
3. No-preemption condition – resources cannot be removed from the processes holding them until the resources are used to completion.
4. Circular-wait condition – a circular chain of processes exists in which each process holds one or more resources that are requested by the next process in the chain.
Major areas of deadlock research in computing

a) Deadlock Detection
b) Deadlock Recovery
c) Deadlock Prevention
d) Deadlock Avoidance
Deadlock detection

This is the process of determining that a deadlock exists and identifying the processes involved in the deadlock, i.e. determining whether a circular wait exists. To facilitate the detection of deadlocks, resource allocation graphs are used, which indicate resource allocations and requests. These graphs change as processes request resources, acquire them and eventually release them to the OS.

Reduction of resource allocation graphs is a technique used for detecting deadlocks, i.e. determining which processes may complete their execution and which processes will remain deadlocked. If a process's resource requests may all be granted, then we say that the graph may be reduced by that process (the arrows connecting the process and its resources are removed).

If a graph can be reduced by all its processes, there is no deadlock; otherwise, the irreducible processes constitute the set of deadlocked processes in the graph.

NB: the order in which the graph reductions are performed does not matter; the final result will always be the same. A sketch of this reduction procedure follows.
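A sketch of graph reduction in code, treating each resource as a single unit. A process can be reduced when none of the resources it requests is held by another process; whatever cannot be reduced is deadlocked. The dictionaries model the arrows of the graph:

```python
def find_deadlocked(holds, requests):
    """Deadlock detection by resource-allocation-graph reduction.
    holds[p] = resources process p currently holds (one unit each);
    requests[p] = resources p is waiting for.
    Returns the set of irreducible (deadlocked) processes."""
    holds = {p: set(rs) for p, rs in holds.items()}
    requests = {p: set(rs) for p, rs in requests.items()}
    processes = set(holds) | set(requests)
    held = set().union(*holds.values()) if holds else set()
    changed = True
    while changed:
        changed = False
        for p in sorted(processes):
            # p can be reduced if none of its requests is held by another
            others = held - holds.get(p, set())
            if not (requests.get(p, set()) & others):
                held -= holds.get(p, set())    # p finishes, releasing all
                processes.discard(p)
                holds.pop(p, None); requests.pop(p, None)
                changed = True
                break
    return processes                           # irreducible = deadlocked

# Figure 9: each process holds one resource and requests the other.
print(find_deadlocked({"P1": {"R1"}, "P2": {"R2"}},
                      {"P1": {"R2"}, "P2": {"R1"}}))   # -> {'P1', 'P2'}
# One process waits for a resource held by another: reducible, no deadlock.
print(find_deadlocked({"P3": {"R3"}}, {"P4": {"R3"}})) # -> set()
```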

Notations of resource allocation and request graphs.

Figure 10: Notations of resource allocation.
 An arrow from process P1 to resource R1 means P1 is requesting a resource of type R1.
 An arrow from resource R2 to process P2 means a resource of type R2 has been allocated to process P2.
 Process P3 requesting resource R3, which has been allocated to process P4, combines both kinds of arrow.
 A circular wait (deadlock) is shown by a cycle: each process has been allocated a resource that is requested by the next process in the cycle.


Graph Reduction

Given the resource allocation graph, can you determine the possibility of a deadlock?

Figure 11: Graph reduction. A graph containing processes P7, P8 and P9 and resources R6 and R7 is reduced in three steps: I. reducing by P9, II. reducing by P7, III. reducing by P8.

There is no deadlock, since a circular wait doesn't exist.

NB: The deadlock detection algorithm should be invoked at fairly infrequent intervals to reduce the overhead in computation time. If it is invoked at arbitrary points, there may be many cycles in the resource graph, and it would be difficult to tell which of the many deadlocked processes "caused" the deadlock.
Deadlock recovery

Once a deadlock has been detected, several alternatives exist:

a) The Ostrich algorithm (bury your head in the sand and assume things will just work out).
b) Informing the operator, who will deal with it manually, or letting the system recover automatically.

There are two options for breaking a deadlock:

1. Process termination – killing one or more processes involved in the deadlock.

Methods available include:

a) Aborting all deadlocked processes, but at great expense.
b) Aborting one process at a time until the deadlock cycle is eliminated. A scheduling algorithm may be necessary to identify the process to abort.

Factors that may determine which process is chosen:

 Priority of the process.
 Number and type of resources the process has used (is it simple to preempt?).
 Number of resources it needs to complete.
 Time spent and time remaining.
 Is the process interactive or batch oriented?

2. Resource preemption – preempting some resources from processes and giving these resources to other processes until the deadlock cycle is broken. Issues to be addressed include:

 Selecting a victim, i.e. a resource to preempt.
 Rollback – a process from which a resource is preempted needs to be rolled back to some safe state and restarted from that state once the deadlock is over.
 Starvation – guaranteeing that resources will not always be preempted from the same process.
Deadlock prevention

This aims at getting rid of the conditions that cause deadlock. Methods used include:

 Denying mutual exclusion – mutual exclusion should only be allowed for non-shareable resources. For shareable resources, e.g. read-only files, processes attempting to open them should be granted simultaneous access; thus, for shareable resources we do not allow mutual exclusion.
 Denying hold and wait (the wait-for condition) – to deny this, we must guarantee that whenever a process requests a resource it does not hold any other resources. Protocols used include:
i) Requiring each process to request and be allocated all its resources before it begins execution; while waiting for resources to become available it should not hold any resource. This may lead to serious waste of resources.
ii) Allowing a process to request resources only when it has none, i.e. it has to release all its resources before requesting more.

Disadvantages of these protocols:

 Starvation is possible – a process that needs several popular resources may have to wait indefinitely, because at least one of the resources it needs is always allocated to some other process.
 Resource utilization is low, since many of the resources may be allocated but unused for a long period.

 Denying "no preemption" – if a process that is holding some resources requests another resource that cannot be immediately allocated to it (i.e. it must wait), then all resources it currently holds are preempted. The process is restarted only when it can regain its old resources as well as the new ones it is requesting.
 Denying circular wait – all resources are uniquely numbered, and processes must request resources in linearly ascending order. This has been implemented in many operating systems but has some difficulties, i.e.:
- The addition of new resources requires the rewriting of existing programs so as to assign the unique numbers.
- Jobs requiring resources in an order different from the one implemented by the OS must acquire and hold resources possibly long before they are actually used (which leads to waste).
- It affects a user's ability to freely and easily write application code.
Deadlock avoidance
Dijkstra's Banker's Algorithm is the most widely used deadlock avoidance technique. Its goal is to impose less stringent (constraining) conditions than in deadlock prevention in an attempt to get better resource utilization. Avoidance does not precondition the system to remove all possibility of deadlock; instead, whenever a deadlock is approached it is carefully sidestepped.

NB: it is used for multiple units of the same type of resource, e.g. tape drives or printers.

Dijkstra's Banker's Algorithm says that a resource is allocated only when the allocation results in a safe state rather than an unsafe state. A safe state is one in which the total resource situation is such that all users would eventually be able to finish. An unsafe state is one that might eventually lead to a deadlock.

Example of a safe state

Assume a system with 12 equivalent tape drives and 3 users sharing the drives:

Table 6: State I

User        Current Loan   Maximum Need
User (A)    1              4
User (B)    4              6
User (C)    5              8
Available   2

The state is 'safe' because it is still possible for all 3 users to finish: the remaining 2 drives may be given to user (B), who can then run to completion, after which six drives would be released for user (A) and user (C). Thus, the key to a state being safe is that there is at least one way for all users to finish.

Example of an unsafe state

Table 7: State II

User        Current Loan   Maximum Need
User (A)    8              10
User (B)    2              5
User (C)    1              3
Available   1

A three-way deadlock could occur if each process needs to request at least one more drive before releasing any drives to the pool.

NB: An unsafe state does not imply the existence of a deadlock. What an unsafe state does imply is simply that some unfortunate sequence of events might lead to a deadlock.

Example of a safe-state to unsafe-state transition

Table 8: State III (Safe)

User        Current Loan   Maximum Need
User (A)    1              4
User (B)    4              6
User (C)    5              8
Available   2

If User (C) requests an additional resource:

Table 9: State IV (Unsafe)

User        Current Loan   Maximum Need
User (A)    1              4
User (B)    4              6
User (C)    6              8
Available   1

State IV is not necessarily deadlocked, but the state has gone from a safe one to an unsafe one. Thus, under Dijkstra's Banker's Algorithm the mutual exclusion, wait-for and no-preemption conditions are allowed, but deadlock is avoided by granting only those requests that leave the system in a safe state. A sketch of the safety test follows.
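A sketch of the safety test for a single resource type, applied to States I and II above. A state is safe if the users can be ordered so that each one's remaining need fits in the pool available when its turn comes:

```python
def is_safe(loans, max_need, available):
    """Banker's algorithm safety test for a single resource type.
    loans[u] = units currently allocated to user u;
    max_need[u] = u's declared maximum need.
    Returns True if some ordering lets every user finish."""
    loans = dict(loans)
    while loans:
        # find a user whose remaining need fits in the available pool
        for u in loans:
            if max_need[u] - loans[u] <= available:
                available += loans[u]   # u runs to completion and releases
                del loans[u]
                break
        else:
            return False                # no user can finish: unsafe
    return True

# State I (safe): B can finish with the 2 free drives, then A and C.
print(is_safe({"A": 1, "B": 4, "C": 5}, {"A": 4, "B": 6, "C": 8}, 2))  # True
# State II (unsafe): no user's remaining need fits in the 1 free drive.
print(is_safe({"A": 8, "B": 2, "C": 1}, {"A": 10, "B": 5, "C": 3}, 1)) # False
```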

Weaknesses in the Banker's Algorithm

The algorithm has some serious weaknesses that might cause a designer to choose another approach to the deadlock problem:

- It requires that the population of users remains fixed.
- It requires that users state their maximum need in advance; sometimes this may be difficult.
- It requires that the banker grant all requests within a finite time.
- It requires a fixed number of resources to allocate, and this cannot be guaranteed due to maintenance etc.
- It requires that clients (i.e. jobs) repay all loans (i.e. return all resources) within a finite time.
7.1.1.4 Learning Activities
Viewing the status of all processes running on a computer with Windows 10 installed.
Steps
1. Press "Ctrl + Alt + Delete" and then choose "Task Manager". Alternatively, press "Ctrl + Shift + Esc" to open Task Manager directly.
2. To view a list of processes that are running on your computer, click "Processes". Scroll down to view the list of hidden and visible programs.

Figure 12: Task Manager, showing CPU, memory, disk, network and power utilization.

3. Check the description and the name to identify each process. Check the "Memory" column to see the memory capacity consumed by each process.
4. You can get additional information by searching online for the process name.
5. You can right-click on any active process and choose "End process" to end the program.
6. Press "Windows + S" to open the search pane. Type the name of the program you are trying to find and it should appear in the returned results. Right-click the file in the search results and click "Open file location". Browse the folder to see the program that the process belongs to.
7. If you cannot find any information about the program, use the file name to search online. Sites such as processLibrary.com can tell you whether the program is a virus, adware or spyware.
8. You can also disable startup programs to boost your computer's speed.
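The same information can be read from the command line. A minimal sketch using the Windows tasklist utility (it runs only on Windows):

```python
import subprocess

# "tasklist" ships with Windows; /FO CSV asks for comma-separated output.
result = subprocess.run(["tasklist", "/FO", "CSV"],
                        capture_output=True, text=True, check=True)

# Skip the CSV header line and show the first few processes.
for line in result.stdout.splitlines()[1:6]:
    print(line)
```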
7.1.1.5 Self-Assessment
A. What is the first step when installing Windows onto a system that doesn't already have a functioning operating system?
B. Where is the best place to find Windows hardware compatibility information?
C. Describe the method used to add new hardware to a Windows 10 system if Plug and Play does not work.
D. There are several algorithms used to allocate processor time to processes to ensure optimum utilization of the processor and fair distribution of processor time among competing processes. Describe the following algorithms and, using the given table, graphically illustrate the differences in processor scheduling under the THREE algorithms.
a. First Come First Served (FCFS)
b. Shortest Process Next (SPN)
c. Shortest Remaining Time (SRT)

Process   Arrival Time   Required Service
A         0              3
B         2              6
C         4              4
D         5              5
F         8              2
7.1.1.6 Tools, Equipment, Supplies and Materials
 Functional computer
 Windows operating system
 Note pad

7.1.1.7 Model answers to self-assessment

A. What is the first step when installing Windows onto a system that doesn't already have a functioning operating system? Partition the disk, then format it to prepare it to store files.
B. Where is the best place to find Windows hardware compatibility information? The manufacturer's website.
C. Describe the method used to add new hardware to a Windows 10 system if Plug and Play does not work.
1. Run devmgmt.msc.
2. Select the hardware category of the device you want to add, e.g. Network adapters.
3. Then click the Action menu and select "Scan for hardware changes".
D. There are several algorithms used to allocate processor time to processes to ensure optimum utilization of the processor and fair distribution of processor time among competing processes. Describe the following algorithms and, using the given table, graphically illustrate the differences in processor scheduling under the THREE algorithms.
a. First Come First Served (FCFS)
b. Shortest Process Next (SPN)
c. Shortest Remaining Time (SRT)

Process   Arrival Time   Required Service
A         0              3
B         2              6
C         4              4
D         5              5
F         8              2

FCFS (First Come First Served)

 Each process joins the Ready queue.
 When the current process ceases to execute, the process that has been in the Ready queue the longest is selected.

SPN (Shortest Process Next)

 A non-preemptive policy.
 The process with the shortest expected processing time is selected next.
 A short process jumps ahead of longer processes.

SRT (Shortest Remaining Time)

 The preemptive version of the Shortest Process Next policy.
 The scheduler must estimate processing times and choose the process with the shortest remaining time. The sketch below computes the resulting schedules for the given table.
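A sketch that computes all three schedules for the table above; the run orders in the final comments follow from the three policies (SRT ties are broken by earlier arrival, an assumption):

```python
def fcfs(procs):
    """procs: list of (name, arrival, service), sorted by arrival time."""
    t, slices = 0, []
    for n, a, s in procs:
        t = max(t, a)                   # idle until the process arrives
        slices.append((n, t, t + s))
        t += s
    return slices

def spn(procs):
    """Shortest Process Next: non-preemptive, shortest service first."""
    t, pending, slices = 0, list(procs), []
    while pending:
        ready = ([p for p in pending if p[1] <= t]
                 or [min(pending, key=lambda p: p[1])])
        n, a, s = min(ready, key=lambda p: p[2])
        t = max(t, a)
        slices.append((n, t, t + s))
        t += s
        pending.remove((n, a, s))
    return slices

def srt(procs):
    """Shortest Remaining Time: preemptive, simulated 1 unit at a time."""
    rem = {n: s for n, a, s in procs}
    t, slices = 0, []
    while any(rem.values()):
        ready = [(n, a, s) for n, a, s in procs if a <= t and rem[n] > 0]
        if not ready:
            t += 1; continue
        n, a, s = min(ready, key=lambda p: (rem[p[0]], p[1]))
        if slices and slices[-1][0] == n:        # extend the current slice
            slices[-1] = (n, slices[-1][1], t + 1)
        else:
            slices.append((n, t, t + 1))
        rem[n] -= 1
        t += 1
    return slices

table = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 5, 5), ("F", 8, 2)]
for name, fn in [("FCFS", fcfs), ("SPN", spn), ("SRT", srt)]:
    print(name, fn(table))
# FCFS: A 0-3, B 3-9, C 9-13, D 13-18, F 18-20
# SPN : A 0-3, B 3-9, F 9-11, C 11-15, D 15-20
# SRT : A 0-3, B 3-4, C 4-8, F 8-10, B 10-15, D 15-20
```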

7.1.1.8 References

Appel, A. W., & Li, K. (1991). Virtual memory primitives for user programs.

CompTIA. (2013). CompTIA A+ certification. http://certification.comptia.org/getCertified/certifications/a.aspx

Maccabe, A., Bridges, P., Brightwell, R., & Riesen, R. (n.d.). Recent trends in operating systems and their applicability to.

Silberschatz, A., Galvin, P. B., & Gagne, G. (2013). Operating system concepts. Hoboken: John Wiley & Sons, Inc.

Silberschatz, A., Galvin, P. B., & Gagne, G. (2017). Operating system concepts. Wiley.

Stallings, W. (2014). Operating systems: Internals and design principles. Pearson.
