OS Notes - BK
Shubham Kumaram
Part II: Process

6 Introduction to Process
   6.1 Process
   6.2 Process State
   6.3 Process Control Block
   6.4 Process Scheduling
       6.4.1 Scheduling Queues
       6.4.2 Schedulers

7 Interprocess Communication
   7.1 Types of Processes
       7.1.1 Independent Process
       7.1.2 Cooperating Process
   7.2 Shared Memory System
   7.3 Message Passing System
       7.3.1 Direct or Indirect Communication
       7.3.2 Synchronous or Asynchronous Communication
       7.3.3 Automatic or Explicit Buffering

8 Threads
   8.1 Introduction
   8.2 Advantages of Threads
   8.3 Multi-threading Models
       8.3.1 Many-to-One Model
       8.3.2 One-to-One Model
       8.3.3 Many-to-Many Model
   8.4 Thread Libraries

9 Process Scheduling
   9.1 Preemptive and Non-preemptive Scheduling
   9.2 Dispatcher
   9.3 Scheduling Criteria
   9.4 Scheduling Algorithms
       9.4.1 First Come First Served Scheduling
       9.4.2 Shortest Job First Scheduling
       9.4.3 Priority Scheduling
       9.4.4 Round Robin Scheduling
       9.4.5 Multilevel Queue Scheduling
       9.4.6 Multilevel Feedback Queue Scheduling

10 Process Synchronization
   10.1 Important Terms
       10.1.1 Race Condition
       10.1.2 Critical Section Problem
       10.1.3 The Problem of Busy Wait
   10.2 Classical Process Synchronization Problems
       10.2.1 Producers-Consumers with Bounded Buffers
       10.2.2 Dining Philosophers Problem
   10.3 Approaches to Implement Critical Sections
       10.3.1 Algorithmic Approach
       10.3.2 Semaphores
       10.3.3 Test-and-Set (TS) Instruction

11 Deadlocks
   11.1 System Model
   11.2 Deadlock Characterization
       11.2.1 Necessary Conditions
       11.2.2 Resource Allocation Graph
   11.3 Methods for Handling Deadlocks
       11.3.1 Deadlock Prevention
       11.3.2 Deadlock Avoidance
       11.3.3 Deadlock Detection
       11.3.4 Recovery from Deadlock
Part I
Introduction to Operating
System
Chapter 1
Introduction
A computer system can be divided into four components:
1. The Hardware
2. The Operating System
3. Application programs
4. Users
The Hardware (CPU, memory, and I/O devices) provides the basic
computing resources for the system.
The Operating System controls and coordinates the use of the hardware
among the various application programs for the various users.
Next we describe the basic computer architecture that makes it possible to
write a functional operating system.
[Figure: abstract view of a computer system — users 1 through n at the top, the Operating System beneath them, and the Computer Hardware at the bottom.]
1.4 Goals of an OS
1. Efficient use of a computer system
2. User convenience
1.5 Functions/Roles/Operations of an OS
An Operating System implements computational requirements of its users with
the help of resources of the computer system. Its key concerns are described as
follows:
Concern      OS responsibility/Function
Programs     Initiation and termination of programs; providing
             convenient methods so that several programs can
             work towards a common goal.
Resources    Ensuring availability of resources in the system and
             allocating them to programs.
Scheduling   Deciding when, and for how long, to devote the CPU
             to a program.
Protection   Protecting data and programs against interference from
             other users and their programs.
Chapter 2
Classification of Operating
Systems
[Figure: classes of operating systems plotted against efficiency and user convenience — batch processing, multiprogramming, time sharing, real time OS, and distributed OS, arranged from higher efficiency toward greater user convenience.]
Response Time The response time provided to a subrequest is the time
between the submission of the subrequest by the user and the formulation
of the process's response to it.
Turn Around Time The turnaround time of a job, program, or process
is the time from its submission for processing to the time its results
become available to the user.
[Figure: timeline of a batch job — the job is submitted at t0, the batch is formed, batch execution and result printing follow, and the results are returned to the user.]
Time slice The notion of a time slice is used to prevent monopolization of the
CPU by a program. The time slice is the largest amount of CPU time any
program can consume when scheduled to execute on the CPU
Swapping The technique of swapping provides an alternative whereby a com-
puter system can support a large number of users without having to possess
a large memory.
Swapping is the technique of temporarily removing inactive programs from
the memory of a computer system.
Examples of real-time operating systems include Harmony, Maruti, OS-9, and RTEMS.
Chapter 3
Computer System
Architecture
3.1 Introduction
A computer system may be organized in a number of different ways, which we
can categorize roughly according to the number of general-purpose processors
used:
Single-processor systems There is one main CPU capable of executing a
general purpose instruction set, including instructions for user processes.
Almost all systems have other special-purpose processors as well. They
may come in the form of device-specific processors, such as disk, keyboard,
and graphics controllers, or, on mainframes, they may come in the form of
I/O processors.
Multiprocessor Systems The ability to continue providing service
proportional to the level of surviving hardware is called graceful
degradation. Some systems go beyond graceful degradation and are called
fault tolerant, because they can suffer a failure of any single component
and still continue operation.
Advantages of multiprocessor systems include:
1. Increased throughput
2. Economy of scale
3. Increased reliability
4. Graceful degradation, and
5. Fault tolerance
Chapter 4
Operating System Services
4.1 Introduction
An OS provides an environment for the execution of programs. It provides certain
services to programs and to the users of these programs. These OS services are
provided for the convenience of the programmers to make the programming task
easier.
One set of Operating System services provides functions that are helpful to
the users
• User Interface
• I/O operations
• Communications
• Program execution
• File system manipulation
• Error detection
Another set of Operating System functions exist not for helping the user but
rather for ensuring the efficient operation of the system itself.
• Resource Allocation
• Accounting
• Protection and security
ii) Graphical User Interface A GUI allows the user to interact with the
operating system through graphical elements such as windows, icons, and
menus, usually via a mouse or other pointing device.
• Process Control
– End, abort
– Load, execute
– Create process, terminate process
– Get process attributes, set process attributes
– Wait for time
– Wait event, signal event
– Allocate and free memory
• File Management
– Create file, delete file
– Open, close
– Read, write, reposition
– Get file attributes, set file attributes
• Device Management
– Request device, release device
– Read, write, reposition
– Get device attributes, set device attributes
– Logically attach or detach device
• Information Maintenance
[Figure 4.1: Relationship between an API, the system call interface, and the operating system. A user application calls open() in user mode; the call crosses the system call interface into kernel mode, where the implementation of the open() system call runs and then returns to the application.]
• Communications
– create, delete communication connection
– send, receive messages
– transfer status information
– attach or detach remote devices
• I/O system management
• File Management
• Protection system
• Networking
• Command-interpreter system
4.5 Booting
1. Determine the configuration of the system
2. Load the OS programs constituting the kernel into memory
4.6 Kernel
The kernel provides basic services for all other parts of the operating system,
typically including memory management, process management, file management,
and I/O management (i.e., accessing the peripheral devices).
These services are requested by other parts of the Operating System or by
application programs through a specified set of program interfaces referred
to as system calls.
The kernel performs its tasks, such as executing processes and handling
interrupts, in kernel space, whereas everything a user normally does, such
as writing text in a text editor or running programs in a GUI, is done in
user space.
Chapter 5
Operating System Structure
[Figure: memory map — the system area holds the resident part of the OS and a transient area for non-resident parts; the user area holds user programs, including programs swapped out of memory.]
[Figure: simple OS structure — the user interface and user programs run on an OS layer, which runs on the bare machine.]
5.2 Layered Approach
In the layered approach, the operating system is broken up into a number of
layers. The bottom layer (layer 0) is the hardware, and the highest layer (layer
N) is the user interface. The main advantage of this approach is simplicity of
construction and debugging. The layers are selected so that each uses functions
(operations) and services of only lower-level layers.
[Figure: layered OS — layer N (the user interface) at the top, intermediate layers below it, down to layer 0 (the hardware).]
5.3 Microkernels
This method structures the Operating System by removing all non-essential
components from the kernel and implementing them as system and user-level
programs. The result is a smaller kernel. Typically, however, microkernels
provide minimal process and memory management, in addition to a
communication facility.
Part II
Process
Chapter 6
Introduction to Process
6.1 Process
A process is a program in execution. A process is more than the program code,
which is sometimes known as the text section.
[Figure: a process in memory — from the top (max address): the stack, growing down toward the heap, then the data section and the text section.]
[Figure: process state diagram — a New process is admitted to Ready; the scheduler dispatches it to Running; an interrupt returns it to Ready; an I/O or event wait moves it to Waiting, and I/O or event completion returns it to Ready; on exit it becomes Terminated.]
Waiting The process is waiting for some event to occur (such as an I/O com-
pletion)
• Program counter
• CPU registers
• CPU scheduling info
[Figure: a process control block — process state, process number, program counter, registers, memory limits, . . . ]
6.4.2 Schedulers
A process migrates among the various scheduling queues throughout its lifetime.
The Operating System must select, for scheduling purposes, processes from these
queues in some fashion. The selection process is carried out by the appropriate
scheduler.
Long-term Scheduler/Job Scheduler
The long term scheduler or job scheduler selects processes from the job pool
(typically on a disk) and loads them into memory for execution.
Chapter 7
Interprocess
Communication
Reasons for providing an environment that allows process cooperation include:
• Information sharing
• Computation speedup
• Modularity
• Convenience
Cooperating processes require an IPC mechanism that will allow them to
exchange data and information.
There are two fundamental models of Inter-process Communication (IPC):
1. Shared Memory Systems
2. Message Passing System
[Figure: the two IPC models — (a) message passing, where a message M travels from process A through the kernel to process B; (b) shared memory, where processes A and B both access a shared region, with the kernel involved only in setting it up.]
Advantages of Mailboxes
i) Anonymity of Receiver A process sending a message to a mailbox need
not know the identity of the receiver process. If an OS permits the receiver
of a mailbox to be changed dynamically, a process can take over the
functionality of another process.
ii) Classification of Messages A process may create several mailboxes, and
use each mailbox to receive messages of a specific kind. This arrangement
permits easy classification of messages.
iii) Unbounded Capacity The queue’s length is potentially infinite, thus any
number of messages can wait in it; the sender never blocks.
Chapter 8
Threads
8.1 Introduction
Use of processes to provide concurrency within an application incurs high process
switching overhead. Threads provide a low cost method of implementing
concurrency that is suitable for certain kinds of applications.
Process switching overhead has two components:
• Execution-related overhead
• Resource-use-related overhead
8.3 Multi-threading Models
Support for threads may be provided either at the user level, for user threads, or
by the kernel, for kernel threads. User threads are supported above the kernel
and are managed without kernel support, whereas kernel threads are supported
and managed directly by the operating system.
Ultimately, there must exist a relationship between user threads and kernel
threads. There are three common ways of establishing this relationship.
[Figure: many-to-one model — many user threads mapped to a single kernel thread (K).]
[Figure: one-to-one model — each user thread mapped to its own kernel thread.]
[Figure: many-to-many model — many user threads multiplexed onto a smaller or equal number of kernel threads.]
Chapter 9
Process Scheduling
9.2 Dispatcher
Another component involved in the CPU scheduling function is the dispatcher.
The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler. This function involves the following:
• Switching context
• Switching to the user mode
• Jumping to the proper location in the user program to restart that program
The time it takes for the dispatcher to stop one process and start another
running is known as the dispatch latency.
If the processes arrive in the order P1 , P2 , P3 and are served in FCFS order,
we get the Gantt chart as shown in Figure 9.1 .
| P1 | P2 | P3 |
0    24   27   30

Avg waiting time = (0 + 24 + 27)/3 = 17 milliseconds
| P2 | P3 | P1 |
0    3    6    30

If the processes arrive in the order P2, P3, P1, then we get the Gantt chart
in Figure 9.2.

Avg waiting time = (6 + 0 + 3)/3 = 3 milliseconds
This reduction is substantial. Thus, the average waiting time under an FCFS
policy is generally not minimal and may vary substantially if the processes'
CPU burst times vary greatly. The FCFS scheduling algorithm is non-preemptive.
| P4 | P1 | P3 | P2 |
0    3    9    16   24

Avg waiting time = (3 + 16 + 9 + 0)/4 = 7 milliseconds
paid for computer use, the department sponsoring the work, and other political
factors.
Priority scheduling can be either pre-emptive or non-preemptive.
A major problem with priority scheduling algorithms is indefinite blocking
or starvation. A priority scheduling algorithm can leave some low priority
processes waiting indefinitely. A solution to the problem of indefinite blockage
of low-priority processes is aging. Aging is a technique of gradually increasing
the priority of processes that wait in the system for a long time.
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Avg waiting time = 17/3 = 5.66 milliseconds
quantum is too large, RR scheduling degenerates to an FCFS policy. A rule
of thumb is that 80% of the CPU bursts should be shorter than the
time quantum.
[Figure: multilevel feedback queues — a top queue with quantum = 8, below it a queue with quantum = 16, and at the bottom an FCFS queue.]
Chapter 10
Process Synchronization
[Figure: producers and consumers sharing a bounded buffer pool.]
Algorithm 1 Solution outline for a single buffer Producers-Consumers system
using signalling

var
  buffer : . . . ;
  buffer_full : boolean;
  producer_blocked, consumer_blocked : boolean;
Begin
  buffer_full := false;
  producer_blocked := false;
  consumer_blocked := false;

Producer                          Consumer
Parbegin                          Parbegin
  repeat                            repeat
    check_b_empty;                    check_b_full;
    {Produce in the buffer}           {Consume from the buffer}
    post_b_full;                      post_b_empty;
    {Remainder of the cycle}          {Remainder of the cycle}
  until forever                     until forever
ParEnd                            ParEnd
End
10.3.2 Semaphores
A semaphore is a shared integer variable with non-negative values that
can be subjected only to the following operations:
1. Initialization to a non-negative value
2. wait(S): if S > 0, decrement S; otherwise the calling process blocks
until S becomes positive, and then decrements it
3. signal(S): increment S; if any processes are blocked on S, wake one of them
Algorithm 2 Individual Operations for the Producers-Consumers problem
Binary Semaphores
A binary semaphore is a special form of a semaphore used for implementing
mutual exclusion. Hence it is often called a mutex. A binary semaphore is
initialized to 1 and takes only the values 0 and 1 during execution of a program.
Bounded Concurrency
Algorithm 8 illustrates how a set of concurrent processes share five printers.
[Figure: the dining philosophers — philosophers (P) seated around a bowl of rice, with a single fork between each pair of neighbours.]
Algorithm 3 An outline of a Dining Philosopher process
1: repeat
2: successful := false
3: while not successful do
{if both forks are available then lift the forks one at a time}
4: successful := true
5: if successful=false then
6: block(Pi );
7: end if
8: {eat}
Algorithm 4 Dekker’s Algorithm
var
turn : 1. . . 2;
c1,c2 : 0. . . 1;
Begin
c1 := 1;
c2 := 1;
turn := 1;
Process P1 Process P2
Parbegin Parbegin
repeat repeat
c1 := 0; c2 := 0;
while c2 = 0 do while c1 = 0 do
if turn = 2 then if turn = 1 then
c1 := 1; c2 := 1;
while turn=2 do while turn=1 do
{nothing}; {nothing};
end while end while
c1 := 0; c2 := 0;
end if end if
end while end while
{critical section} {critical section}
turn := 2; turn := 1;
c1 := 1; c2 := 1;
{Remainder of the cycle} {Remainder of the cycle}
until forever; until forever
ParEnd ParEnd
End
Algorithm 5 Peterson’s Algorithm
var
flag : array[0. . . 1] of boolean;
turn : 0. . . 1;
Begin
flag[0] := false;
flag[1] := false;
Process P0 Process P1
Parbegin Parbegin
repeat repeat
flag[0] := true; flag[1] := true;
turn := 1; turn := 0;
while flag[1] & turn=1 do while flag[0] & turn=0 do
{nothing}; {nothing};
end while end while
{critical section} {critical section}
flag[0] := false; flag[1] := false;
{Remainder of the cycle} {Remainder of the cycle}
until forever; until forever
ParEnd ParEnd
End
Algorithm 7 Mutual Exclusion
Begin
var
  sem_cs : semaphore := 1;

Process Pi                        Process Pj
Parbegin                          Parbegin
  repeat                            repeat
    wait(sem_cs);                     wait(sem_cs);
    {critical section}                {critical section}
    signal(sem_cs);                   signal(sem_cs);
    {Remainder of the cycle}          {Remainder of the cycle}
  until forever                     until forever
ParEnd                            ParEnd
End
Process P1 . . . . . . Process Pn
Parbegin Parbegin
repeat repeat
wait(printers); wait(printers);
{use a printer} {use a printer}
signal(printers); signal(printers);
{Remainder of the cycle} {Remainder of the cycle}
until forever until forever
ParEnd ParEnd
End
Algorithm 9 Signalling using semaphores
Begin
var
sync : semaphore := 0;
Parbegin
Process Pi Process Pj
... ...
wait(sync); Perform action aj
Perform action ai signal(sync);
ParEnd
End
Begin
Producer Consumer
Parbegin Parbegin
repeat repeat
wait(empty); wait(full);
buffer[0] := . . . ; {i.e. produce} x := buffer[0]; {i.e. consume}
signal(full); signal(empty);
{Remainder of the cycle} {Remainder of the cycle}
until forever; until forever
ParEnd ParEnd
End
Chapter 11
Deadlocks
A deadlock is a situation in which some processes wait for each other’s actions
indefinitely.
1. Request
2. Use
3. Release
4. Circular Wait
[Figure: a resource allocation graph with a deadlock — processes P1, P2, P3 and resources R1, R2, R3, R4.]
This graph contains two cycles:
1. P1 → R1 → P2 → R3 → P3 → R2 → P1
2. P2 → R3 → P3 → R2 → P2
[Figure: a resource allocation graph with a cycle but no deadlock — processes P1, P2, P3, P4 and resources R1, R2.]
Banker’s Algorithm
Banker’s Algorithm uses two tests – a feasibility test and a safety test when a
process makes a request.
[Figure (a): initial state of a resource allocation graph for deadlock avoidance — processes P1, P2 and resources R1, R2, with claim edges shown dashed.]
         Max Need          Allocated         Requested
         R1 R2 R3 R4       R1 R2 R3 R4       R1 R2 R3 R4
P1        2  1  2  1        1  1  1  1        0  0  0  0
P2        2  4  3  2        2  0  1  0        0  1  1  0
P3        5  4  2  2        2  0  2  2        0  0  0  0
P4        0  3  4  1        0  2  1  1        0  0  0  0

Total allotted (R1 R2 R3 R4):  5 3 5 4
Total existing (R1 R2 R3 R4):  6 4 8 5
Active = {P1, P2, P3, P4}
         Max Need          Allocated         Requested
         R1 R2 R3 R4       R1 R2 R3 R4       R1 R2 R3 R4
P1        2  1  2  1        −  −  −  −        0  0  0  0
P2        2  4  3  2        2  1  2  0        0  1  1  0
P3        5  4  2  2        2  0  2  2        0  0  0  0
P4        0  3  4  1        0  2  1  1        0  0  0  0

Simulated allotted (R1 R2 R3 R4):  4 3 5 3
Total existing (R1 R2 R3 R4):      6 4 8 5
Active = {P2, P3, P4}
[Figure: edges in a resource allocation graph for deadlock avoidance — an assignment edge (R1 → P1), a request edge (P2 → R1), and dashed claim edges from P1 and P2 to R2.]
         Max Need          Allocated         Requested
         R1 R2 R3 R4       R1 R2 R3 R4       R1 R2 R3 R4
P1        2  1  2  1        −  −  −  −        0  0  0  0
P2        2  4  3  2        2  1  2  0        0  1  1  0
P3        5  4  2  2        2  0  2  2        0  0  0  0
P4        0  3  4  1        −  −  −  −        0  0  0  0

Simulated allotted (R1 R2 R3 R4):  4 1 4 2
Total existing (R1 R2 R3 R4):      6 4 8 5
Active = {P2, P3}
         Max Need          Allocated         Requested
         R1 R2 R3 R4       R1 R2 R3 R4       R1 R2 R3 R4
P1        2  1  2  1        −  −  −  −        0  0  0  0
P2        2  4  3  2        −  −  −  −        0  1  1  0
P3        5  4  2  2        2  0  2  2        0  0  0  0
P4        0  3  4  1        −  −  −  −        0  0  0  0

Simulated allotted (R1 R2 R3 R4):  2 0 2 2
Total existing (R1 R2 R3 R4):      6 4 8 5
Active = {P3}
[Figure: (a) a resource allocation graph with processes P1–P5 and resources R1–R5; (b) the corresponding wait-for graph, obtained by removing the resource nodes and collapsing the edges.]
Suppose now that process P2 makes one additional request for an instance of
Type C. The request matrix is modified as follows:
Request
A B C
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
We claim that the system is now deadlocked. Although we can reclaim the
resources held by process P0, the number of available resources is not
sufficient to fulfill the requests of the other processes. Thus, a deadlock
exists, consisting of processes P1, P2, P3, and P4.
1. Process Termination