Operating System PPT

The document outlines the goals and functions of an operating system, including process, memory, file, I/O, and security management. It describes various types of operating systems such as single-user, multi-user, batch processing, and real-time systems, along with their characteristics. Additionally, it covers the components of an OS, including user space and kernel, types of kernels, and the process management lifecycle.


Goals of an Operating System

• Simplify the execution of user programs and make solving user problems easier.
• Use computer hardware efficiently.
• Allow sharing of hardware and software resources.
• Make application software portable and versatile.
• Provide isolation, security, and protection among user programs.
• Improve overall system reliability.
Functions/Components of Operating System:

1. Process management
• A program in its execution state is known as a process.
• A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.
• These resources are either given to the process when it is created or allocated to it while it is running.
• A program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity.
The operating system is responsible for the following activities in
process management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling.
2. Memory management
Main memory is a collection of quickly accessible data shared by the CPU and I/O devices.
The central processor reads instructions from main memory (during the instruction-fetch cycle) and both reads and writes data in main memory (during the data-fetch cycle).
The operating system is responsible for the following activities in memory
management:
• Keeping track of which parts of memory are currently being used and by
whom
• Deciding which processes and data to move into and out of memory
• Allocating and deallocating memory space as needed.
3. File-System Management
The operating system is responsible for the following activities with file
management:
• Creating and deleting files
• Creating and deleting directories to organize files
• Supporting primitives for manipulating files and directories
• Backing up files on stable (nonvolatile) storage media.

4. Secondary Storage Management
The operating system is responsible for the following activities with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
5. I/O System Management
The operating system is responsible for the following activities with the I/O subsystem:
• A memory-management component that includes buffering, caching, and spooling
• A general device-driver interface
• Drivers for specific hardware devices

6. Protection and Security
• Mechanisms ensure that files, memory segments, the CPU, and other resources can be operated on only by processes that have received proper authorization from the operating system.
• For example, memory-addressing hardware ensures that a process can execute only within its own address space.
• Protection is a mechanism for controlling the access of processes or users to the resources defined by a computer system.
7. Networking
• A distributed system is a collection of physically separated computer systems that are networked
to provide the users with access to the various resources that the system maintains.
• Access to a shared resource increases computation speed, functionality, data availability, and
reliability.
• Network Operating System (NOS) provides remote access to the users.
• It also allows hardware and software resources on remote machines to be shared with the local system.

8. Command Interpreter
• To interface with the operating System we use command-line interface or command interpreter
that allows users to directly enter commands that are to be performed by the operating system.
• The main function of the command interpreter is to get and execute the user-specified
commands. Many of the commands given at this level manipulate files: create, delete, list, print,
copy, execute, and so on. Eg: MS-DOS and UNIX shells.
Types of Operating System
• Single-user Operating System: This OS provides the environment for a single user, i.e., only one user can interact with the system at a time. Eg: MS-DOS, MS Windows XP, ME, 2000, etc.
• Multi-user Operating System: This OS provides the environment for multiple users, i.e., many users can interact with the system at a time. These users are remotely connected to the system, taking advantage of the shared resources of the master system through networking. Eg: UNIX, LINUX.
Serial Processing Operating System
• Early computers, from the late 1940s to the mid-1950s.
• The programmer interacted directly with the computer hardware.
• These machines are called bare machines, as they have no OS.
• Every computer system was programmed in its own machine language.
• Used punch cards, paper tapes, and language translators.
• In a typical sequence, the editor is first called to create the source code of a user program, then the translator is called to convert the source code into object code, and finally the loader is called to load the executable program into main memory for execution.
• If syntax errors are detected, the whole process must be restarted from the beginning.
Batch Processing Operating System
• Batch processing is executing a series of non-interactive jobs all at one time.
• Usually, jobs with similar requests are grouped together during working hours and then executed during the evening or whenever the computer is idle.
• Batching similar jobs brought higher utilization of system resources.
Interactive Operating System
• Real-Time Interaction: Designed to allow users to interact with the system instantly and continuously.
• Immediate Feedback: Provides immediate responses to user inputs, enhancing the system’s responsiveness.
• Dynamic Experience: Enables a more dynamic and interactive computing experience than batch processing systems.
• Contrast with Batch Processing: Unlike batch processing systems, which handle jobs in batches without real-time user interaction, interactive operating systems support ongoing user engagement.
Multiprogramming
• Multiprogramming is a technique to execute a number of programs simultaneously on a single processor.
• In multiprogramming, a number of processes reside in main memory at a time.
• The OS picks and begins to execute one of the jobs in main memory.
• If any I/O wait occurs in a process, the CPU switches from that job to another job.
• Hence the CPU is not idle at any time.
• [Figure: layout of a multiprogramming system. Main memory holds the OS plus five jobs (Job 1 to Job 5), and the CPU executes them one by one.]
• Advantages:
• Efficient memory utilization.
• Throughput increases. (Throughput is a measure of how many units of information a system can process in a given amount of time.)
• CPU is never idle, so performance increases.
Multiprocessor System: This system has more than one processor sharing a common bus, clock, peripheral devices, and sometimes memory.
These systems have the following advantages:
• Increased throughput: By increasing the number of processors, we get more work done in less time.
• Economy of scale: Multiprocessor systems are cost effective, as several processors share the same system resources (memory, peripherals, etc.).
• Increased reliability: Each processor is allotted a different job, so the failure of one processor will not halt the system; it will only slow down performance. For example, if we have ten processors and one fails, each of the remaining nine processors shares the work of the failed processor. The system will thus run about 10% slower rather than failing altogether.
This system can be categorized into:
• i) SMP (Symmetric Multiprocessing): Each processor runs an identical copy of the operating system, and the processors communicate with each other whenever needed. E.g., Windows NT, Solaris, Unix/Linux.
• ii) ASMP (Asymmetric Multiprocessing): Each processor runs a specific task. There is a master processor which controls the whole system, and the other processors look to the master for instructions. E.g., SunOS version 4.
Time-Sharing Systems (Multitasking)
• Time sharing, or multitasking, is a logical extension of multiprogramming.
• Multiple jobs are executed by switching the CPU between them.
• Here the CPU time is shared among different processes, so these are called “time-sharing systems”.
• A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
• A time slice is defined by the OS for sharing CPU time between processes.
• Examples: Multics, Unix, etc.
Real-Time Operating System
• A real-time system has well-defined, fixed time constraints; processing must be done within those constraints or the system will fail.
• Systems that control scientific experiments, medical systems, industrial control systems, and certain display systems are real-time systems.
• They are also used in automobile engine fuel systems, home appliance controllers, and weapon systems.
• There are two types of real-time systems:
• i) Hard real-time system: This system guarantees that critical tasks are completed on time. For this, all delays in the system must be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made to it.
• ii) Soft real-time system: This is a less restrictive type of system, defined as not hard real-time; it simply provides that a critical real-time task receives priority over other tasks and retains that priority until it completes.
Operating-System Services
1) User interface: Almost all operating systems have a user interface (UI). This interface can take several forms:
• command-line interface (CLI), which uses text commands;
• batch interface, in which commands and directives to control those commands are entered into files;
• graphical user interface (GUI), in which the interface is a window system with a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to enter text.
2) Program execution: The system must be able to load a program into memory and run that program. The program must be able to end its execution.

3) I/O operations: A running program may require I/O operations (such as recording to a CD or DVD drive or blanking a CRT screen). For efficiency and protection, users usually cannot control I/O devices directly. Therefore the operating system must provide a means to do I/O.

4) File-system manipulation: Programs need to read and write files. They also need to create and delete them. Finally, some programs include permissions management to allow or deny access to files or directories based on file ownership.
5) Communications. There are many circumstances in which one process needs
to exchange information with another process.
Such communication may occur between processes that are executing on the
same computer or between processes that are executing on different
computer systems tied together by a computer network.

6)Error detection. Errors may occur in the CPU and memory hardware (such as a
memory error or a power failure), in I/O devices (a network failure, or lack of
paper in the printer), and in the user program (such as an arithmetic overflow,
an attempt to access an illegal memory location, or too-great use of CPU time).
For each type of error, the operating system should take the appropriate
action to ensure correct and consistent computing.
7) Resource allocation: When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. Many different types of resources are managed by the operating system through various methods such as CPU scheduling.

8) Accounting: The OS keeps track of which users use how much and what kinds of computer resources. This record keeping may be used for billing or for accumulating usage statistics.

9) Protection and security: When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself.
Protection involves ensuring that all access to system resources is controlled.
Security of the system from outsiders is also important and is enforced by means such as passwords to gain access to system resources.
Components of OS

1. User Space
2. Kernel
User Space
• No direct hardware access.
• Provides a convenient environment for user applications.
Kernel
• Heart of the OS.
• Interacts with the hardware.
• The very first part of the OS to load on start-up.
Functions of Kernel/OS
• Process management
• Memory management
• File management
• I/O management
Process management

• Scheduling processes and threads on the CPUs.


• Creating & deleting both user and system process.
• Suspending and resuming processes
• Providing mechanisms for process synchronization or process
communication.
Memory management

• Allocating and deallocating memory space as per need.


• Keeping track of which parts of memory are currently being used and
by which process.
File management

• Creating and deleting files.


• Creating and deleting directories to organize files.
• Mapping files into secondary storage.
• Backup support onto a stable storage media.
I/O management

• To manage and control I/O operations and I/O devices.
• Buffering (data copy between two devices), caching, and spooling.
• Spooling: the act of sending data from one location to another a piece at a time, where the other end collects it to do something with it.
• Buffering: waiting for enough of that data to arrive to do something with it.
• Caching: saving what has been received to use again later, saving the work of fetching it all over again.
Types of Kernels:
• Monolithic kernel
• Micro Kernel
• Hybrid Kernel
Monolithic Kernel
• All OS functions are in the kernel itself.
• Bulky in size.
• Memory required to run it is high.
• Less reliable: if one module crashes, the whole kernel goes down.
• High performance, as communication is fast (fewer user-mode/kernel-mode switching overheads).
• Eg. Linux, Unix, MS-DOS.
Micro Kernel
• Only the major functions are in the kernel:
  i. Memory mgmt.
  ii. Process mgmt.
• File mgmt. and I/O mgmt. are in user space.
• Smaller in size.
• More reliable.
• More stable.
• Performance is slower.
• Overhead of switching between user mode and kernel mode.
• Eg. L4 Linux, Symbian OS, MINIX, etc.
Hybrid Kernel
• Advantages of both worlds (file mgmt. in user space and the rest in kernel space).
• Combined approach.
• Speed and design of a monolithic kernel.
• Modularity and stability of a microkernel.
• Eg. MacOS, Windows NT/7/10.
• IPC still happens, but with lower overheads.
Interrupts
• An interrupt is a signal from a device attached to a computer, or from a program within the computer, that causes the main program that operates the computer to stop and figure out what to do next.
• The occurrence of an event is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually through the system bus. Software may trigger an interrupt by executing a special operation called a system call (or monitor call).
Interrupts processing
• The basic interrupt mechanism works as follows.
• The CPU hardware has a wire called the interrupt-request line that the CPU senses
after executing every instruction.
• When the CPU detects that a controller has asserted a signal on the interrupt-request line, the CPU performs a state save and jumps to the interrupt-handler routine at a fixed address in memory.
• The interrupt handler determines the cause of the interrupt, performs the necessary
processing, performs a state restore, and executes a return from interrupt
instruction to return the CPU to the execution state prior to the interrupt.
• We say that the device controller raises an interrupt by asserting a signal on the
interrupt request line, the CPU catches the interrupt and dispatches it to the
interrupt handler, and the handler clears the interrupt by servicing the device.
Process state transition diagram
Process Management
• Operating systems provide fundamental services to processes
including:
– Creating processes
– Destroying processes
– Suspending processes
– Resuming processes
– Changing a process’s priority
– Blocking processes
– Waking up processes
– Dispatching processes
– Interprocess communication (IPC)
Process States and State Transitions
• Process states
– The act of assigning a processor to the first process on the ready list is called
dispatching
– The OS may use an interval timer to allow a process to run for a specific time
interval or quantum
– Cooperative multitasking lets each process run to completion
• State Transitions
– At this point, there are four possible state transitions
• When a process is dispatched, it transitions from ready to running
• When the quantum expires, it transitions from running to ready
• When a process blocks, it transitions from running to blocked
• When the event occurs, it transitions from blocked to ready
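The four transitions above can be sketched as a tiny state machine. This is purely illustrative: the class and method names are ours, and a real OS tracks these states in kernel data structures, not objects.

```python
# Minimal sketch of the ready/running/blocked state transitions.
class Process:
    def __init__(self, name):
        self.name = name
        self.state = "ready"           # newly admitted processes start ready

    def dispatch(self):                # ready -> running
        assert self.state == "ready"
        self.state = "running"

    def timeout(self):                 # running -> ready (quantum expires)
        assert self.state == "running"
        self.state = "ready"

    def block(self):                   # running -> blocked (waits for an event)
        assert self.state == "running"
        self.state = "blocked"

    def wakeup(self):                  # blocked -> ready (the event occurs)
        assert self.state == "blocked"
        self.state = "ready"

p = Process("P1")
p.dispatch()
p.block()
p.wakeup()
print(p.state)  # ready
```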
Process Control Blocks (PCBs)
• Each process is represented in the operating system by a process control block (PCB), also called a task control block. A PCB contains many pieces of information associated with a specific process, including these:
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this process.
• CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
Block diagram of PCB
• CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
• In brief, the PCB simply serves as the repository for any information that may vary from process to process.
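The PCB fields listed above can be pictured as a simple record. The sketch below is illustrative only: the field names mirror the list above but are our own choices, and a real kernel stores this structure in protected kernel memory.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a PCB as a record of per-process bookkeeping.
@dataclass
class PCB:
    pid: int
    state: str = "new"                 # new, ready, running, waiting, halted
    program_counter: int = 0           # address of the next instruction
    cpu_registers: dict = field(default_factory=dict)
    priority: int = 0                  # CPU-scheduling information
    base_register: int = 0             # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0         # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, state="ready", priority=5)
print(pcb.state)  # ready
```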

• i) Long-term scheduler: In a batch system, more processes are submitted than can be executed immediately. These processes are spooled to a mass-storage device (typically a disk), where they are kept for later execution. The long-term scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution. The long-term scheduler executes much less frequently; minutes may separate the creation of one new process and the next. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory). It is important that the long-term scheduler make a careful selection.
• ii) Short-term scheduler: The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them. Because of the short time between executions, the short-term scheduler must be fast.
• iii) Medium-term scheduler: The key idea behind a medium-term scheduler is that it can sometimes be advantageous to remove processes from memory and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler. Swapping may be necessary to improve the process mix or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.
Scheduling Algorithms
• A Process Scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms.

• First-Come, First-Served (FCFS) Scheduling


• Shortest-Job-Next (SJN) Scheduling / Shortest request next
• Highest Response Ratio Next
• Round Robin(RR) Scheduling
• Least Complete Next
• Shortest time to go
• Priority Scheduling
• These algorithms are either non-preemptive or preemptive.
• Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time.
• Preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which the process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turnaround Time: Difference between completion time and arrival time.
  Turnaround Time = Completion Time – Arrival Time
• Waiting Time (W.T.): Difference between turnaround time and burst time.
  Waiting Time = Turnaround Time – Burst Time
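The two formulas above, applied to a made-up process (the arrival, burst, and completion values here are purely illustrative):

```python
# Hypothetical process: arrives at 2, needs 6 units of CPU, completes at 15.
arrival_time, burst_time, completion_time = 2, 6, 15

turnaround_time = completion_time - arrival_time   # 15 - 2 = 13
waiting_time = turnaround_time - burst_time        # 13 - 6 = 7

print(turnaround_time, waiting_time)  # 13 7
```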
FCFS Non Preemptive Example
Average Waiting Time

Waiting Time = Starting Time - Arrival Time


Waiting time of
P1 = 0
P2 = 5 - 0 = 5 ms
P3 = 29 - 0 = 29 ms
P4 = 45 - 0 = 45 ms
P5 = 55 - 0 = 55 ms
Average Waiting Time = Waiting Time of all Processes / Total Number of
Process
Therefore, average waiting time = (0 + 5 + 29 + 45 + 55) / 5 = 26.8 ms
Average Turnaround Time
• Turnaround Time = Waiting time in the ready queue + executing time +
waiting time in waiting-queue for I/O
Turnaround time of
P1 = 0 + 5 + 0 = 5ms
P2 = 5 + 24 + 0 = 29ms
P3 = 29 + 16 + 0 = 45ms
P4 = 45 + 10 + 0 = 55ms
P5 = 55 + 3 + 0 = 58ms
Total Turnaround Time = (5 + 29 + 45 + 55 + 58)ms = 192ms
Average Turnaround Time = (Total Turnaround Time / Total Number of
Process) = (192 / 5)ms = 38.4ms
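The FCFS averages above can be reproduced in a few lines. The burst times (5, 24, 16, 10, 3) are read off the turnaround arithmetic in the example, and every process arrives at time 0:

```python
# FCFS: processes run to completion in arrival order.
bursts = [5, 24, 16, 10, 3]   # P1..P5, all arriving at time 0

start, waiting, turnaround = 0, [], []
for b in bursts:
    waiting.append(start)      # waiting = starting time - arrival (arrival = 0)
    start += b
    turnaround.append(start)   # turnaround = completion time - arrival

print(sum(waiting) / len(waiting))        # 26.8
print(sum(turnaround) / len(turnaround))  # 38.4
```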
Example of Non Preemptive SJF
• Average Waiting Time : arrival time is common to all processes(i.e.,
zero).
Waiting Time for
P1 = 3 - 0 = 3ms
P2 = 34 - 0 = 34ms
P3 = 18 - 0 = 18ms
P4 = 8 - 0 = 8ms
P5 = 0ms
Now, Average Waiting Time = (3 + 34 + 18 + 8 + 0) / 5 = 12.6ms
• Average Turnaround Time
• According to the SJF Gantt chart and the turnaround time formulae,
Turnaround Time of
P1 = 3 + 5 = 8ms
P2 = 34 + 24 = 58ms
P3 = 18 + 16 = 34ms
P4 = 8 + 10 = 18ms
P5 = 0 + 3 = 3ms
Therefore, Average Turnaround Time = (8 + 58 + 34 + 18 + 3) / 5 =
24.2ms
Example of Preemptive SJF/SRTF
• Average Waiting Time
• First of all, we have to find the waiting time for each process.
Waiting Time of process
P1 = 0ms
P2 = (3 - 2) + (10 - 4) = 7ms
P3 = (4 - 4) = 0ms
P4 = (15 - 6) = 9ms
P5 = (8 - 8) = 0ms
Therefore, Average Waiting Time = (0 + 7 + 0 + 9 + 0) / 5 = 3.2ms
• Average Turnaround Time
• First of all, we have to find the turnaround time of each process.
Turnaround Time of process
P1 = (0 + 3) = 3ms
P2 = (7 + 6) = 13ms
P3 = (0 + 4) = 4ms
P4 = (9 + 5) = 14ms
P5 = (0 + 2) = 2ms
Therefore, Average Turnaround Time = (3 + 13 + 4 + 14 + 2) / 5 =
7.2ms
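The SRTF figures above can be reproduced by a unit-time simulator. The arrival/burst pairs below are inferred from the waiting-time arithmetic in the example (the original process table is not reproduced in this text), so treat them as a reconstruction:

```python
# SRTF (preemptive SJF): at every time unit, run the ready process with
# the shortest remaining time; ties broken by earlier arrival.
procs = {"P1": (0, 3), "P2": (2, 6), "P3": (4, 4), "P4": (6, 5), "P5": (8, 2)}

remaining = {p: b for p, (a, b) in procs.items()}
finish, t = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:
        t += 1
        continue
    p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
    remaining[p] -= 1
    t += 1
    if remaining[p] == 0:
        finish[p] = t
        del remaining[p]

waiting = {p: finish[p] - procs[p][0] - procs[p][1] for p in finish}
print(sum(waiting.values()) / 5)  # 3.2
```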
Priority Scheduling Example
• Average Waiting Time
• First of all, we have to find out the waiting time of each process.
Waiting Time of process
P1 = 3ms
P2 = 13ms
P3 = 25ms
P4 = 0ms
P5 = 9ms
Therefore, Average Waiting Time = (3 + 13 + 25 + 0 + 9) / 5 = 10ms
• Average Turnaround Time
• First finding Turnaround Time of each process.
Turnaround Time of process
P1 = (3 + 6) = 9ms
P2 = (13 + 12) = 25ms
P3 = (25 + 1) = 26ms
P4 = (0 + 3) = 3ms
P5 = (9 + 4) = 13ms
Therefore, Average Turnaround Time = (9 + 25 + 26 + 3 + 13) / 5 =
15.2ms
Round Robin (RR) Example
Time quantum is 5
• Average Waiting Time
• For finding Average Waiting Time, we have to find out the waiting
time of each process.

Waiting Time of
P1 = 0 + (15 - 5) + (24 - 20) = 14ms
P2 = 5 + (20 - 10) = 15ms
P3 = 10 + (21 - 15) = 16ms
Therefore, Average Waiting Time = (14 + 15 + 16) / 3 = 15ms
• Average Turnaround Time

• Same concept for finding the Turnaround Time.


Turnaround Time of

P1 = 14 + 30 = 44ms
P2 = 15 + 6 = 21ms
P3 = 16 + 8 = 24ms

Therefore, Average Turnaround Time = (44 + 21 + 24) / 3 = 29.66ms
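The round-robin figures above can likewise be simulated with a FIFO queue. The burst times (30, 6, 8) are inferred from the turnaround arithmetic (the original process table is not reproduced in this text); all three processes arrive at time 0 and the quantum is 5:

```python
from collections import deque

# Round robin: each process runs at most one quantum, then rejoins the tail.
bursts = {"P1": 30, "P2": 6, "P3": 8}
quantum = 5

queue = deque(bursts)
remaining = dict(bursts)
t, finish = 0, {}
while queue:
    p = queue.popleft()
    run = min(quantum, remaining[p])
    t += run
    remaining[p] -= run
    if remaining[p] == 0:
        finish[p] = t              # arrival = 0, so turnaround = finish time
    else:
        queue.append(p)

waiting = {p: finish[p] - bursts[p] for p in finish}
print(round(sum(finish.values()) / 3, 2))  # 29.67
```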
Process Synchronization
Ø It means sharing system resources among processes in such a way that concurrent access to shared data is handled safely, minimizing the chance of inconsistency (inconsistent data).

Ø Maintaining data consistency demands mechanisms that ensure synchronized execution among cooperating/sharing/concurrently executing processes.

Ø Process synchronization was introduced to handle problems that arise when multiple processes execute concurrently.

Ø Some of these problems are discussed below.


Mutex Locks (mutual exclusion object)
Ø A strict software approach called mutex locks was introduced. In this approach, in the entry section of code, a LOCK is acquired over the critical resources modified and used inside the critical section, and in the exit section that LOCK is released.

Ø As the resource is locked while a process executes its critical section, no other process can access it. The acquire() function acquires the lock and the release() function releases the lock.
Synchronization Hardware
Ø The critical-section problem could be solved easily in a single-processor environment if we could disallow interrupts while a shared variable or resource is being modified.

Ø We could then be sure that the current sequence of instructions would be allowed to execute in order without preemption. Unfortunately, this solution is not feasible in a multiprocessor environment.

Ø Disabling interrupts in a multiprocessor environment can be time consuming, as the message must be passed to all the processors.

Ø This message-transmission lag delays entry of threads into their critical sections, and system efficiency decreases.
acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

release() {
    available = true;
}

Definition of the acquire() and release() functions
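Note that the busy-wait version above only works if the test of `available` and the assignment happen atomically; in practice a ready-made lock object is used. A sketch of the same acquire/release pattern with Python's `threading.Lock`, guarding a shared counter (the counter and worker names are ours, for illustration):

```python
import threading

# Two threads increment a shared counter; the lock makes each
# read-modify-write a critical section, so no updates are lost.
lock = threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        lock.acquire()      # entry section: take the lock
        counter += 1        # critical section
        lock.release()      # exit section: release the lock

threads = [threading.Thread(target=worker) for _ in range(2)]
for th in threads:
    th.start()
for th in threads:
    th.join()

print(counter)  # 200000
```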


Wait()
Ø Also called P (from the Dutch “proberen”, meaning to test).

Ø It is called when a process wants access to a resource.

Ø This is equivalent to an arriving customer trying to get an open table.

Ø If there is an open table, i.e., the semaphore is greater than zero, then the customer can take that resource and sit at the table.

Ø It decrements the value of its argument S as soon as the result would be non-negative.
Signal()
Ø Also called V (from the Dutch “verhogen”, meaning to increment).

Ø It is called when a process is done using a resource, or when the guest is finished with his meal.

Ø The following is an implementation of this counting semaphore (where the value can be greater than 1).

Ø It increments the value of its argument S as an indivisible operation.
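The restaurant analogy above can be sketched with a counting semaphore: here three "tables" are shared by five "customers". The capacity and names are made up for illustration; wait() corresponds to `acquire()` (P) and signal() to `release()` (V).

```python
import threading

# Counting semaphore initialized to 3: at most three customers
# hold a table at once; the other two block until one is freed.
tables = threading.Semaphore(3)
seated = []

def customer(i):
    tables.acquire()     # wait / P: take a table (blocks if none is free)
    seated.append(i)     # eat (critical resource in use)
    tables.release()     # signal / V: free the table for the next customer

threads = [threading.Thread(target=customer, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(seated))  # 5
```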


• The readers-writers problem relates to an object such as a file that is shared
between multiple processes.
• Some of these processes are readers i.e. they only want to read the data
from the object and some of the processes are writers i.e. they want to write
into the object.
• The readers-writers problem is used to manage synchronization so that
there are no problems with the object data.
• For example - If two readers access the object at the same time there is no
problem.
• However if two writers or a reader and writer access the object at the same
time, there may be problems.
• To solve this situation, a writer should get exclusive access to an
object i.e. when a writer is accessing the object, no reader or writer
may access it.
• However, multiple readers can access the object at the same time.
• This can be implemented using semaphores.
• The codes for the reader and writer process in the reader-writer
problem are given as follows −
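The original slide's code is not reproduced in this text, so here is a sketch of the classic two-semaphore ("first readers-writers") solution: `rw_mutex` gives a writer exclusive access, and `mutex` protects the shared `read_count`. The thread setup and `log` list are ours, for illustration.

```python
import threading

rw_mutex = threading.Semaphore(1)   # exclusive access for writers
mutex = threading.Semaphore(1)      # protects read_count
read_count = 0
log = []

def reader(i):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:             # first reader locks out writers
        rw_mutex.acquire()
    mutex.release()

    log.append(f"reader {i}")       # reading (shared access is safe)

    mutex.acquire()
    read_count -= 1
    if read_count == 0:             # last reader lets writers in again
        rw_mutex.release()
    mutex.release()

def writer(i):
    rw_mutex.acquire()              # exclusive access
    log.append(f"writer {i}")       # writing
    rw_mutex.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(log))  # 4
```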
The Dining Philosophers problem is a classic synchronization problem (E. W. Dijkstra), introducing semaphores as a conceptual synchronization mechanism.
Statement:
Ø There is a dining room containing a circular table with five chairs.

Ø At each chair is a plate, and between each pair of plates is a single chopstick. In the middle of the table is a bowl of spaghetti.

Ø Near the room are five philosophers who spend most of their time thinking.

Ø A philosopher needs both their right and left chopsticks to eat.

Ø A hungry philosopher may eat only if both chopsticks are available.

Ø Otherwise a philosopher puts down their chopstick and begins thinking again.

Ø A solution of the Dining Philosophers problem is to use a semaphore to represent each chopstick.

Ø A chopstick can be picked up by executing a wait operation on its semaphore and released by executing a signal on its semaphore.
Thus, each philosopher is represented by the following pseudocode:

process P[i]
while true do
{
    THINK;

    WAIT(CHOPSTICK[i]);
    WAIT(CHOPSTICK[(i+1) mod 5]);    // pick up

    EAT;

    SIGNAL(CHOPSTICK[i]);
    SIGNAL(CHOPSTICK[(i+1) mod 5]);  // put down
}
Ø A philosopher may THINK indefinitely.

Ø Every philosopher who EATs will eventually finish.

Ø Philosophers may PICKUP and PUTDOWN their chopsticks in either order, or non-deterministically, but these are atomic actions.

Ø Two philosophers cannot use a single CHOPSTICK at the same time.

Difficulty with the solution

Ø If all the philosophers pick up their left chopsticks simultaneously, then none of them can eat and deadlock occurs.

Some of the ways to avoid deadlock are as follows:

Ø There should be at most four philosophers at the table.

Ø An even-numbered philosopher should pick up the right chopstick and then the left chopstick, while an odd-numbered philosopher should pick up the left chopstick and then the right chopstick.

Ø A philosopher should only be allowed to pick up their chopsticks if both are available at the same time.
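The second avoidance rule above (even philosophers pick up the right chopstick first, odd philosophers the left) can be sketched with ordinary locks standing in for the chopstick semaphores. The asymmetry breaks the circular wait, so all five threads finish. The `meals` list is ours, for illustration.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]   # one lock per chopstick
meals = []

def philosopher(i):
    left, right = chopsticks[i], chopsticks[(i + 1) % N]
    # Even philosophers grab the right chopstick first, odd the left:
    # this asymmetric ordering prevents the circular wait (deadlock).
    first, second = (right, left) if i % 2 == 0 else (left, right)
    first.acquire()
    second.acquire()
    meals.append(i)        # EAT
    second.release()
    first.release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(meals))  # [0, 1, 2, 3, 4]
```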
Deadlock
q A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
q A process requests resources; if the resources are not available at that time, the process enters a waiting state.

q Sometimes a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes.

q This situation is called a deadlock.
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously.
■ Mutual exclusion: only one process at a time can use a resource.

■ Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.

■ No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.

■ Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Resource-allocation graph notation:
• Process Pi
• Resource type Rj with 4 instances
• Request edge Pi → Rj: Pi requests an instance of Rj
• Assignment edge Rj → Pi: Pi is holding an instance of Rj

Example of a Resource-Allocation Graph — will there be a deadlock here? [figure omitted]
Resource-Allocation Graph with a cycle but no deadlock [figure omitted]
Basic Facts
■ If the graph contains no cycle, then there is no deadlock.

■ If the graph contains a cycle:

● if there is only one instance per resource type, then there is a deadlock.
● if there are several instances per resource type, there is a possibility of deadlock.
Methods for Handling Deadlocks
■ Ensure that the system will never enter a deadlock state.

■ Allow the system to enter a deadlock state and then recover.

■ Ignore the problem and pretend that deadlocks never occur in the
system; used by most operating systems, including UNIX.
The Methods (continued)

■ Deadlock Prevention – do not let the deadlock occur.

■ Deadlock Avoidance – avoid the deadlock.

■ Deadlock Detection – let the deadlock occur in the system and then attempt to recover the system from the deadlock.
Deadlock prevention

For a deadlock to occur, each of the four necessary conditions must hold. By ensuring
that at least one of these conditions cannot hold, we can prevent the occurrence of the
deadlock.

■Mutual Exclusion – not required for sharable resources; must hold for non-sharable
resources.

■Hold and Wait – must guarantee that whenever a process requests a resource, it
does not hold any other resources.
● Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has
none.
● Low resource utilization; starvation possible
Deadlock Prevention (Cont.)
■ No Preemption –
● If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held
are released.
● Preempted resources are added to the list of resources for which the process
is waiting.
● Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting.

■ Circular Wait – impose a total ordering of all resource types, and require that
each process requests resources in an increasing order of enumeration.
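The circular-wait rule can be illustrated in Python: assign every lock a rank and make each thread acquire locks in increasing rank, regardless of how it names them. This is a sketch under assumed names — `rank`, `acquire_in_order`, and the lock/thread names are illustrative, not part of the slides.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
# Total ordering over all resources; every thread must respect it.
rank = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    """Acquire the given locks in ascending rank, preventing circular wait."""
    for lk in sorted(locks, key=lambda l: rank[id(l)]):
        lk.acquire()

finished = []

def worker(name, first, second):
    # The two workers name the locks in opposite orders, but
    # acquire_in_order normalizes the acquisition order.
    acquire_in_order(first, second)
    finished.append(name)
    second.release()
    first.release()

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(finished))  # ['t1', 't2'] — both completed without deadlock
```

Without the normalization, `worker("t1", lock_a, lock_b)` and `worker("t2", lock_b, lock_a)` could each grab their first lock and wait forever for the other's.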
Deadlock Avoidance
Requires that the system has some additional a priori information available.

■ Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.

■ The deadlock-avoidance algorithm dynamically examines the resource-allocation state


to ensure that there can never be a circular-wait condition.

■ Resource-allocation state is defined by the number of available and allocated


resources, and the maximum demands of the processes.
Safe State
§A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock. More formally, a system is
in a safe state only if there exists a safe sequence.

§A sequence of processes <P1, P2, ... , Pn> is a safe sequence for the current
allocation state if, for each Pi, the resource requests that Pi can still make can
be satisfied by the currently available resources plus the resources held by all
Pj, with j < i.
Safe, Unsafe, and Deadlock States
•If a system is in safe state, there is no deadlock.

•If the system is deadlocked, it is in an unsafe state.

•If a system is in unsafe state, there is a possibility for a deadlock.

•Avoidance: making sure the system will not enter an unsafe state.
Resource-Allocation Graph Algorithm
■ Claim edge Pi --> Rj indicates that process Pi may request resource Rj; represented by a dashed line.

■ A claim edge converts to a request edge when the process requests the resource.

■ When the resource is released by the process, the assignment edge reconverts to a claim edge.

■ Resources must be claimed a priori in the system.

■ A claim edge denotes that a request may be made in future.


Based on the claim edges we can see whether there is a chance of a cycle, and then grant a request only if the system will remain in a safe state.
Resource-Allocation Graph for Deadlock Avoidance [figure omitted]
Unsafe State in a Resource-Allocation Graph [figure omitted]
•Suppose that process Pi requests a resource Rj

• The request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource-allocation graph.

§ The resource-allocation graph algorithm is not applicable when there are multiple instances of a resource type.
Example formal algorithms

■ Banker’s Algorithm

■ Resource-Request Algorithm

■ Safety Algorithm
Banker’s Algorithm
■ Multiple instances.

■ Each process must claim its maximum resource use a priori.

■ When a process requests a resource it may have to wait.

■ When a process gets all its resources it must return them in a finite
amount of time.
Data Structures
Let n = number of processes, and m = number of resources types

• Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.

• Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.

• Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.

• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

Need[i,j] = Max[i,j] – Allocation[i,j]


Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n–1

2. Find an index i such that both:
Finish[i] == false AND Needi ≤ Work
If no such i exists, go to step 4.

3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state; otherwise it is unsafe.

The algorithm requires O(m × n²) operations to decide whether the state is safe.
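The four steps of the safety algorithm translate almost line for line into Python. This is an illustrative sketch: `is_safe` is an assumed name, and the data below is the snapshot from the Banker's example in these slides. The greedy lowest-index scan may report a different, equally valid, safe sequence than the one on the slide.

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return (is the state safe?, safe sequence)."""
    n, m = len(allocation), len(available)
    work = list(available)              # Step 1: Work = Available
    finish = [False] * n                # Step 1: Finish[i] = false
    sequence = []
    progressed = True
    while progressed:                   # Step 2: look for a runnable process
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # Step 3: Pi finishes, releases resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence        # Step 4: safe iff every process can finish

# Snapshot from the Banker's example in these slides (resource types A, B, C).
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True [1, 3, 4, 0, 2] — another valid safe sequence
```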
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.

2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.

3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;

● If safe ⇒ the resources are allocated to Pi.
● If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.
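Steps 1–3 plus the safety check can be combined into one function. A sketch under assumed names (`try_request`, `is_safe`); the data reproduces the P1-requests-(1,0,2) case from the Banker's example in these slides.

```python
def is_safe(available, allocation, need):
    """Safety check: can every process still run to completion in some order?"""
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = changed = True
    return all(finish)

def try_request(i, request, available, allocation, need):
    """Resource-request algorithm for Pi; returns True if the request is granted."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(request[j] > available[j] for j in range(m)):
        return False                                             # step 2: Pi waits
    for j in range(m):                                           # step 3: pretend
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, allocation, need):
        return True                  # safe: the allocation stands
    for j in range(m):               # unsafe: restore the old state, Pi waits
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

# P1 requests (1,0,2) in the snapshot from the Banker's example.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
granted = try_request(1, [1, 0, 2], available, allocation, need)
print(granted, available)  # True [2, 3, 0]
```

The resulting state (Allocation1 = 302, Need1 = 020, Available = 230) matches the table in the example that follows.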
Example of Banker’s Algorithm
■ 5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7 instances).
■ Snapshot at time T0:

      Allocation   Max     Available
      A B C        A B C   A B C
P0    0 1 0        7 5 3   3 3 2
P1    2 0 0        3 2 2
P2    3 0 2        9 0 2
P3    2 1 1        2 2 2
P4    0 0 2        4 3 3
Example (Cont.)
■ The content of the matrix Need is defined to be Max – Allocation.

      Need
      A B C
P0    7 4 3
P1    1 2 2
P2    6 0 0
P3    0 1 1
P4    4 3 1

The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety
criteria
Example (Cont.): P1 Requests (1,0,2)

■ Check that Request ≤ Available: (1,0,2) ≤ (3,3,2) ⇒ true.

      Allocation   Need    Available
      A B C        A B C   A B C
P0    0 1 0        7 4 3   2 3 0
P1    3 0 2        0 2 0
P2    3 0 2        6 0 0
P3    2 1 1        0 1 1
P4    0 0 2        4 3 1
■ Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement.

■ Can request for (3,3,0) by P4 be granted?

■ Can request for (0,2,0) by P0 be granted?


Deadlock Detection

■ Allow system to enter deadlock state

■ Detection algorithm

■ Recovery scheme
Single Instance of Each Resource Type

■ Maintain wait-for graph


● Nodes are processes.
● Pi -> Pj if Pi is waiting for Pj.

■ Periodically invoke an algorithm that searches for a cycle in the graph.

■ An algorithm to detect a cycle in a graph requires O(n²) operations, where n is the number of vertices in the graph.
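The periodic cycle search is a standard depth-first search over the wait-for graph. A sketch, not from the slides: `has_cycle` and the example graphs are illustrative, with the graph given as an adjacency mapping.

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: [processes it waits for]}."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on the DFS stack / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:            # back edge => cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 -> P2 -> P3 -> P1 is a cycle: with single-instance resources, a deadlock.
print(has_cycle({1: [2], 2: [3], 3: [1]}))  # True
print(has_cycle({1: [2], 2: [3], 3: []}))   # False
```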
Resource-Allocation Graph and Wait-for Graph

A resource-allocation graph and its corresponding wait-for graph [figure omitted]


Several Instances of a Resource Type

■ Available: A vector of length m indicates the number of available resources of


each type.

■ Allocation: An n x m matrix defines the number of resources of each type


currently allocated to each process.

■ Request: An n x m matrix indicates the current request of each process. If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
Detection Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n: if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.

2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work

If no such i exists, go to step 4.


Detection Algorithm (Cont.)

3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.

4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked.

The algorithm requires O(m × n²) operations to detect whether the system is in a deadlocked state.
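The detection algorithm above can be sketched in Python; `detect_deadlock` is an assumed name, and the data is the snapshot from the detection example that follows.

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm: return the list of deadlocked processes."""
    n, m = len(allocation), len(available)
    work = list(available)                                           # step 1(a)
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]  # step 1(b)
    changed = True
    while changed:                                                   # steps 2-3
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = changed = True
    return [i for i in range(n) if not finish[i]]                    # step 4

# Snapshot from the detection example: resource types A, B, C; Available = (0,0,0).
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, request))  # [] — no deadlock

# P2 now requests one more instance of C: P1 through P4 become deadlocked.
request[2] = [0, 0, 1]
print(detect_deadlock([0, 0, 0], allocation, request))  # [1, 2, 3, 4]
```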
Example of Detection Algorithm

■ Five processes P0 through P4; three resource types: A (7 instances), B (2 instances), and C (6 instances).
■ Snapshot at time T0:

      Allocation   Request   Available
      A B C        A B C     A B C
P0    0 1 0        0 0 0     0 0 0
P1    2 0 0        2 0 2
P2    3 0 3        0 0 0
P3    2 1 1        1 0 0
P4    0 0 2        0 0 2

■ Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
Example (Cont.)

■ P2 requests an additional instance of type C.

      Request
      A B C
P0    0 0 0
P1    2 0 2
P2    0 0 1
P3    1 0 0
P4    0 0 2

■ State of the system?
● We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests.
● A deadlock exists, consisting of processes P1, P2, P3, and P4.
Recovery from Deadlock: Process Termination
■ Abort all deadlocked processes.

■ Abort one process at a time until the deadlock cycle is eliminated.

■ In which order should we choose to abort?


● Priority of the process.
● How long process has computed, and how much longer to completion.
● Resources the process has used.
● Resources process needs to complete.
● How many processes will need to be terminated.
● Is process interactive or batch?
Recovery from Deadlock: Resource Preemption

■ Selecting a victim – minimize cost.

■ Rollback – return to some safe state, restart process for that state.

■ Starvation – the same process may always be picked as the victim; include the number of rollbacks in the cost factor.
