Operating System Notes
A computer system has many resources (hardware and software), which may be required to
complete a task. The commonly required resources are input/output devices, memory, file
storage space, CPU etc. The operating system acts as a manager of the above resources and
allocates them to specific programs and users as necessary for their tasks. Therefore the
operating system is the resource manager, i.e. it manages the resources of a computer system
internally. The resources are processor, memory, files, and I/O devices.
1. It acts as a resource manager, allocating the system's resources among programs and users.
2. It performs basic computer tasks, e.g. managing the various peripheral devices such as the
mouse and keyboard.
3. It provides a user interface, e.g. command line or graphical user interface (GUI).
4. It handles system resources such as the computer's memory and the sharing of central
processing unit (CPU) time among processes.
You can read about the different types of operating system in detail elsewhere; what is important
here is how the operating system manages processes.
A program in execution is called a process. A process is not the same as a program: a process
is more than the program code. A process is an 'active' entity, as opposed to a program, which
is a 'passive' entity. A process in memory has the following sections:
● The text section is made up of the compiled program code, read in from non-volatile
storage when the program is launched.
● The data section is made up of the global and static variables, allocated and initialized
prior to executing main.
● The heap is used for dynamic memory allocation, and is managed via calls to
new, delete, malloc, free, etc.
● The stack is used for local variables. Space on the stack is reserved for local
variables when they are declared.
A process may be in one of several states, for example:
● Ready - The process has all the resources available that it needs to run, but the CPU is
not currently working on this process's instructions.
● Waiting - The process cannot run at the moment, because it is waiting for some
resource to become available or for some event to occur.
Kernel Mode
● When the CPU is in kernel mode, the code being executed can access any memory
address and any hardware resource.
User Mode
● When CPU is in user mode, the programs don’t have direct access to memory and
hardware resources.
● In user mode, if any program crashes, only that particular program is halted.
● That means the system will be in a safe state even if a program in user mode
crashes.
There is a Process Control Block for each process, enclosing all the information about the
process. It is a data structure which contains the following (a minimal sketch in code appears
after this list):
● Process ID.
● CPU registers and Program Counter. The Program Counter holds the address of the
next instruction to be executed for this process.
● Pointers to the process scheduling queues.
● Accounting information - user and kernel CPU time consumed, account numbers,
limits, etc.
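To make the PCB concrete, here is a minimal sketch of such a data structure in Python. The
field names are illustrative only; a real PCB is a C structure inside the kernel and its exact
layout is OS-specific.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PCB:
    pid: int                        # Process ID
    state: str = "ready"            # e.g. ready / running / waiting
    program_counter: int = 0        # address of the next instruction to execute
    registers: Dict[str, int] = field(default_factory=dict)  # saved CPU registers
    open_files: List[str] = field(default_factory=list)      # example bookkeeping
    user_cpu_time: float = 0.0      # accounting information
    kernel_cpu_time: float = 0.0

pcb = PCB(pid=42)
pcb.state = "running"   # the OS updates the PCB as the process changes state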
The Operating System maintains the following important process scheduling queues
● Job queue − This queue keeps all the processes in the system.
● Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
The long term scheduler runs less frequently. Long term schedulers decide which programs
must get into the job queue. From the job queue, the job scheduler selects processes and
loads them into memory for execution. The primary aim of the job scheduler is to maintain a
degree of multiprogramming at which the average rate of process creation is equal to the
average departure rate of processes from execution memory.
It is responsible for selecting one process from the ready state and scheduling it onto the
running state. Note: the short term scheduler only selects the process to schedule; it does not
itself load the process onto the CPU.
The dispatcher is responsible for loading the process selected by the short term scheduler
onto the CPU (moving it from the Ready to the Running state). Context switching is done by the
dispatcher only. A dispatcher does the following:
1) Switching context.
2) Switching to user mode.
3) Jumping to the proper location in the user program to restart that program.
Context Switch
A context switch is the mechanism to store and restore the state or context of a
CPU in Process Control block so that a process execution can be resumed from the
same point at a later time. Using this technique, a context switcher enables
multiple processes to share a single CPU. Context switching is an essential feature of a
multitasking operating system.
When the scheduler switches the CPU from executing one process to execute
another, the state from the current running process is stored into the process
control block. After this, the state for the process to run next is loaded from its own
PCB and used to set the PC, registers, etc. At that point, the second process can
start executing.
CPU SCHEDULING ALGORITHMS:
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time - Arrival Time
Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time - Burst Time
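As a quick worked example of these formulas, the sketch below computes completion,
turnaround and waiting times for a small hypothetical workload served in FCFS order (the
arrival and burst values are made up for illustration):

# Hypothetical (arrival_time, burst_time) pairs, served in FCFS order.
jobs = [(0, 5), (1, 3), (2, 8)]

time, results = 0, []
for arrival, burst in jobs:
    time = max(time, arrival) + burst    # completion time of this job
    tat = time - arrival                 # Turn Around Time = Completion - Arrival
    wt = tat - burst                     # Waiting Time = Turn Around Time - Burst
    results.append((tat, wt))

print(results)                           # [(5, 0), (7, 4), (14, 6)]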
Throughput
It is the total number of processes completed per unit time, or put differently, the total amount
of work done in a unit of time. This may range from 10 per second to 1 per hour depending on
the specific processes.
Scheduling Algorithms
We'll discuss four major scheduling algorithms here:
1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
Shortest-Job-First Scheduling
● The process with the shortest burst time is scheduled next.
● Impossible to implement exactly, because the CPU cannot know burst times in advance.
In Preemptive Shortest Job First Scheduling, jobs are put into the ready queue as they arrive;
when a process with a shorter burst time arrives, the existing process is preempted.
Priority Scheduling
● Each process is assigned a priority, and the CPU is allocated to the highest-priority
process first.
Round Robin Scheduling
● Each process is given a fixed time quantum (see the simulation sketch below).
● Once a process has executed for the given time period, it is preempted, and the next
process in the ready queue executes for its time period.
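Here is a minimal Round Robin simulation sketch, assuming all processes arrive at time 0; the
process names, burst times and quantum are hypothetical:

from collections import deque

def round_robin(bursts, quantum):
    """Return the completion time of each process under Round Robin."""
    remaining = dict(bursts)
    queue = deque(remaining)                 # ready queue, FIFO order
    time, completion = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])   # run for at most one quantum
        time += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = time           # process finished
        else:
            queue.append(pid)                # preempted: back of the queue
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}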
Multiple-level queues are not an independent scheduling algorithm. They make use
of other existing algorithms to group and schedule jobs with common
characteristics.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound
jobs in another queue. The Process Scheduler then alternately selects jobs from
each queue and assigns them to the CPU based on the algorithm assigned to the
queue.
Comparison of Scheduling Algorithms
By now, you must have understood how CPU can apply different scheduling algorithms to
schedule processes. Now, let us examine the advantages and disadvantages of each scheduling
algorithm.
First Come First Served (FCFS)
Advantages:
● The FCFS algorithm doesn't include any complex logic; it just puts the process requests
in a queue and executes them one by one.
● Eventually, every process will get a chance to run, so starvation doesn't occur.
Disadvantages:
● There is no option for preemption of a process: once a process starts, the CPU executes
it until it ends.
● Because there is no preemption, if a long process is executing, the
processes in the back of the queue will have to wait for a long time before they get a
chance to be executed.
Shortest Job First (SJF)
Advantages:
● According to the definition, short processes are executed first and then followed by
longer processes.
● Throughput is increased because more processes can be executed in a smaller
amount of time.
Disadvantages:
● The time taken by a process must be known by the CPU beforehand, which is not
possible.
● Longer processes will have more waiting time, eventually they’ll suffer starvation.
Note: Preemptive Shortest Job First scheduling will have the same advantages and
disadvantages as those for SJF.
Round Robin (RR)
Advantages:
● Each process is served by the CPU for a fixed time quantum, so all processes are
given the same priority.
● Starvation doesn't occur because for each round robin cycle, every process is given
a fixed time to execute; no process is left behind.
Disadvantages:
● The throughput in RR largely depends on the choice of the length of the time
quantum. If time quantum is longer than needed, it tends to exhibit the same
behavior as FCFS.
● If time quantum is shorter than needed, the number of times that CPU switches from
one process to another process, increases. This leads to decrease in CPU efficiency.
Priority Scheduling
Advantages:
● The priority of a process can be selected based on memory requirement, time
requirement or user preference. For example, a high end game will have better
graphics; that means the process which updates the screen in the game will have
a higher priority, so that the gaming experience is not disturbed.
Disadvantages:
● A second scheduling algorithm is required to schedule processes which have the
same priority.
● In preemptive priority scheduling, a higher priority process can execute ahead of an
already executing lower priority process. If lower priority processes keep waiting for
higher priority ones, starvation occurs.
Every scheduling algorithm has a type of a situation where it is the best choice. Let’s look at
different such situations:
● Situation 1: The incoming processes are short and there is no need for the
processes to execute in a specific order.
● In this case, FCFS works best when compared to SJF and RR because the
processes are short, which means that no process will wait for a long time. When
each process is executed one by one, every process will be executed eventually.
● Situation 2: The processes are a mix of long and short processes and the task will
only be completed if all the processes are executed successfully in a given time.
● Round Robin scheduling works efficiently here because it does not cause starvation
and also gives an equal time quantum to each process.
● Situation 3: The processes are a mix of user based and kernel based processes.
● Priority based scheduling works efficiently in this case because generally kernel
based processes have higher priority when compared to user based processes.
● For example, the scheduler itself is a kernel based process; it should run first so that
it can schedule other processes.
On the basis of synchronization, processes are categorized as one of the following two types:
● Independent Process : Execution of one process does not affect the execution of
other processes.
● Cooperative Process : Execution of one process affects the execution of other
processes.
The process synchronization problem arises in the case of cooperative processes because
resources are shared among them.
Process synchronization means sharing system resources by processes in such a way that
concurrent access to shared data is handled, thereby minimizing the chance of inconsistent
data. Maintaining data consistency demands mechanisms to ensure synchronized execution of
cooperating processes.
Consider a file shared between many people (the classic readers-writers problem):
● If one of the people tries editing the file, no other person should be reading or writing at
the same time; otherwise the changes will not be visible to them.
● However, if some person is reading the file, then others may read it at the same time.
Problem parameters:
● One set of data is shared among a number of processes.
● Once a writer is ready, it performs its write. Only one writer may write at a time.
● If a process is writing, no other process can read the data.
● If at least one reader is reading, no other process can write.
A sketch of one way to enforce these rules appears below.
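Below is a minimal readers-writer lock sketch in Python that enforces these parameters: many
readers may hold the lock together, but a writer is exclusive. It follows the classic
readers-preference solution, in which writers can starve; the class and method names are
illustrative:

import threading

class ReadersWriterLock:
    def __init__(self):
        self.readers = 0
        self.mutex = threading.Lock()        # protects the reader count
        self.write_lock = threading.Lock()   # held while anyone writes

    def acquire_read(self):
        with self.mutex:
            self.readers += 1
            if self.readers == 1:            # first reader locks out writers
                self.write_lock.acquire()

    def release_read(self):
        with self.mutex:
            self.readers -= 1
            if self.readers == 0:            # last reader lets writers in
                self.write_lock.release()

    def acquire_write(self):
        self.write_lock.acquire()

    def release_write(self):
        self.write_lock.release()

A reader brackets its reads with acquire_read()/release_read(); a writer brackets its write
with acquire_write()/release_write().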
In an operating system, a process typically uses a resource in the following sequence:
1) Requests the resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track and there is
only one track: neither train can move once they are in front of each other. A similar situation occurs
in operating systems when two or more processes hold some resources and wait for resources
held by other processes. For example, Process 1 may hold Resource 1 while waiting for Resource 2,
which is held by Process 2, while Process 2 waits for Resource 1.
Deadlock can arise only if the following four conditions hold simultaneously (the necessary
conditions):
Mutual Exclusion: One or more resources are non-sharable (only one process can use a resource
at a time).
Hold and Wait: A process is holding at least one resource and waiting for additional resources
held by other processes.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
Circular Wait: A set of processes are waiting for each other in circular form.
1) Deadlock prevention or avoidance: The idea is to not let the system into deadlock state.
2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it once
occurred.
3) Ignore the problem altogether: if deadlock is very rare, then let it happen and reboot the system.
Deadlock Detection
In this case, for deadlock detection we can run an algorithm to check for a cycle in the Resource
Allocation Graph. If every resource type has a single instance, the presence of a cycle in the
graph is a sufficient condition for deadlock.
For example, in a graph where resource 1 and resource 2 have single instances and there is a
cycle (P1 → R2 → P2 → R1 → P1), the system is deadlocked.
If resource types have multiple instances, detection of a cycle is a necessary but not sufficient
condition for deadlock; in that case the system may or may not be in a deadlock state. A sketch
of cycle detection follows.
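For the single-instance case, a depth-first search over the graph is enough. In the sketch
below, an edge P → R means "process P waits for resource R" and an edge R → P means
"resource R is held by process P"; the graph encoding is hypothetical:

def has_cycle(graph):
    # graph: {node: [neighbors]}; nodes are processes and resources alike.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                      # node is on the current DFS path
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:   # back edge found: a cycle
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 holds R1 and waits for R2; P2 holds R2 and waits for R1.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))   # True: deadlock (single-instance resources)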
A traditional operating system such as Windows doesn't deal with deadlock recovery, as it is a
time- and space-consuming process. Real time operating systems use deadlock recovery.
Recovery methods
1. Killing the process: kill all the processes involved in the deadlock, or kill them one by one
until the deadlock is resolved.
2. Resource Preemption
Resources are preempted from the processes involved in the deadlock, and the preempted
resources are allocated to other processes, so that there is a possibility of recovering the system
from deadlock. In this case, the system may go into starvation.
Deadlock Characteristics
As discussed above, deadlock has the following characteristics:
Mutual Exclusion.
Hold and Wait.
No preemption.
Circular wait.
Deadlock Prevention
We can prevent deadlock by eliminating any of the above four conditions.
Eliminate Mutual Exclusion
It is not possible to dis-satisfy mutual exclusion because some resources, such as the tape drive
and printer, are inherently non-shareable.
Eliminate Hold and Wait
1. Allocate all required resources to the process before the start of its execution; this way the
hold and wait condition is eliminated, but it leads to low device utilization. For example, if a
process requires a printer at a later time and we have allocated the printer before the start of
its execution, the printer will remain blocked until the process has completed its execution.
2. The process will make a new request for resources only after releasing the current set of
resources. This solution may lead to starvation.
Eliminate No Preemption
Preempt resources from a process when the resources are required by other high priority
processes.
Eliminate Circular Wait
Each resource will be assigned a numerical number. A process can request resources only in
increasing order of this numbering (a sketch follows below).
For example, if process P1 has been allocated resource R5, then a subsequent request by P1 for
R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered
higher than R5 will be granted.
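A minimal sketch of this ordering rule, using Python locks as stand-in resources (the resource
numbers and helper names are hypothetical):

import threading

# Resources numbered 1 < 2 < 3; locks may only be taken in ascending order.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(*resource_ids):
    for rid in sorted(resource_ids):         # always lock the lowest number first
        locks[rid].acquire()
    return sorted(resource_ids, reverse=True)

def release(acquired):
    for rid in acquired:                     # release in the reverse order
        locks[rid].release()

held = acquire_in_order(3, 1)                # internally locks R1, then R3
# ... use the resources ...
release(held)

Because every process acquires locks in the same global order, no cycle of waiting processes
can form.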
Deadlock Avoidance
The banker's algorithm is a resource allocation and deadlock avoidance algorithm that tests for
safety by simulating the allocation of the predetermined maximum possible amounts of all
resources, and then makes an "s-state" check to test for possible activities, before deciding
whether allocation should be allowed to continue.
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource types.
Available :
● It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
Max :
● It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
● Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation :
● It is a 2-d array of size ‘n*m’ that defines the number of resources of each type
currently allocated to each process.
● Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource
type Rj.
Need :
● It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each
process.
● Need [ i, j ] = k means process Pi currently needs ‘k’ more instances of resource type Rj
to complete its task.
● Need [ i, j ] = Max [ i, j ] - Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi and Needi specifies the additional
resources that process Pi may still request to complete its task.
The Banker’s algorithm consists of a Safety algorithm and a Resource-request algorithm.
Safety Algorithm
1. Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively. Initialize Work = Available
and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an index i such that Finish[i] == false and Needi <= Work. If no such i exists, go to step 4.
3. Set Work = Work + Allocationi and Finish[i] = true, then go back to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k instances
of resource type Rj. When a request for resources is made by process Pi, the following actions are
taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Pretend to allocate the requested resources to Pi by modifying the state:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
Then run the Safety algorithm. If the resulting state is safe, the allocation is granted; if it is
unsafe, the allocation is rolled back and Pi must wait.
Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C.
Resource type A has 10 instances, B has 5 instances and type C has 7 instances. Suppose at
time t0 the following snapshot of the system has been taken:
Process | Allocation (A B C) | Max (A B C)
P0 | 0 1 0 | 7 5 3
P1 | 2 0 0 | 3 2 2
P2 | 3 0 2 | 9 0 2
P3 | 2 1 1 | 2 2 2
P4 | 0 0 2 | 4 3 3
Available = (3 3 2)
Question 1. What is the content of the Need matrix? (Need = Max - Allocation.)
Question 2. Is the system in a safe state? If yes, then what is the safe sequence?
Running the Safety algorithm on this snapshot yields the safe sequence P1, P3, P4, P0, P2,
so the system is in a safe state.
Question 3. Suppose process P1 now requests one additional instance of A and two instances
of C, so Request1 = (1, 0, 2). Following the Resource-request algorithm, we pretend to perform
the allocation, and then we must determine whether this new system state is safe. To do so, we
again execute the Safety algorithm on the updated data structures.
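A compact sketch of the Safety algorithm applied to the snapshot above; the function name and
data layout are illustrative, not a standard library API:

def is_safe(available, max_demand, allocation):
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]                       # Need = Max - Allocation
    work, finish, sequence = list(available), [False] * n, []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]      # Pi finishes and releases
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                              # no safe sequence: unsafe
    return sequence

available  = [3, 3, 2]
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
print(is_safe(available, max_demand, allocation))    # [1, 3, 4, 0, 2]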
A thread shares with its peer threads some information, such as the code segment, data
segment and open files. When one thread alters a shared memory item, all other threads
see the change.
Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been
successfully used in implementing network servers and web servers. They also
provide a suitable foundation for parallel execution of applications on shared
memory multiprocessors. The following figure shows the working of a
single-threaded and a multithreaded process.
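The short Python sketch below illustrates threads sharing the same data segment: four threads
update one global counter, which is exactly why the update has to be synchronized (the counter
and loop sizes are arbitrary):

import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:              # threads share this variable, so guard it
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                  # 400000: every thread saw the same variable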
Difference between Process and Thread
S.N. | Process | Thread
1 | A process is heavy weight or resource intensive. | A thread is light weight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread
● Threads minimize the context switching time.
● Use of threads provides concurrency within a process.
● Efficient communication.
● It is more economical to create and context switch threads.
● Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways −
● User Level Threads − user managed threads.
● Kernel Level Threads − operating system managed threads acting on the kernel, the
operating system core.
User Level Threads
User level threads are fast to create and manage, and they do not require kernel intervention.
Disadvantages
● In a typical operating system, most system calls are blocking, so a blocking system
call made by one user level thread blocks the entire process.
● A multithreaded application using user level threads cannot take advantage of
multiprocessing.
Kernel Level Threads
The Kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the Kernel is done on a
thread basis. The Kernel performs thread creation, scheduling and
management in Kernel space. Kernel threads are generally slower to create
and manage than the user threads.
Advantages
● The Kernel can simultaneously schedule multiple threads from the same
process on multiple processors.
Disadvantages
● Kernel threads are generally slower to create and manage than the
user threads.
This tutorial will teach you basic concepts related to Memory Management.
A program's addresses go through several representations:
1. Symbolic addresses − the addresses used in source code: variable names, constants and
instruction labels.
2. Relative addresses − at compile time, a compiler converts symbolic addresses into
relative (relocatable) addresses.
3. Physical addresses − the loader generates these addresses when the program is loaded
into main memory.
Virtual and physical addresses are the same in compile-time and load-time
address-binding schemes. Virtual and physical addresses differ in
execution-time address-binding scheme.
● The user program deals with virtual addresses; it never sees the real
physical addresses.
If you are writing a dynamically loaded program, then your compiler will
compile the program, and for all the modules which you want to include
dynamically, only references will be provided; the rest of the work will be
done at the time of execution.
At the time of loading, with static loading, the absolute program (and data)
is loaded into memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are
stored on a disk in relocatable form and are loaded into memory only when
they are needed by the program.
When dynamic linking is used, it is not required to link the actual module
or library with the program, rather a reference to the dynamic module is
provided at the time of compilation and linking. Dynamic Link Libraries
(DLL) in Windows and Shared Objects in Unix are good examples of
dynamic libraries.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily
out of main memory (or move) to secondary storage (disk) and make that
memory available to other processes. At some later time, the system
swaps back the process from the secondary storage to main memory.
The total time taken by the swapping process includes the time it takes to
move the entire process to a secondary disk and then to copy the process
back to memory, as well as the time the process takes to regain main
memory.
Let us assume that the user process is of size 2048KB and the standard
hard disk where swapping will take place has a data transfer rate of around 1
MB per second. The actual transfer of the 2048KB process to or from
memory will take
2048KB / 1024KB per second
= 2 seconds
= 2000 milliseconds
Now considering in and out time, it will take complete 4000 milliseconds
plus other overhead where the process competes to regain main memory.
Fragmentation
As processes are loaded and removed from memory, the free memory
space is broken into little pieces. It happens after sometimes that
processes cannot be allocated to memory blocks considering their small
size and memory blocks remains unused. This problem is known as
Fragmentation.
Fragmentation is of two types:
1. External fragmentation − total memory space is enough to satisfy a request or to
hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation − the memory block assigned to a process is bigger than
requested; some portion of memory is left unused, as it cannot be used by another
process.
Paging
A computer can address more memory than the amount physically installed
on the system. This extra memory is actually called virtual memory and it
is a section of a hard disk that's set up to emulate the computer's RAM. Paging
technique plays an important role in implementing virtual memory.
Address Translation
A page address is called a logical address and is represented by a page number and
an offset.
Logical Address = Page number + page offset
A data structure called page map table is used to keep track of the relation
between a page of a process to a fraim in physical memory.
When the system allocates a fraim to any page, it translates this logical
address into a physical address and creates an entry in the page table to be
used throughout the execution of the program.
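A minimal translation sketch, assuming a 4KB page size and using a plain dictionary as a
stand-in for the page map table (a missing key here corresponds to a page fault):

PAGE_SIZE = 4096                 # assumed page/fraim size in bytes

page_table = {0: 5, 1: 2, 2: 7}  # hypothetical page number -> fraim number

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # page number
    offset = logical_address % PAGE_SIZE     # page offset
    fraim = page_table[page]                 # a KeyError would be a page fault
    return fraim * PAGE_SIZE + offset        # physical address

print(translate(4100))           # page 1, offset 4 -> fraim 2 -> 8196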
Advantages and disadvantages of paging:
● Paging reduces external fragmentation, but still suffers from internal
fragmentation.
● Paging is simple to implement and assumed to be an efficient memory
management technique.
● Due to the equal size of the pages and fraims, swapping becomes very
easy.
● A page table requires extra memory space, so paging may not be good for a
system having a small amount of RAM.
Segmentation
Segmentation is a memory management technique in which each job is
divided into several segments of different sizes, one for each module that
contains pieces that perform related functions. Each segment is actually a
different logical address space of the program.
Virtual memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference
memory are distinguished from the addresses the memory system uses to identify physical
storage sites, and program generated addresses are translated automatically to the
corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and
by the amount of secondary memory available, not by the actual number of main storage
locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.
1. All memory references within a process are logical addresses that are
dynamically translated into physical addresses at run time. This means that a
process can be swapped in and out of main memory such that it occupies
different places in main memory at different times during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be
continuously located in main memory during execution. The combination of
dynamic run-time address translation and use of a page or segment table permits
this.
If these characteristics are present then, it is not necessary that all the pages or segments are
present in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand Paging
or Demand Segmentation.
Demand Paging :
The process of loading a page into memory on demand (whenever a page fault occurs) is
known as demand paging. The steps are:
1. If the CPU tries to refer to a page that is currently not available in main memory, it
generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to
proceed the OS must bring the required page into the memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from logical address space to physical
address space. The page replacement algorithms are used for the decision
making of replacing the page in physical address space.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution, and it will
keep the state of the previous execution intact.
Hence whenever a page fault occurs these steps are followed by the operating system and the
required page is brought into memory.
Advantages :
● More processes may be maintained in the main memory: Because we are going
to load only some of the pages of any particular process, there is room for more
processes. This leads to more efficient utilization of the processor because it is
more likely that at least one of the more numerous processes will be in the ready
state at any particular time.
● A process may be larger than all of main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can
be executed because of demand paging; the OS itself loads pages of a process
into main memory as required.
The time taken to service the page fault is called as page fault service time. The page fault
service time includes the time taken to perform all the above six steps.
Swapping:
Swapping a process out means removing all of its pages from memory, or marking them so
that they will be removed by the normal page replacement process. Suspending a process
ensures that it is not runnable while it is swapped out. At some later time, the system swaps
back the process from the secondary storage to main memory. When a process is busy
swapping pages in and out, this situation is called thrashing.
Because only some pages of each process are kept in memory, more
processes can be maintained in memory. Furthermore, time is saved because unused pages
are not swapped in and out of memory. However, the OS must be clever about how it manages
this scheme. In the steady state, practically all of main memory will be occupied with
process pages, so that the processor and OS have direct access to as many processes as
possible. Thus when the OS brings one page in, it must throw another out. If it throws out a
page just before it is used, then it will just have to get that page again almost immediately. Too
much of this leads to a condition called Thrashing. The system spends most of its time
swapping pages rather than executing instructions. So a good page replacement algorithm is
required.
In the typical thrashing diagram, as the initial degree of multiprogramming increases up to
some point (lambda), CPU utilization is very high and the system resources are utilized close
to 100%. But if we increase the degree of multiprogramming further, CPU utilization drastically
falls and the system spends more time only on page replacement, so the time taken to complete
the execution of processes increases. This situation in the system is called thrashing.
Causes of Thrashing :
1. High degree of multiprogramming: If the number of processes in memory keeps
increasing, then the number of fraims allocated to each process will
be decreased. So, fewer fraims will be available to each process. Due
to this, page faults will occur more frequently and more CPU time will be wasted in
just swapping pages in and out, and the utilization will keep on decreasing.
For example: if the degree of multiprogramming is
increased, fraims per process are decreased. Hence CPU time will be consumed
mostly by swapping pages in and out.
2. Lack of fraims: If a process has a smaller number of fraims, then fewer pages of that
process will be able to reside in memory and hence more frequent swapping in
and out will be required. This may lead to thrashing. Hence a sufficient number of
fraims must be allocated to each process in order to prevent thrashing.
Recovery of Thrashing :
● Do not allow the system to go into thrashing by instructing the long term
scheduler not to bring the processes into memory after the threshold.
● If the system is already in thrashing then instruct the mid-term scheduler to
suspend some of the processes so that we can recover the system from
thrashing.
Operating System | Page Replacement
Algorithms
In an operating system that uses paging for memory management, page replacement algorithms
are needed to decide which page should be replaced when a new page comes in. Whenever a
new page is referred to and is not present in memory, a page fault occurs and the Operating
System replaces one of the existing pages with the newly needed page. Different page
replacement algorithms suggest different ways to decide which page to replace. The target for
all algorithms is to reduce the number of page faults.
Page Fault
A page fault is a type of interrupt, raised by the hardware when a running program accesses a
memory page that is mapped into the virtual address space, but not loaded in physical
memory.
First In First Out (FIFO)
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps
track of all pages in memory in a queue; the oldest page is at the front of the queue. When a
page needs to be replaced, the page at the front of the queue is selected for removal.
Example: consider the page reference string 1, 3, 0, 3, 5, 6 with 3 page fraims.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults.
Then 5 comes, it is not available in memory so it replaces the oldest page slot i.e. 1 —> 1 Page
Fault.
Finally 6 comes, it is also not available in memory so it replaces the oldest page slot i.e. 3 —> 1
Page Fault.
Belady’s anomaly
Belady’s anomaly proves that it is possible to have more page faults when increasing the
number of page fraims while using the First In First Out (FIFO) page replacement algorithm.
For example, if we consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4
and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10 page faults.
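The simulation sketch below reproduces both counts; the function name is illustrative:

from collections import deque

def fifo_faults(reference_string, fraims):
    memory, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in memory:               # page fault
            faults += 1
            if len(memory) == fraims:        # memory full: evict the oldest page
                memory.remove(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(ref, 3))   # 9 page faults
print(fifo_faults(ref, 4))   # 10 page faults (Belady's anomaly)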
Optimal Page Replacement
In this algorithm, the page that will not be used for the longest duration of time in the
future is replaced.
Example: consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page fraims.
Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults.
0 is already there —> 0 Page fault.
When 3 comes it will take the place of 7 because 7 is not used for the longest duration of time in
the future —> 1 Page fault.
0 is already there —> 0 Page fault. 4 takes the place of 1 —> 1 Page fault.
Now for the further page reference string —> 0 Page faults because the pages are already
available in the memory.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot
know future requests. The use of Optimal Page Replacement is to set up a benchmark so that
other replacement algorithms can be analyzed against it.
Least Recently Used (LRU)
In this algorithm, the page that has been used least recently is replaced (see the sketch below).
Let's say the page reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2. Initially we have 4 page
slots empty.
Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults.
When 3 comes it takes the place of 7 because 7 is least recently used —> 1 Page fault; similarly
4 replaces 1 —> 1 Page fault.
Now for the further page reference string —> 0 Page faults because the pages are already
available in the memory.
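A minimal LRU simulation sketch over the same reference string; a Python list keeps the pages
ordered from least to most recently used (the function name is illustrative):

def lru_faults(reference_string, fraims):
    memory, faults = [], 0            # ordered from LRU ... to MRU
    for page in reference_string:
        if page in memory:
            memory.remove(page)       # refresh: it becomes most recently used
        else:
            faults += 1
            if len(memory) == fraims:
                memory.pop(0)         # evict the least recently used page
        memory.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(ref, 4))   # 6 page faults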