Kud Notes



Operating Systems
Chapter 1
Introduction

1) Define Operating Systems. [2011, 2012, 2013, 2014, 2015, 2016]


Ans:- An Operating System is system software that manages the software and hardware resources of a computer.
Ex: Windows, Linux, iOS
Software resources:- memory management, process management, file management etc.
Hardware resources:- HDD, RAM, I/O devices.

2) Explain different (various) operating systems. [2011, 2012]


A) Distributed System
- This allows a number of computers at geographically separate locations to work co-operatively. Data processing is divided among different machines, each having its own operating system.
- This represents a configuration wherein several workstation PCs, called intelligent nodes (sites), are interconnected with mainframe and mini computers via a communication network; this is called a distributed computing environment.

Major Objectives of D.S.:

i) Resource sharing:- a user at one site may be able to access the available resources of another site.
ii) Load sharing/balancing:- an overloaded site's computational tasks can be shifted to and executed by other remote sites having little or no workload at present.
iii) Reliable operations:- failure of one site need not affect other sites, and the remaining sites can continue operating.
iv) Communication:- processes executing at different sites can communicate to exchange any useful info.

B) Real Time Distributed System: [2011, 2012]

- The real time refers to the actual time in which an event takes place. Real time computer systems are loaded with an RTOS, a real time O.S.
- Here, time bounds are maintained between the system and the O.S.
- RTOS is designed to manage rigid and/or precise time requirements of critical applications.
- Real time systems are required to manage time-bound activities in applications such as monitoring a satellite to be placed in space orbit.
- Real time systems are able to perform correct computations within given time limits.
- Real time systems are required to be highly reliable, for which they have to comply with fixed time restrictions.
EX: Railway Reservation systems, Rocket Launching


3) Explain Different Operating System Services in detail. [2013, 2014, 2015, 2016]

Ans:- i) Program execution (Run time execution):- Accomplish the task of loading a program into main memory partitions. Initiate program execution. Provide for normal termination of the program after successful execution.
ii) I/O operations:- Accomplish the task of device allocation and control of I/O devices. Provide for notifying errors, device status etc.
iii) File system manipulation (Handling):- Accomplish the task of opening a file, closing the file etc., and provide for creating and deleting files.
iv) Communications:- Accomplish the task of inter-process communications, either on the same computer system or between different computer systems on a computer network. Provide for message passing and shared memory access in safe mode.
v) Accounting:- Accomplish the task of record keeping of the system usage: by how many users and for how long (duration), for billing and accounting purposes. Maintain a log of system activities for performance analysis and error recovery.
vi) Error detection:- Accomplish the task of error detection and recovery, if any. For instance, a paper jam on the printer. Keep track of CPU, memory, I/O devices, storage devices, file system, networking etc. Report and/or deliver error messages in case of arithmetic overflow, divide-by-zero errors.

vii) Resource Allocation:- Accomplish the task of resource allocation to multiple jobs. Reclaim the allocated resources after their use or as and when the job terminates.
viii) Protecting the systems:- Provide for safe computing by employing security schemes against unauthorized access/users. Authenticate legitimate users with login passwords and registrations.

4) Explain Components of Operating Systems in detail. [2011, 2012, 2015, 2016]


Ans:- i) Process management:- Process management refers to managing processes. In process management, the processor will be allocated to a process.
- A process is a running program; it may be in one of the three stages: Ready, Running, Blocked.
- A program (called Job or Task in OS terminology) is a sequence of instructions that are to be executed by the CPU/processor to yield the desired output.
ii) Memory Management:- The major function of an operating system is to manage the resources of computers. Out of these, main memory plays a key role in the operation of a modern computer system.
- Main memory is a repository of run-time programs and the relevant data being accessed and shared by the CPU or processor and I/O devices.
iii) File Management:- A file is a collection of related records about an entity, defined by its creator: an object program, text data, an error report etc.
- e.g. a data file may be a sequence of characters that can be alphanumeric, mnemonic or alphabetic.
- File management is designed to provide a uniform logical view.
- File management allows creation and deletion of a file.
- File management provides for backing up a file to permanent storage for its future use etc.
iv) I/O System management:- It is also referred to as device management or simply I/O management. These devices include keyboard, mouse, joystick, scanner etc.
- All these devices can have different physical characteristics (analog, digital, magnetic).
- These are classified into 2 groups: 1) I/O devices 2) Secondary storage devices.
1) I/O devices:- device drivers to control and/or manage device characteristics, I/O interrupts, I/O traffic controller (program).
2) Secondary storage devices:- these are like magnetic tape, cartridge etc., which back up the expensive main memory for additional storage requirements.
Various functions:
- Allocation of disk space
- Disk scheduling (in case of shared-access storage devices)
- Management of the disk space (avoiding bad blocks of storage space)

v) Network Management:- When computers are interconnected via communication links, we have computer communication networks like LAN, WAN and distributed systems as well.
- Network operating systems (NOS) also manage network routing and control data traffic.

vi) Protecting Systems:- To manage the resources of a computer, an operating system also provides for protection mechanisms.
- With multiprogramming, time-sharing systems might execute a number of processes concurrently.
Various functions of the OS are:
- Provide controlled access to the resources of a computer system
- Provide a mechanism for detecting hidden or dormant errors
- Provide protection schemes to distinguish between authorized and unauthorized access of the resources

vii) Command Interpreter System:- The primitive function of the O.S. is to support the user interface that determines the interaction with programmers, operators or end-users.
- MS-DOS and UNIX OS have this layer called the Shell.
Types of User Interfaces:
a) GUI (graphical user interface):- These are user interfaces in which commands are selected for execution by moving and placing the pointer on tiny graphic images called icons. There is no need to remember and type the command line characters.
b) CUI (command line interface):- Here commands are typed on the keyboard and displayed on the screen at a command prompt.
Ex: c:\> dir

5) Explain Operating System calls and services. [2011, 2012, 2014]

Ans:- The various types of system calls that can be offered by an O.S., under each of these categories:
a) System calls for Process management:-
Create, terminate a process
Load, execute a process
End, abort a process
Allocate, free memory
Wait event, signal event

b) System calls for File management / File manipulation:
Open, close a file
Create, delete a file
Read, write a file
Get file attributes, set file attributes

c) System calls for Device management:
Request, release an I/O device
Attach a device logically (mounting)
Read, write a device

d) System calls for Information maintenance/management:
Get date, time, and system data
Set date, time, system data
Get file, device attributes

e) System calls for Communication management:
Create, delete a communication connection
Send, receive messages
Transfer status information
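The categories above correspond closely to real system-call interfaces. As a rough illustration only (using standard POSIX/UNIX calls, which are not specific to any O.S. named in these notes), the C sketch below touches process management (fork, execlp, wait), file manipulation (open, read, write, close) and information maintenance (getpid, time):

/* Sketch: a few POSIX system calls grouped by the categories above.
   Assumes a UNIX-like system; error handling is kept minimal. */
#include <stdio.h>
#include <unistd.h>     /* fork, execlp, getpid, read, write, close, lseek */
#include <fcntl.h>      /* open */
#include <sys/types.h>
#include <sys/wait.h>   /* wait */
#include <time.h>       /* time */

int main(void)
{
    /* a) Process management: create a child, load a program, wait for it */
    pid_t pid = fork();
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* load & execute a program */
        _exit(1);                                  /* abort child if exec fails */
    }
    wait(NULL);                                    /* wait event: child termination */

    /* b) File manipulation: create/open, write, read back, close */
    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    write(fd, "hello\n", 6);
    lseek(fd, 0, SEEK_SET);
    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf));
    close(fd);

    /* d) Information maintenance: get process ID and current time */
    printf("pid=%d, read=%zd bytes, time=%ld\n",
           (int)getpid(), n, (long)time(NULL));
    return 0;
}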

System services:
i) Language Translators: + Interpreters (for ex: Java interpreter, BASIC interpreter)
+ Compilers (for ex: Turbo C compiler, Borland C++ compiler)
+ Assemblers (for ex: MASM, TASM, assembler for the 8086 microprocessor)

ii) Loaders and Linkers:- for program loading and execution. System programs such as absolute loaders and re-locatable loaders.
- Linkage editors, linkers and overlay loaders help in placing (loading) the assembled and/or compiled code (object program) into main memory and transferring CPU control to the first instruction at which the program execution has to begin.

iii) Text editors for file modification and file manipulation: System programs such as screen editors, line editors, Norton editors and several other text editors.

iv) Application programs (system utilities for common user needs): System programs include applications or utilities such as word processors (MS Word), web browsers (Netscape Navigator, Internet Explorer (IE)), MS Excel, DBMS packages and tools for plotting.

v) System programs for communication and status reporting:- Certain system programs have been developed to support browsing a web page, remote computer login, remote file transfer, sending & receiving messages like e-mail facilities.
Ex: UNIX has such a computing environment.

6) What are Loosely Coupled and Tightly Coupled systems (Parallel systems)? [2013]
Ans:- i) Loosely Coupled:- In this system each & every processor has separate memory (its own memory). The processors can communicate with each other, but individually each processor can directly access only its own 'Local Memory'.
- Loosely Coupled systems employ asymmetric multiprocessing, in which each slave processor (CPU) is assigned a specific task and executes user jobs in parallel, whereas a single master processor (CPU) controls & co-ordinates the activities of the slave processors.
- It is possible to assign several slave processors (CPUs) to execute a single user job in parallel; in such a case the given job is broken into modules.

[Figure 1: Loosely-coupled Asymmetric Multiprocessing architecture of a Parallel System - slave processors (CPUs), each with its own main memory]

- Loosely Coupled systems can have a maximum of up to 256 CPUs, each accessing its own local memory.
ii) Tightly Coupled systems:- In these systems, all processors share a common memory and each processor can concurrently run an identical copy of the operating system, giving rise to Symmetric Multiprocessing (SMP).
- With symmetric multiprocessing, all processors (CPUs) are peers (equal status) and there is no master-slave relationship among the multiprocessors.
[Figure: Tightly Coupled Symmetric Multiprocessing architecture of a Parallel System - CPUs sharing MAIN MEMORY]
- Even a single user job can be executed in parallel, since the processors share common memory and execute simultaneously.
- Tightly Coupled parallel systems can have a maximum of 16 peer CPUs accessing common memory; these are also known as shared-memory parallel computers.

7) Explain concept of Multiprogramming and Time sharing.

Ans:- A Multiprogramming system permits several jobs (programs) to be executed simultaneously. In multiprogramming, several jobs are executed concurrently.
- A multiprogrammed OS offers a convenient run-time environment and other support functions for concurrent execution of jobs (programs).
- A single-job system runs one user job at a time and performs serial processing.
- The general goal of multiprogramming is to make the most efficient use of the computer system.
- Multiprogramming events make the OS take certain decisions for the benefit of users and bring visibility to end-users.

Time Sharing:- It is the logical extension of multiprogramming. It is also referred to as multitasking. Its goal is to provide good response time to interactive sessions of users.
- A time-shared OS offers interactive access to a number of end-users simultaneously.
- Time sharing supports on-line data processing and provides a mechanism for concurrent execution of jobs.
- It is economically feasible due to several users' interactive access and almost no wastage of CPU time.


Functions of time sharing:
- It uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared computer on a time-slice priority basis.
- It provides multitasking and rapid user interaction with their terminals.
- Effective sharing and quick response.
- High degree of user interaction.

8) What are Virtual Machines? Explain the benefits. [2013]


Ans:- The hierarchical structure of an O.S. can be extended to provide the end-user and operating system with an illusion of running on separate extended machines. This extended machine is sometimes referred to as a Virtual Machine. In essence, the resources of the computer system are shared to create virtual machines (VMs).
- A layer of the VM OS called the virtual machine monitor (VMM) exists & encapsulates the real machine hardware in association with the kernel.

[Figure: Virtual Machine architecture - processes and kernels running on VM1, VM2, VM3, above the virtual machine manager (programming interface) and the hardware]

Advantages / Benefits of Virtual Machines:

1) Using separate virtual machines provides a higher degree of protection of the various resources of the system.
2) Virtual Machines allow systems development (R and D) activities to take place without affecting the normal system operation.
3) Different operating systems can run concurrently to serve the varying needs of normal system operation.
4) An example of a virtual machine OS is IBM's VM O.S.
5) This concept gives rise to flexibility.

Operating Systems
Chapter 2
Process Management
1) Define a process. [2011, 2014, 2015]
Ans:- A process can be defined as a program in execution. Sometimes it is referred to as a Task. It represents a user job.
- The process corresponds to a more synthesized version of a job in the OS environment.
- A process is always active or dynamic, as it represents the running state of a program or user job.
- It is an active entity that resides inside main memory, with a finite sequence of steps in execution.
- It can be in any one of the Ready, Running and Blocked states.
2) Explain the process states and operations of processes. [2011, 2014, 2015, 2016]
Process States:
[Figure: Process state diagram - new (admitted) -> ready; ready -> running (scheduler dispatch); running -> ready (interrupt); running -> waiting (I/O or event wait); waiting -> ready (I/O or event completion); running -> terminated (exit)]
The states: 1) Ready 2) Running 3) Terminated 4) Blocked or Waiting
1) Ready:- The process is on its toes to get control over the CPU or processor. That is, a ready process is waiting to be assigned to a processor for continuing its execution.
2) Running:- The process is actually executing its instructions on the CPU or processor.
3) Terminated:- The process has completely executed all its instructions.
4) Blocked or Waiting:- The process must wait for some event to occur.
3) Explain in detail about PCB (process control block). [2014, 2015, 2016]
Ans:-
[Figure: Process Control Block - fields: pointer, process state, process number, program counter, registers, memory limits, list of open files]
- A process switches between the running, ready and blocked states many times, well before its formal termination.
- Each time a particular process leaves the running state and yet does not encounter its final end, its current state or status must be saved for future reference.
a) Process state:- indication of the process state as Ready, Running, Blocked or Waiting etc.
b) Process number:- identifies this process distinctly, as the process ID.
c) Program counter:- it is a register containing the address of the next instruction to be executed of this process.
d) Registers:- a group of registers viz. accumulators (A or AC), index registers, stack pointers, general-purpose registers and condition codes.
e) CPU scheduling information:- it suggests the details about the priority of the process, pointers to ready queues and scheduling.
f) Memory management information:- it suggests the details about memory usage by this process.
g) Accounting information:- it suggests the details such as the allocated CPU time-slice, the real time used, process numbers, time limits and account number etc.
h) I/O device status information:- it suggests the details about allocated devices for this process, list of files opened for this process etc.
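The fields above can be pictured as one record per process. The C struct below is only an illustrative sketch (field names, sizes and types are assumptions, not taken from any particular kernel) of how the PCB information might be grouped:

/* Illustrative sketch of a Process Control Block; all field names are assumed. */
typedef enum { READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int            process_id;          /* b) process number (PID)               */
    proc_state_t   state;               /* a) process state                      */
    unsigned long  program_counter;     /* c) address of next instruction        */
    unsigned long  registers[16];       /* d) saved general-purpose registers    */
    int            priority;            /* e) CPU scheduling information         */
    unsigned long  mem_base, mem_limit; /* f) memory-management information      */
    unsigned long  cpu_time_used;       /* g) accounting information             */
    int            open_files[32];      /* h) I/O status: open file descriptors  */
    struct pcb    *next;                /* link used by ready/waiting queues     */
};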
4) What is a Scheduler? Explain different types of Schedulers. [2013, 2014, 2015, 2016]
Ans:- Processes transit between various scheduling queues (job queue, ready queue) while switching across the running, ready and waiting (blocked) states.
- It is the job of the process management component of the OS to select an appropriate process from these queues (called scheduling) and transfer the CPU control to such a selected process (called dispatching).
- The sole responsibility of selecting a process (for execution) in accordance with some
scheduling policy-is being assigned to an OS module or to a component of process
management called the Scheduler.

3 types of scheduler programs:

1) Long-term scheduler (Job scheduler)
2) Short-term scheduler (CPU scheduler or process scheduler)
3) Medium-term scheduler

1) Long-term scheduler:- is also referred to as the Job Scheduler. The long-term scheduler loads, each time, a new job or a program (from a hard disk) into main memory for execution. The frequency of loading programs depends heavily on the availability of space in main memory.
- In other words, long-term scheduling carries out the selection of processes that could in turn be allowed to contend for processor assignment.

2) CPU-bound process v/s I/O-bound process:- A CPU-bound process has minimal I/O request instructions, so that the process spends its allocated time-slice almost completely with the physical processor in order to carry out the numerical computations.
- Whereas an I/O-bound process has maximum I/O request instructions, so that the process spends more of its allocated time-slice in doing I/O operations rather than carrying out computations, and is rarely with the physical processor.

3) Short-term scheduler:- is also referred to as the CPU scheduler or process scheduler.
- The short-term scheduler actually selects an appropriate process from among the already loaded processes in main memory, and allocates the CPU for execution.
of
4) Medium-term scheduler:- gives rise to an intermediate level of scheduling, in case of time-sharing systems like UNIX OS based systems.
- Accordingly, medium-term schedulers swap out (to hard disks) some partially executed processes from main memory and later send them back to main memory partitions.
- Swapping eliminates the overhead introduced due to a high degree of multiprogramming and allows for a good mixture of CPU-bound and I/O-bound processes in the freed main memory.
5) What are Co-operating Processes? Explain briefly. [2011, 2012, 2014, 2016]

Ans:- The processes executing in the operating system environment can be classified into two types:
1. Independent process
2. Co-operating process
1) Independent process:- It does not share any data with any other process, and it will neither affect nor be affected by any other executing processes in the system.
2) Co-operating process:- It shares data with other processes and can directly share logical address space in memory.
- Cooperating processes communicate with each other via two communication schemes: shared memory and message systems.

Co-operating processes extend the following benefits:

1) Information sharing:- several users can access a shared file simultaneously.
2) Computational speed-up:- parallel execution of sub-tasks in multiprocessor systems.
3) Modularity:- modular design of system functions as distinct processes or separate threads.
4) Convenience:- the user can perform, at his/her convenience, functions such as editing a file, printing a file, program compilation etc., in an overlapped manner.

6) Explain Inter-process Communication? [2011, 2012, 2013, 2014, 2016]

Ans:- Co-operating processes must communicate with each other, for which the operating system provides a mechanism called the Inter-process Communication facility (IPC).
- The IPC mechanism allows co-operating processes to communicate with each other and synchronize their actions without sharing the same address space.
Two types:
1) Shared Memory System:- In this scheme, cooperating processes exchange information via shared memory variables, and the OS provides the shared memory.
2) Message System:- In this scheme, cooperating processes communicate with each other by exchanging messages (message-passing system). In this case, the OS has to bear the responsibility of providing the necessary communication among cooperating processes.
- The IPC scheme is quite helpful in a distributed computing environment, wherein the communicating processes are spread across geographically distributed computers (intelligent nodes) and connected over network links.
- Also, the IPC mechanism is best supported by message systems and message passing.
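As a small hedged illustration of the message-system style of IPC (using a UNIX pipe, which is just one possible mechanism and is not named in these notes), the C sketch below lets a parent and child process exchange a message without sharing address space:

/* Sketch: message-passing IPC between parent and child via a pipe.
   Assumes a UNIX-like system; minimal error handling. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* child process acts as the sender */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                    /* parent process acts as the receiver */
    char buf[64];
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}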

7) Explain CPU scheduling Criteria. [2011, 2012, 2015]

Ans:- CPU scheduling in general depends upon the system of priorities.
a) CPU Utilization:- The primary objective is to keep the CPU or processor the busiest one among all other resources of a computer system.
- The idle time of the CPU can be minimized to almost zero in case of multiprogramming and time-sharing systems.
- Typically, CPU utilization may range from 0% to 100%, but in most practical cases it would be between 40% to 90% only.

b) Throughput:- Suggests that the CPU must perform the maximum computational tasks in the shortest interval of time.
- If the CPU-bound process is complex and time-consuming, then the system throughput may be just 1 or 2 process completions per unit of time.
- However, the throughput rate may increase up to 10 or 20 process completions per unit of time, provided the processes involved in execution are simple and short.
c) Turnaround Time:- Defines the time elapsed between the time of submission of a process or job by a user and the time of completion of that process or job.
- Turnaround time also includes waiting time in the ready queue, time spent over I/O completion, waiting to get into memory, waiting time for child process completion etc.

d) Response Time:- Defines the time it takes to commence responding to a request or a command. It is not the time interval to complete the task from the point of submission.
- The best criterion for an interactive time-sharing system is to minimize the expected response time.

e) Waiting Time:- Suggests the time spent while waiting for I/O completion, waiting for the CPU in the ready queue etc. The sum total of such time quanta expended while waiting for various subsidiary events to occur or to complete contributes towards increasing turnaround time.

8) Explain Scheduling Algorithms in brief. [2011, 2012, 2015]

Ans:- CPU scheduling, or interchangeably process scheduling, is the activity that governs 'switching CPU control' among various competing processes in accordance with some scheduling policies.
- The scheduling algorithms deal with the problem of choosing one process at a time from the ready queue, so as to allocate the CPU to it, based on certain considerations such as shortest job, first-come first-served, round robin, system of priorities.
- We shall list out a few scheduling algorithms to be discussed as follows:

a) FCFS - First-Come, First-Served Scheduling:- Name itself indicates, the process which comes first, i.e. which is at the head of the ready queue, will be allocated the CPU.
- Whenever a process enters the ready queue and requests the CPU, its PCB is linked at the end or tail of the queue.
- Assuming the processes enter the ready queue in the sequential order p1, p2, p3..., they are allocated the CPU in the FCFS order.

[Gantt chart: processes executed in FCFS order, with completion times at 26 and 28 ms]
- So the Gantt chart suggests that the waiting time for the first process is 0 milliseconds.
- So, the average waiting time is = 16 milliseconds.
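Average-waiting-time figures like the one above can be reproduced mechanically. The C sketch below uses made-up sample burst times (they are not the values behind the notes' Gantt chart) to compute per-process waiting times and the average under FCFS:

/* Sketch: average waiting time under FCFS for assumed sample burst times. */
#include <stdio.h>

int main(void)
{
    int burst[] = {10, 6, 8, 4};              /* assumed CPU burst times in ms */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {             /* processes served in arrival order */
        printf("P%d waits %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                     /* next process waits for all earlier bursts */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}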

b) SJF - Shortest Job First Scheduling:- Name itself indicates, the process in the ready queue having the 'shortest next CPU burst time' will be allocated the CPU.

- This would help in selecting the shortest job for allocating the CPU next, in case of long-term job scheduling in a batch system.
- Although the processes enter in the sequential order p1, p2, p3, p4... like in FCFS scheduling, the processes will be executed in the order p4, p1, p3, p2...
- Now the respective waiting times for individual processes in the ready queue waiting to get CPU assignment are shown below:

[Gantt chart: processes executed in the order P4, P1, P3, P2, with ticks at 16 and 26 ms]
- Average waiting time is = (0+2+8+16)/4 = 6.5 milliseconds.

c) PR - Priority Scheduling:- Name itself indicates, the participating processes will be assigned priorities, and each time the process with the highest priority will be allocated the CPU. Ties are broken as per FCFS scheduling.
- Accordingly, the respective waiting times for individual processes in the ready queue waiting to get CPU assignment are shown using a Gantt chart as follows:
[Gantt chart: processes with priorities PR:1 to PR:5, with ticks at 6, 11, 19 and 22 ms]
- Now the average waiting time is = (0+2+6+11+19)/5 = 7.6 milliseconds.

Conclusion:
- Priority scheduling can be implemented as either a preemptive or non-preemptive algorithm.
- Priority can be defined as either internal or external priority.


d) RR - Round-Robin Scheduling: [2014]
- Name itself indicates, Round Robin Scheduling suggests the CPU scheduler go round the ready queue and allocate the CPU to each process on a FIFO basis, for a fixed time quantum.
- Working of Round Robin Scheduling: The Round Robin Scheduling algorithm is preemptive. Accordingly, if the currently running process has a longer CPU burst time exceeding the time quantum, it is preempted and placed back in the ready queue.
[Gantt chart: processes time-sliced in round-robin order, with ticks at 2, 17, 21 and 22 ms]
- So, the average waiting time for RR scheduling is = 7 milliseconds.

Advantages of RR scheduling: 1) Suitable for time-sharing systems and multi-user systems.
2) Offers good response and turnaround time.

e) MLQ - Multilevel Queue Scheduling: [2013, 2015]
- MLQ scheduling partitions the ready queue of processes into many different queues based on certain considerations, such as foreground or interactive processes and batch processes.
- MLQ scheduling algorithms permit different scheduling schemes to be employed for several different classes of processes.
- Further, the MLQ scheduling scheme permits scheduling among these different ready queues and is implemented by a 'fixed-priority preemptive' scheduling algorithm.

Advantages of MLQ scheduling algorithms:- 1) Low scheduling overhead
2) Fixed priority scheduling
3) Each queue has its own scheduling algorithm
f) MLFQ - Multilevel Feedback Queue Scheduling:- This allows processes to switch from one queue type to another.
- If a process belonging to a higher-priority queue starts consuming too much CPU time, it can be shifted down to a lower-priority queue.
- Design parameters include the number of queues, the type of scheduling algorithm used for each queue type, and the method used to decide when to upgrade a low-priority process to a higher-priority queue.

Advantages:- 1) Most flexible CPU scheduling algorithm.
2) It permits processes to move between queues belonging to various priorities.
9) What are Threads? [2015]
Ans:- A thread represents a small execution code segment associated with a process.
- A thread is a light-weight process (LWP). In its primitive form, a thread represents the basic unit of CPU utilization.
- Each thread has one program counter and one sequence of instructions that can be carried out at any given time.
- As each thread has its own independent resources for execution, multiple tasks can be executed in parallel by increasing the number of threads.


Advantages of Threads:
- Responsiveness: speedy response to users.
- Resource sharing: hence allowing better utilization of resources.
- Economy: creating and managing threads becomes easier and cheaper.
- Scalability: one thread runs on one CPU. In multithreaded processes, threads can be distributed over a series of processors to scale.

Multithreading Models:
The user threads must be mapped to kernel threads by one of the following strategies.

a) Many-To-One Model:- In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
- Thread management is handled by the thread library in user space, which is efficient in nature.

b) One-To-One Model:- The one-to-one model creates a separate kernel thread to handle each and every user thread.
- Most implementations of this model place a limit on how many threads can be created.

c) Many-To-Many Model:- The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.
- Users can create any number of threads.
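On POSIX systems the user-level thread API is pthreads. The C sketch below is a minimal hedged example (it does not depend on which of the mapping models above the library implements): it creates a few threads and joins them.

/* Sketch: creating and joining POSIX threads.
   Assumed compile command: gcc threads.c -pthread */
#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);    /* each thread is a separate unit of CPU utilization */
    return NULL;
}

int main(void)
{
    pthread_t tid[3];

    for (long i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);        /* wait for every thread to finish */

    return 0;
}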

10) Explain the Context Switching Concept in detail. [2012, 2013]

Ans:- When the CPU is about to be switched between processes, it is necessary to save the status information of the currently running process soon after it is blocked for I/O or its allocated time-slice expires.
- Thus, once after saving the status of the previously running process in its corresponding PCB, the dispatcher then switches the CPU control by executing the LPS (Load Process State) instruction to load the status information saved in the PCB corresponding to the process selected (and dispatched) afresh for execution.
- This task of saving the process state (PCB) of the old process and reloading the previously saved process state (PCB) of the new process being scheduled for execution is termed as 'Context Switch' or 'Switching Context'.

Operating Systems
Chapter 3
Process Synchronization and Deadlocks

1) What is a Race Condition? Explain the two-process solutions. [2013, 2015]
Ans:- A race condition occurs whenever two threads access a shared variable at the same time.
- A race condition is a situation in concurrent programming where two concurrent threads or processes compete for a resource, and the resulting final state depends on who gets the resource first.

The two-process solutions are: a) Critical section problem b) Semaphores
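A race condition of the kind defined above can be reproduced with two threads incrementing one shared counter. The C sketch below is a hedged illustration (the loop count and thread API are assumptions): because counter++ is a non-atomic read-modify-write, the final value is often less than expected unless the commented mutex lines are enabled.

/* Sketch: two POSIX threads racing on a shared counter. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                          /* shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* pthread_mutex_lock(&lock);    uncomment both lines to remove the race */
        counter++;                                /* read-modify-write: not atomic */
        /* pthread_mutex_unlock(&lock); */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}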
2) Explain the Critical Section Problem? Explain the requirements to solve the Critical Section Problem. [2012, 2014]
Ans:- Assume the given system contains 'n' ready processes waiting to be assigned to the CPU. For instance:
S(Pi) = {P0, P1, P2, ..., Pn}
- The processes share data & are said to be co-operating sequential processes.
- Each process is structured as: a) Entry section b) Critical section c) Exit section d) Remainder section
- The segment of code called the Entry Section of process Pi must request the OS permission to enter its own critical section.
- The segment of code called the Critical Section performs the critical activities, such as accessing and/or modifying the shared global variables, writing and/or rewriting, appending, updating a file, a table of values etc.
- The segment of code called the Exit Section of process Pi follows the critical section and concludes formal termination of the critical section code (zone).
- The segment of code called the Remainder Section contains the remaining code of the process.
- The critical-section problem suggests that the critical section code of a process must be in exclusive use.
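The four segments described above are usually pictured as a loop skeleton. The fragment below is a generic sketch (the section bodies are placeholders, not actual code) of how each process Pi is structured:

do {
    entry section          /* request permission to enter                     */
        critical section   /* access or update the shared data exclusively    */
    exit section           /* announce that the critical section is finished  */
        remainder section  /* the rest of the process code                    */
} while (true);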

Requirements to be fulfilled for providing a solution to the critical-section problem:

i) Mutual Exclusion:- Two processes (say P1 and P2) are said to be mutually exclusive if execution of them (both P1 and P2) at the same time will not affect each other's address space or code section.
ii) Progress:- Assuming that the processes are not executing in both their critical and remainder sections, these processes can compete as to which one will enter its critical section next (process execution-progress requirement).

3) Explain Semaphores? [2013, 2014, 2015, 2016]
Ans:- A more generalized solution to the problems of synchronization and the critical section problem can be given with the help of a synchronization tool called Semaphores.
- A semaphore S is a synchronizing variable that can hold an integer value. Accordingly, it can be initialized to a specific integer.
- A semaphore as a control variable can be accessed via two standard atomic operations.
- Each of these two operations accepts a single argument S.

wait(S):- This operation is originally termed as P (to test) and can be defined with the following program segment:

wait(S)
{
    while (S <= 0)
        ;            /* no operation: busy wait */
    S = S - 1;       /* decrement */
}

signal(S):- This operation is originally termed as V (to signal, i.e. increment) and can be defined with the following program segment:

signal(S)
{
    S = S + 1;       /* increment */
}

- Semaphores don't permit the simultaneous modification of the semaphore value S by more than one process at a time.
- Therefore, two or more concurrently executing processes can be synchronized by sharing a common semaphore variable.
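In C, the wait/signal pair above maps onto sem_wait and sem_post of POSIX semaphores. The sketch below is a hedged example (thread bodies and loop counts are assumptions) of two threads sharing one semaphore initialized to 1 as a mutual-exclusion lock:

/* Sketch: mutual exclusion with a POSIX semaphore
   (sem_wait corresponds to wait(S), sem_post to signal(S)). */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t s;                    /* semaphore shared by both threads */
static int shared = 0;

static void *task(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);              /* wait(S): blocks while S == 0, then decrements */
        shared++;                  /* critical section */
        sem_post(&s);              /* signal(S): increments S */
    }
    return NULL;
}

int main(void)
{
    sem_init(&s, 0, 1);            /* initialize S to 1 (binary semaphore) */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&s);
    return 0;
}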

4) What is a Critical Region? Explain the Readers and Writers problem. [2011, 2012, 2013]
Ans:- Critical Regions:- Critical regions are one of the fundamental high-level synchronization constructs, sometimes referred to as a 'Conditional Critical Region'.
- The critical regions construct eliminates simple errors that occur due to incorrect use of semaphores as a solution to the Critical-Section Problem.
- For instance, when semaphores are used to provide a solution to the critical section problem, the following requirements are observed in general:
a) All concurrent processes share a semaphore variable mutex.
b) Each & every process must execute wait(mutex) well before entering its critical section, and signal(mutex) thereafter:

do {
    wait(mutex);
        critical section

    signal(mutex);
} while (true);

- Consider the following situation: observe that the order of execution of the wait and signal operations on the semaphore variable mutex has been interchanged in a process, so that the execution sequence looks like the one shown below:

    signal(mutex);
        critical section
    wait(mutex);

Readers and Writers Problem:

- The Readers and Writers problem highlights concurrent execution of both reader processes and writer processes sharing a common data file.
- Accordingly, the reading processes, called Readers in short, can access the shared data file only for reading.
- Whereas the writing processes, called Writers in short, can access the shared data file for possible updating, i.e., insert a new record, delete a record, modify an existing record, append a record to the end of the file etc.
- The necessary synchronization can be achieved by employing a mutex semaphore in the reader process code and a write semaphore (mutual exclusion semaphore for writers).
For instance:
First Readers-Writers Problem:- (Preference will be for Readers) Here, no reader process is made to wait unless a writer has already obtained permission to access the shared file; the reader can proceed to read and execute its operations.
Second Readers-Writers Problem:- (Preference will be for Writers) Here, no new reader process is allowed to access the shared file once a writer is waiting and ready to access the shared data file for writing.
5) Explain the Dining-Philosophers Problem: [2014, 2015]
Ans:- The Dining Philosophers problem highlights an example of a large class of concurrency-control problems, where it represents the need to allocate several resources among a number of concurrent processes.
- The Dining Philosophers problem suggests that at least 5 philosophers can sit across a circular dining table, where the circular dining table is surrounded by 5 chairs. Each of the chairs can be occupied by an individual philosopher.
[Figure: The Dining Philosophers Problem (Deadlock) - philosophers can either think or eat; each philosopher takes the left chopstick first, then the right chopstick, and eats; if all want to eat at the same time, a deadlock occurs; rice plate, plates, chopsticks and philosophers arranged around a circular table]

- Now the problem starts when philosophers become hungry and would like to eat rice.
- When philosophers are busy thinking about ethics, research etc., they never interact with each other. But when hungry, they would attempt to pick up the two chopsticks or spoons that are adjacent to their plates.
- In other words, a synchronization problem exists, and in turn it may lead to deadlock & starvation.
- To conclude, the Dining Philosophers problem could be solved by employing semaphores called 'chopsticks'. This solution provides the necessary synchronization.
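A hedged C sketch of that idea follows, using one POSIX semaphore per chopstick (philosopher numbering and the think/eat bodies are assumptions). Exactly as the notes warn, this naive version can deadlock if all five philosophers pick up their left chopstick at the same time:

/* Sketch: naive Dining Philosophers with one semaphore per chopstick.
   Can deadlock when every philosopher holds the left chopstick simultaneously. */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
static sem_t chopstick[N];                  /* one binary semaphore per chopstick */

static void *philosopher(void *arg)
{
    long i = (long)arg;
    for (;;) {
        /* think ... */
        sem_wait(&chopstick[i]);            /* pick up the left chopstick   */
        sem_wait(&chopstick[(i + 1) % N]);  /* pick up the right chopstick  */
        printf("philosopher %ld eats\n", i);
        sem_post(&chopstick[(i + 1) % N]);  /* put down the right chopstick */
        sem_post(&chopstick[i]);            /* put down the left chopstick  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);           /* never returns: philosophers loop forever */
    return 0;
}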

6) Explain Monitors. [2011, 2012]

Ans:- It is another fundamental high-level synchronization construct. Using monitors, the synchronization mechanisms can be implemented for sharing abstract data types (variables).

General Syntax of a Monitor:

monitor monitor-name
{
    shared variable declarations
    procedure P1 (...) { ... }
    procedure P2 (...) { ... }
    ...
    procedure Pn (...) { ... }
    initialization code
}

- The body of the monitor construct consists of shared variable declarations and a set of one or more user-defined functions or procedures that represent various operations to be performed on these abstract data types (i.e., on shared variables).
- In other words, the monitor is characterized by a set of user-defined or programmer-defined operations that are represented via functions.

7) What is Deadlock? Explain the conditions necessary for its occurrence. [2015, 2016]
Ans:- Deadlock is a situation in which two or more processes are waiting on resources that are held by each other; this stage is called Deadlock.
- A deadlock situation can occur in a community of co-operating processes or among competing processes that need exclusive access to one or more resources of a computer system.
- Deadlock may be a side effect of synchronization techniques.
- In essence, the need for inter-process communication among several co-operating processes gives rise to the need for synchronization.
- As we know, a deadlock situation presents a problem or scenario wherein two or more processes (P1, P2, P3, etc.) get into a hanging state (hold or locked or blocked state) such that each process is holding a resource that its adjacent or neighbouring process is requesting.

Necessary conditions for Deadlock to occur:

a) Mutual exclusion:- This states that there is a proper understanding between two processes.
- At least one resource is held in a non-sharable mode, that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
b) Hold & wait:- One process is holding one resource while another process must wait.
- There must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.
c) No preemption:- A resource doesn't go back for scheduling; the holder keeps it until its work is complete.
- Resources cannot be preempted, that is, a resource can only be released voluntarily by the process holding it, after the process has completed its task.
d) Circular wait:- Two or multiple processes waiting for subsequent processes' resources; this situation is called 'Circular Wait'.
- There must exist a set {p0, p1, ..., pn} of waiting processes such that p0 is waiting for a resource which is held by p1, p1 is waiting for a resource which is held by p2, ..., pn-1 is waiting for a resource which is held by pn, and pn is waiting for a resource which is held by p0.

Deadlock Detection:
- The deadlock avoidance approach avoids the 'unsafe states', although the system might recover from them.

- When Deadlock Detection and Recovery techniques are employed, the system does not attempt to prevent deadlocks from occurring. Rather, it allows deadlocks to occur and tries to detect them.
- Therefore, the detection and recovery strategy must provide the system with the following two algorithms:
An algorithm to monitor the state of the system, to verify and confirm whether a deadlock has occurred [Detection]
An algorithm to recover from the deadlock [Recovery]

- Deadlock Detection Algorithm for a single instance of each resource type:
This algorithm makes use of a variant of the resource-allocation graph known as a wait-for graph. The deadlocked state occurs in the system if and only if the wait-for graph contains a cycle.

- Deadlock Detection Algorithm for multiple instances of each resource type:
- This algorithm makes use of time-varying data structures that are similar to those involved in the banker's algorithm.
- The algorithm checks every possible sequence of allocations for processes that are not yet completed.
- The algorithm is matrix based (the current allocation matrix C and request matrix R) and depends upon comparing these vectors with the available resource vector A.
- One alternative approach is to invoke the detection algorithm every time a resource request is to be processed. This method is expensive in terms of considerable overhead on CPU time.
- An alternative strategy is to check for deadlock detection every k minutes (say once in 60), or perhaps only when the CPU utilization drops below some threshold (say below 50 percent).

Recovery from Deadlock:

- Once the deadlock detection algorithm determines that a deadlock has been detected in a system, the system will have to be recovered from the deadlock.
- For this, the crude approach is to abort or kill one or more processes to break the deadlock or circular wait.
- In a second approach, one or more processes will have to be preempted, thus releasing their resources, which would unblock the other deadlocked processes.
- We shall discuss and elaborate on the following 3 recovery approaches:
1) Recovery through Killing Processes [Process Termination]
2) Recovery through Preemption [Resource Preemption]
3) Recovery through Checkpoint/Rollback mechanism [Checkpointing]

1) Recovery through Killing Processes [Process Termination]:- A simple and straightforward approach is to abort or kill all deadlocked processes by rebooting the machine.
To summarize, the following methods are used for recovery:
- Abort or kill all deadlocked processes.
- Abort or kill one process at a time.
2) Recovery through Preemption [Resource Preemption]:- Under some circumstances it is possible to pull back an allocated resource from its currently executing process.
- But this requires the ability to take away the allocated resource forcibly, well before the termination of the currently holding process.

- Deadlock recovery with resource preemption would call for 3 issues:
a) Selecting a victim b) Rollback c) Starvation

Deadlock Prevention:
a) Elimination of Mutual Exclusion
b) Elimination of Hold and Wait
c) Elimination of No-preemption
d) Elimination of Circular Wait

8) Explain Banker's Algorithm. [2011, 2012, 2014]

Ans:- The banker's algorithm is the best known of the deadlock avoidance strategies.
- This algorithm was first introduced by Dijkstra in 1965.
- The classic model of a state used in the deadlock avoidance strategy comes from Dijkstra's analogy of resource allocation (1968).
- It is a scheduling algorithm used for resource allocation and deadlock avoidance.
- The banker's algorithm is applicable to resource allocation systems with multiple instances of each resource type.
- The simple assumption is that the bank must satisfy all of its customers by lending money to each and every customer on the basis of a line of credit.
- We know that a bank can have a designated amount of cash at any time, subject to its transaction load at that particular town or city.
- A line of credit is an agreement on the maximum claim on resources by the processes. In a banking environment, a line of credit is an agreement by the bank.
Operating Systems
Chapter-4
Memory Management
1) What is Memory Management?
Ans:- Main memory is a repository to hold the run-time program instructions and data operands to be operated upon. The function of Memory Management is to keep track of the parts of main memory that are in use and the allocated memory blocks or segments.
2) Explain Page Replacement Algorithms:
Ans:- As we know, Demand Paging offers extremely large virtual memory to programmers without constraining the actual size of physical main memory.
- The page replacement policy or algorithm has to determine which page from physical main memory must be removed (or swapped out to hard disk).
- Therefore, page replacement is fundamental to Demand Paging. In other words, successful implementation of the Demand Paging scheme would call for developing the following algorithms:
1) Frame-Allocation Algorithms (the Fetch and Placement policy)
2) Page-Replacement Algorithms (the Replacement policy)
- With multiprogramming, multiple processes must be allocated main memory simultaneously.
- The Frame Allocation algorithm has to decide how many frames to allocate at a time to each process participating in execution.
- There are a good number of page-replacement algorithms. Sometimes, it is quite easy to discuss and/or narrate the best possible page replacement algorithm.
The various page replacement algorithms are listed below:
a) First-In First-Out: FIFO page replacement algorithm.
- The FIFO page replacement algorithm replaces the page that has been resident in main memory for the longest time.
- For this, the OS maintains a separate queue of pages that are all resident in physical main memory frames. The oldest page is at the head of the queue (first-in).
- The FIFO algorithm is simple to implement, but its functional behaviour cannot be made applicable to most programs; it is independent of locality.
- So the FIFO kind of page replacement will increase page faults and slow down process execution.
b) Optimal page replacement algorithm:
- The optimal page replacement algorithm suggests that, on a page fault, replace the page which will not be used (referenced) for the longest period of time.
- A page getting frequent references need not be replaced; somehow, the OS should identify a page reference which will not be referenced again for a long time, and that optimal page is chosen to be swapped out from main memory.
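A hedged C sketch of FIFO page replacement follows (the reference string and frame count are made-up sample values). The oldest resident page, tracked by a simple rotating queue index, is the one evicted on each page fault:

/* Sketch: FIFO page replacement on an assumed page reference string. */
#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    int refs[]   = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};  /* sample reference string */
    int frames[] = {-1, -1, -1};                     /* 3 empty physical frames */
    int n = sizeof(refs) / sizeof(refs[0]);
    int nframes = sizeof(frames) / sizeof(frames[0]);
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {                                  /* page fault             */
            frames[oldest] = refs[i];                /* evict the oldest page  */
            oldest = (oldest + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d out of %d references\n", faults, n);
    return 0;
}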
