OSY CH-3 Updated
3.1 PROCESSES
A process is just an executing program, including the current values of the program counter, registers
and variables.
In reality, of course, the real CPU switches back and forth from process to process; conceptually,
though, the system can be viewed as a collection of processes running in parallel.
In Fig. 3.2 (b) we can see how this has been abstracted into four processes, each with its own flow of
control (i.e. its own program counter), and each one running independently of the others.
In Fig. 3.2 (c), we can see that, viewed over a long enough time interval, all of the processes
have made progress, but at any given instant only one process is actually running.
Fig: 3.2
With the CPU switching back and forth among the processes, the rate at which a process performs its
computation will not be uniform, and probably not even reproducible if the same processes are run
again.
Thus, a process is an activity of some kind. It has a program, input, output and a state. A single
processor may be shared among several processes, with some scheduling algorithm being used to
determine when to stop work on one process and service a different one.
As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process.
1] NEW
The process that has just been created but has not yet been admitted to the pool of executing
processes by the OS is said to be in the New state.
It is in the new state because the system has not yet permitted it to enter the ready state, due to the
limited memory available for the ready queue. If some memory becomes available, the process moves
from the new state to the ready state.
2] READY STATE
A process that is ready to execute and is waiting for the CPU is said to be in the Ready state.
It is not in the running state because some other process is already running. It is waiting for its turn
to go to the running state.
3] RUNNING STATE
The process which is currently running and has control of the CPU is known as the process in the
running state.
In a single-processor system, there is only one process in the running state at a time. In a
multiprocessor system, there can be multiple processes in the running state.
4] BLOCKED STATE
The process that is currently waiting for an external event, such as an I/O operation, is said to be
in the Blocked state.
-By Yash Sawant (TYCO)
Process Management
After the completion of the I/O operation, the process moves from the blocked state into the ready
state; from the ready state, when its turn comes, it again goes to the running state.
5] TERMINATED STATE
A process whose execution is completed goes from the running state to the Terminated state.
In the terminated (halted) state, the memory occupied by the process is released.
PROCESS CONTROL BLOCK (PCB)
A PCB is a record or a data structure that is maintained for each and every process. Every process has
one PCB that is associated with it. A PCB is created when a process is created and it is removed from
memory when process is terminated.
A PCB contains several types of information depending upon the process to which PCB belongs. The
information stored in PCB of any process may vary from process to process.
In general, a PCB may contain information regarding:
1. Process Number: Each process is identified by its process number, called process identification
number (PID). Every process has a unique process-id through which it is identified. The Process-id
is provided by the OS. The process-ids of two processes can never be the same, because the PID is
always unique.
2. Priority: Each process is assigned a certain level of priority that corresponds to the relative
importance of the event that it services. Process priority is the preference of the one process over
other process for execution. Priority may be given by the user/system manager or it may be given
internally by OS. This field stores the priority of a particular process.
3. Process State: This information is about the current state of the process i.e. whether process is in
new, ready, running, waiting or terminated state.
4. Program Counter: This contains the address of the next instruction to be executed for this process.
5. CPU Registers: These include index registers, stack pointers and general purpose registers etc.
CPU registers vary in number and type, depending upon the computer architecture. When an
interrupt occurs, information about the current status of the old process is saved, along
with the program counter. This information is necessary to allow the process to be continued
correctly after the interrupt has been serviced.
6. CPU Scheduling Information: This information includes a process priority, pointers to scheduling
queues and any other scheduling parameters.
7. Memory Management Information: This information may include such information as the value
of base and limit registers, the page table or the segment table depending upon the memory system
used by operating system.
8. Accounting: This includes the actual CPU time used in executing a process, in order to charge
individual users for processor time.
9. I/O Status: It includes outstanding I/O request, allocated devices information, pending operation and
so on.
10. File Management: It includes information about all open files, access rights etc.
SCHEDULING QUEUE
Scheduling queues refer to queues of processes or devices.
The processes which are ready and waiting for execution are kept on a list called Ready Queue.
The process enters the system from the outside world and is put in the ready queue. It waits in the
ready queue until it is selected for the CPU. After running on CPU it waits for I/O operation by
moving to an I/O queue. Eventually it is served by the I/O device and returns to the ready queue. A
process continues this CPU-I/O cycle until it finishes, and then it exits from the system.
SCHEDULERS
Schedulers are special system software which handle process scheduling in various ways. A process
migrates between the various scheduling queues throughout its lifetime. For scheduling purposes, the
operating system uses three types of schedulers: long-term, medium-term and short-term.
Working of Schedulers
Sr. No. | Long-term scheduler | Medium-term scheduler | Short-term scheduler
1 | It is a job scheduler. | It is a process-swapping scheduler. | It is a CPU scheduler.
2 | It selects processes from the job pool and loads them into memory for execution. | It selects a process from the swapped-out processes. | It selects processes from the ready queue which are ready to execute and allocates the CPU to one of them.
3 | Accesses the job pool and the ready queue. | Accesses the swapped-out process queue. | Accesses the ready queue and the CPU.
4 | It executes much less frequently, when the ready queue has space to accommodate a new process. | It executes whenever the swap queue contains a swapped-out process. | It frequently selects a new process for the CPU, at least once every 100 milliseconds.
5 | Its speed is less than that of the short-term scheduler. | Its speed is in between that of the short-term and long-term schedulers. | Its speed is fast.
6 | It is almost absent or minimal in a time-sharing system. | It is a part of a time-sharing system. | It is also minimal in a time-sharing system.
7 | It controls the degree of multiprogramming. | It reduces the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming.
CONTEXT SWITCH
Switching the CPU to another process requires saving the state of the old process and loading the
saved state for the new process.
This task is known as Context Switch.
Using this technique a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the context
switcher saves the contents of all processor registers for the process being removed from the CPU in
its process descriptor.
The context of a process is represented in the process control block of a process.
Context switch time is pure overhead.
Context switching can significantly affect performance as modern computers have a lot of general
and status registers to be saved.
Context switching times are highly dependent on hardware support.
Example
We can run two processes at the same time. When process P0 waits for an I/O, process P1
executes, and when process P1 waits for I/O, process P0 executes.
Some time is required for turning the CPU's attention from process P0 to process P1; this is called
context switching.
After the context switch the old program will remain in the main memory.
The status of the CPU registers and the pointers to the memory allocated to this process must be stored.
A specific memory area, maintained by the OS for each process, is used for this purpose.
This area is called the register save area, and it is a part of the PCB.
OPERATIONS ON PROCESS
1. Process Creation:
Create Process:
The operating system creates a new process with the specified or default attributes and identifier.
The syntax for creating a new process is:
create(process_id, attributes)
A process which is created may itself create several new sub-processes when it runs, by using the
create-process system call.
The original or main process is called the Parent Process and the new process is called the Child Process.
When the operating system issues a CREATE system call, it obtains a new process control block from
the pool of free memory, fills the fields with provided and default parameters, and inserts the PCB into
the ready list, thus making the specified process eligible to run.
2. Process termination:
A process terminates when it finishes executing its final statement and asks the operating system to
delete it by using the exit () system call. At that point, the process may return a status value (typically
an integer) to its parent process (via the wait () system call). All the resources of the process including
physical and virtual memory, open files, and I/O buffers are deallocated by the operating system.
A process can cause the termination of another process via an appropriate system call. Usually, such
a system call can be invoked only by the parent of the process that is to be terminated.
A parent may terminate the execution of one of its children for a variety of reasons, such as these:
1. The child has exceeded its usage of some of the resources that it has been allocated. (To determine
whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
THREADS
Q] Define thread. State any three benefits of thread. 4 M [W 16]
THREAD
A thread, sometimes called a lightweight process, is a basic unit of CPU utilization. A traditional (or
heavyweight) process has a single thread of control. If a process has multiple threads of control, it can do more
than one task at a time; such a process is known as a multithreaded process.
Example: A word processor may have a thread for displaying graphics, another thread for reading keystrokes
from user and a third thread for performing spelling and grammar checking in background.
MULTITHREADING
Q] What is multithreading? Explain with suitable diagram. [W 14]
Multithreading refers to the ability of an OS to support multiple threads of execution within a single
process. Threads share the memory and the resources of the process to which they belong. The benefit
of code sharing is that it allows an application to have several different threads of activity all within
the same address space.
Multithreaded Process
System provides support to both user and kernel threads, resulting in different types of multithreading
models
1) Many to One model
2) One to One model
3) Many to Many model
THREAD BENEFITS
1. Responsiveness: Multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user. For example: A multithreaded web browser could still allow user
interaction in one thread while an image is being loaded in another thread.
2. Resource sharing: By default, threads share the memory and the resources of the process to which
they belong. The benefit of code sharing is that it allows an application to have several different
threads of activity all within the same address space. For example: A multithreaded word processor
allows all threads to have access to the same document being edited.
3. Economy: Because threads share the resources of the process to which they belong, it is more
economical to create and context-switch threads than to create and context-switch processes (which is
much more time consuming). For example, in Sun OS Solaris 2, creating a process is about 30 times
slower than creating a thread (and process context switching is about five times slower than thread switching).
INTERPROCESS COMMUNICATION
Q] Explain interprocess communication. [W 15]
1. Shared Memory
In this model, multiple processes exchange data or information through a shared region.
All the processes using the shared memory segment should attach it to their address space.
All the processes can then exchange information by reading and/or writing data in the shared memory
segment. The form of the data and its location are determined by the processes that want to
communicate with each other.
2. Message Passing
In this model, communication takes place by exchanging messages between cooperating processes.
It allows processes to communicate and synchronize their action without sharing the same address
space. It is particularly useful in a distributed environment when communication process may reside
on a different computer connected by a network. Communication requires sending and receiving
messages through the kernel. The processes that want to communicate with each other must have a
communication link between them; between each pair of processes there exists exactly one
communication link.
(i) Naming:
Processes which wish to communicate with each other need to know each other with the
name for identification. There are two types of communications :
1. Direct Communication
2. Indirect Communication
In direct communication, each process that wants to communicate must explicitly name the
sender or receiver of the communication.
In this type, the send() and receive() primitives are defined as follows:
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
(ii) Synchronization:
Process synchronization means sharing system resources by processes in such a way that
concurrent access to shared data is handled consistently.
Synchronization minimizes the chance of inconsistent data.
Communication between the processes takes place through the system calls.
OS has to maintain proper synchronization between the sending and receiving processes.
Message passing may be blocking or non-blocking, also known as synchronous and
asynchronous.
Blocking send: The sending process is blocked until the message is received by the
receiving process or by the mailbox.
Blocking receive: The receiver blocks until a message is available.
Non-blocking send: The sending process sends the message and resumes operation.
Non-blocking receive: The receiver retrieves either a valid message or a null.
(iii) Buffering:
The communication could be direct or indirect.
The message exchanged by the communicating processes resides or stores in a temporary
queue.
The OS will buffer the messages into buffers that are created in the system address space.
A sender’s message will be copied from the sender’s address space to the next free slot in the
system buffers.
From this system buffer, the messages will be delivered to the receiver process in FCFS order
when receiver process executes receive calls.