
Process Management

3.1 PROCESSES

3.1.1 PROCESS CONCEPT


A process is a program in execution. It defines the fundamental unit of computation for the computer, and is also called a job, task, or unit of work. The execution of a process must progress in a sequential fashion: at any time, at most one instruction is executed on behalf of the process. Logically, each process has its own virtual CPU; in reality, the real CPU switches from one process to another. A process is an activity, and it has a program, input, output, and a state.

Each process has:

1. A text section that contains the program code.
2. A data section that contains global and static variables.
3. A heap, used for dynamic memory allocation, managed via calls to new, delete, malloc, free, etc.
4. A stack, used for temporary data such as subroutine parameters, return addresses, and local variables. Space on the stack is reserved for local variables when they are declared, and the space is freed when the variables go out of scope.
5. A program counter that holds the address of the next instruction to execute, together with the contents of the processor's registers.

Fig 3.1: Process in memory

-By Yash Sawant (TYCO)


3.1.2 PROCESS MODEL
In the process model, the operating system is organized into a number of sequential processes, or just processes for short.

A process is just an executing program, including the current values of the program counter, registers, and variables.

Conceptually, each process has its own virtual CPU.

In reality, of course, the real CPU switches back and forth from process to process; thus the system behaves much like a collection of processes running in parallel.

This rapid switching back and forth is called multiprogramming.

In Fig. 3.2 (a), a computer is multiprogramming four programs in memory.

In Fig. 3.2 (b) we can see how this has been abstracted into four processes, each with its own flow of control (i.e. its own program counter), and each running independently of the others.

Finally, in Fig. 3.2 (c) we can see that, viewed over a long enough time interval, all the processes have made progress, but at any given instant only one process is actually running.

Fig: 3.2

With the CPU switching back and forth among the processes, the rate at which a process performs its
computation will not be uniform, and probably not even reproducible if the same processes are run
again.

Thus, a process is an activity of some kind: it has a program, input, output, and a state. A single processor may be shared among several processes, with some scheduling algorithm being used to determine when to stop work on one process and service a different one.

3.1.3 PROCESS STATE [S16, W-15, W-14]

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process.

1] NEW

A process that has just been created but has not yet been admitted to the pool of executable processes by the OS is said to be in the new state.

The system has not yet permitted it to enter the ready state because of the limited memory available in the ready queue. When some memory becomes available, the process moves from the new state to the ready state.

2] READY STATE

A process that is ready to execute and is waiting for the CPU is said to be in the ready state.

It is not in the running state because some other process is already running; it is waiting for its turn to go to the running state.

3] RUNNING STATE

The process that is currently executing and has control of the CPU is said to be in the running state.

On a single-processor system, only one process can be in the running state at any instant; on a multiprocessor system, several processes may be running at the same time.

4] WAITING / BLOCKED STATE:

A process that is waiting for an external event, such as the completion of an I/O operation, is said to be in the blocked state.
After the I/O operation completes, the process moves from the blocked state to the ready state; from the ready state, when its turn comes, it goes back to the running state.

5] TERMINATED / HALTED STATE:

A process whose execution is complete moves from the running state to the terminated state.
In the halted state, the memory occupied by the process is released.

3.1.4 PROCESS CONTROL BLOCK


Q] With neat diagram describe use of Process Control Block (PCB) [S16, W-15, W-14, S-17]

A PCB is a record or data structure that is maintained for each process. Every process has exactly one PCB associated with it. A PCB is created when a process is created, and it is removed from memory when the process terminates.
A PCB contains several types of information, which may vary from process to process.
In general, a PCB contains information regarding:
1. Process Number: Each process is identified by its process identification number (PID). Every process has a unique process id, assigned by the OS; no two processes can have the same PID.
2. Priority: Each process is assigned a certain level of priority that corresponds to the relative
importance of the event that it services. Process priority is the preference of the one process over
other process for execution. Priority may be given by the user/system manager or it may be given
internally by OS. This field stores the priority of a particular process.
3. Process State: This information is about the current state of the process i.e. whether process is in
new, ready, running, waiting or terminated state.
4. Program Counter: This contains the address of the next instruction to be executed for this process.
5. CPU Registers: These include index registers, stack pointers, general purpose registers, etc. CPU registers vary in number and type, depending on the computer architecture. When an interrupt occurs, the current status of the process, including these registers and the program counter, is saved so that the process can be continued correctly afterwards.
6. CPU Scheduling Information: This information includes a process priority, pointers to scheduling
queues and any other scheduling parameters.
7. Memory Management Information: This information may include such information as the value
of base and limit registers, the page table or the segment table depending upon the memory system
used by operating system.
8. Accounting: This includes the actual CPU time used in executing the process, in order to charge individual users for processor time.
9. I/O Status: It includes outstanding I/O request, allocated devices information, pending operation and
so on.
10. File Management: It includes information about all open files, access rights etc.


Process Control Block

SCHEDULING QUEUE

Q) Describe the term: [W 14, S 16, S 17]


1) Scheduling queues
2) Scheduler
3) Context switch

Scheduling queues are queues of processes or devices.
Processes that are ready and waiting for execution are kept on a list called the ready queue.
A process enters the system from the outside world and is put in the ready queue. It waits in the ready queue until it is selected for the CPU. After running on the CPU it may wait for an I/O operation by moving to an I/O queue. Eventually it is served by the I/O device and returns to the ready queue. A process repeats this CPU–I/O cycle until it finishes, and then it exits from the system.

The above figure shows the queuing diagram of process scheduling.


Each queue is represented by a rectangular box.
The circles represent the resources that serve the queues.
The arrows indicate the flow of processes in the system.

SCHEDULERS
Schedulers are special system software that handle process scheduling in various ways. A process
migrates between the various scheduling queues throughout its lifetime. For scheduling purposes, the
operating system must select processes from these queues in some fashion. The selection is carried
out by the appropriate scheduler. A scheduler is a system program that schedules processes from the
scheduling queues. Its main task is to select the jobs to be submitted to the system and to decide
which process to run.


Schedulers are of three types:


1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler

Working of Schedulers
1. Long Term Scheduler:

It is also called job scheduler.


The long-term scheduler determines which jobs are admitted to the system for processing.
In a batch system, more jobs are submitted than can be executed immediately.
These jobs are spooled to a mass storage device, where they are kept for later execution.
The long-term scheduler selects jobs from this job pool and loads them into memory for execution.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and
processor-bound.
The long-term scheduler is used when a process changes state from new to ready.

2. Short Term Scheduler:

It is also called CPU scheduler.


The short-term scheduler selects processes from memory that are ready to execute and allocates the
CPU to one of them.
The short-term scheduler, also known as the dispatcher, makes scheduling decisions much more
frequently than the long-term or medium-term schedulers.
This scheduler can be preemptive, meaning it is capable of forcibly removing a process from the
CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as
"voluntary" or "co-operative"), in which case the scheduler is unable to force processes off the CPU.
In most cases the short-term scheduler is written in assembly because it is a critical part of the
operating system.

3. Medium Term Scheduler:

Medium-term scheduling is a part of swapping.


The medium-term scheduler is in charge of handling swapped-out processes.
It temporarily removes processes from main memory and places them on secondary storage (such as
a disk drive), or vice versa.
This is commonly referred to as "swapping out" or "swapping in".
The medium-term scheduler may decide to swap out a process that has not been active for some time,
a process that has a low priority, or a process that is page faulting frequently.

Q] Differentiate between short term, medium term and long term scheduling. [W 16]

Sr. No. | Long-term scheduler | Medium-term scheduler | Short-term scheduler
1 | It is a job scheduler. | It is a process-swapping scheduler. | It is a CPU scheduler.
2 | It selects processes from the job pool and loads them into memory for execution. | It selects a process from the swapped-out processes. | It selects processes from the ready queue that are ready to execute and allocates the CPU to one of them.
3 | It accesses the job pool and the ready queue. | It accesses the swapped-out process queue. | It accesses the ready queue and the CPU.
4 | It executes much less frequently, when the ready queue has space to accommodate a new process. | It executes whenever the swapped queue contains a swapped-out process. | It frequently selects a new process for the CPU, at least once every 100 milliseconds.
5 | Its speed is less than that of the short-term scheduler. | Its speed is in between that of the short-term and long-term schedulers. | Its speed is fast.
6 | It is almost absent or minimal in time-sharing systems. | It is a part of time-sharing systems. | It is also minimal in time-sharing systems.
7 | It controls the degree of multiprogramming. | It reduces the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming.

CONTEXT SWITCH
Switching the CPU to another process requires saving the state of the old process and loading the
saved state for the new process.
This task is known as Context Switch.
Using this technique a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to execute another, the context
switcher saves the content of all processor registers for the process being removed from the CPU, in
its process descriptor.
The context of a process is represented in the process control block of a process.
Context switch time is pure overhead.
Context switching can significantly affect performance as modern computers have a lot of general
and status registers to be saved.
Context switching times are highly dependent on hardware support.

Example
We can run two processes at the same time: when process P0 waits for I/O, process P1 executes, and
when process P1 waits for I/O, process P0 executes.
The time required to shift the CPU's attention from process P0 to process P1 is called the context-switch
time.
After the context switch, the old program remains in main memory.
The status of the CPU registers and the pointers to the memory allocated to the process must be stored.
For this purpose the OS maintains a specific memory area for each process, called the register save
area, which is a part of the PCB.


OPERATIONS ON PROCESS

1. Process Creation:
The operating system creates a new process with the specified or default attributes and identifier.
The syntax for creating a new process is:
create(process_id, attributes)
A process may itself create several new subprocesses while it runs, using the create-process system call.
The original or main process is called the parent process and the new process is called the child process.
When the operating system issues a CREATE system call, it obtains a new process control block from the
pool of free memory, fills the fields with provided and default parameters, and inserts the PCB into the
ready list, making the specified process eligible to run.

2. Process termination:
A process terminates when it finishes executing its final statement and asks the operating system to
delete it using the exit() system call. At that point, the process may return a status value (typically
an integer) to its parent process (via the wait() system call). All the resources of the process, including
physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.
A process can cause the termination of another process via an appropriate system call. Usually, such
a system call can be invoked only by the parent of the process that is to be terminated.
A parent may terminate the execution of one of its children for a variety of reasons, such as these:

1. The child has exceeded its usage of some of the resources that it has been allocated. (To determine
whether this has occurred, the parent must have a mechanism to inspect the state of its children.)

2. The task assigned to the child is no longer required.

3. The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.

THREADS
Q] Define thread. State any three benefits of thread. 4 M [W 16]

THREAD
A thread, sometimes called a lightweight process, is a basic unit of CPU utilization. A traditional (or
heavyweight) process has a single thread of control. If a process has multiple threads of control, it can do more
than one task at a time; such a process is known as a multithreaded process.

Example: A word processor may have one thread for displaying graphics, another thread for reading keystrokes
from the user, and a third thread for performing spelling and grammar checking in the background.

MULTITHREADING
Q] What is multithreading? Explain with suitable diagram. [W 14]

Multithreading refers to the ability of an OS to support multiple threads of execution within a single
process. Threads share the memory and the resources of the process to which they belong. The benefit
of this sharing is that it allows an application to have several different threads of activity all within
the same address space.


Multithreaded Process

Systems provide support for both user threads and kernel threads, resulting in different multithreading
models:
1) Many to One model
2) One to One model
3) Many to Many model

THREAD BENEFITS
1. Responsiveness: Multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user. For example: A multithreaded web browser could still allow user
interaction in one thread while an image is being loaded in another thread.

2. Resource sharing: By default, threads share the memory and the resources of the process to which
they belong. The benefit of this sharing is that it allows an application to have several different
threads of activity all within the same address space. For example, a multithreaded word processor
allows all threads to access the same document being edited.

3. Economy: Because threads share the resources of the process to which they belong, it is more
economical to create and context-switch threads than to create and context-switch processes (which is
much more time consuming). For example, in Sun OS Solaris 2, creating a process is about 30 times
slower than creating a thread (and context switching between processes is about five times slower than
switching between threads).

4. Utilization of multiprocessor architectures: The benefits of multithreading are greatly
increased in a multiprocessor architecture, where each thread may run in parallel on a different
processor. Multithreading on a multi-CPU machine increases concurrency.

Q] List & explain various types of multi-threading models. [W 16]

Multithreading models:-

1. Many-to-One
2. One-to-one
3. Many-to-Many

1. Many-to-One: This model maps many user-level threads to one kernel-level thread. Thread
management is done by a thread library in user space. The entire process blocks if a thread makes
a blocking system call. Only one thread can access the kernel at a time, so multiple threads are
unable to run in parallel on a multiprocessor.

2. One-to-One: This model maps each user-level thread to a kernel-level thread. It provides more
concurrency than the many-to-one model: even if one thread makes a blocking call, other threads
can continue to run on their own kernel threads. The drawback of this model is that whenever a
user thread is created, a corresponding kernel thread must also be created, which reduces the
performance of an application.

3. Many-to-Many: This model multiplexes many user-level threads onto a smaller or equal number
of kernel threads. The number of kernel threads may be specific to either a particular application
or a particular machine. This model allows the developer to create as many threads as desired,
although each kernel thread can execute only one user thread at a time.

INTERPROCESS COMMUNICATION
Q] Explain interprocess communication. [W 15]

Inter-process communication: Cooperating processes require an inter-process communication
(IPC) mechanism that allows them to exchange data and information.

There are two models of IPC:


1. Shared memory

In this model, multiple processes exchange data or information through a shared region.
All processes using the shared memory segment must attach it to their address space, after
which they can exchange information by reading and/or writing data in the shared memory
segment. The form of the data and its location are determined by the processes that want to
communicate with each other, not by the operating system. The processes are also responsible
for ensuring that they are not writing to the same location simultaneously. After the shared
memory segment is established, all accesses to it are treated as routine memory accesses,
without assistance from the kernel.

2. Message Passing

In this model, communication takes place by exchanging messages between cooperating processes.
It allows processes to communicate and synchronize their actions without sharing the same address
space. It is particularly useful in a distributed environment, where the communicating processes may
reside on different computers connected by a network. Communication requires sending and receiving
messages through the kernel. Processes that want to communicate with each other must have a
communication link between them; between each pair of processes there exists exactly one
communication link.

(i) Naming:
Processes that wish to communicate must be able to identify each other by name. There are
two types of communication:

1. Direct Communication
2. Indirect Communication

In direct communication, each process that wants to communicate must explicitly name the
sender or the receiver of the communication.
In this type the send( ) and receive( ) primitives are defined as follows:
send(P, message) – send a message to process P.
receive(Q, message) – receive a message from process Q.

In indirect communication, messages are sent to and received from mailboxes, or ports.
A mailbox can be viewed as an object in which messages can be placed and from which they can be removed.
Each mailbox is associated with a unique number.

In this type the send( ) and receive( ) primitives are defined as follows:
send(A, message) – send a message to mailbox A.
receive(A, message) – receive a message from mailbox A.

(ii) Synchronization:
Process synchronization means sharing system resources among processes in such a way that
concurrent access to shared data is handled safely, minimizing the chance of inconsistent data.
Communication between processes takes place through system calls, and the OS has to maintain
proper synchronization between the sending and receiving processes.
Message passing may be blocking or non-blocking, also known as synchronous and
asynchronous.

 Blocking send: The sending process is blocked until the message is received by the
receiving process or by the mailbox.
 Blocking receive: The receiver blocks until a message is available.
 Non-blocking send: The sending process sends the message and resumes operation.
 Non-blocking receive: The receiver retrieves either a valid message or a null.

(iii) Buffering:
The communication may be direct or indirect.
Messages exchanged by the communicating processes reside in a temporary queue.
The OS buffers the messages in buffers created in the system address space.
A sender's message is copied from the sender's address space to the next free slot in the
system buffers.
From this system buffer, messages are delivered to the receiver process in FCFS order
when the receiver executes receive calls.

CRITICAL SECTION PROBLEM
Q] Describe the critical-section problem. [S 17]
Each process contains two sections. One is the critical section, where the process may need to access
common variables or objects; the other is the remainder section, containing instructions for processing
sharable or local objects of the process. Two processes cannot execute their critical sections at the same
time. Each process must request permission to enter its critical section; the section of code implementing
this request is the entry section. If a process gets permission in the entry section, it enters the critical
section and works with the common data, while all other processes wanting the same data wait. The
critical section is followed by an exit section: once the process completes its task, it releases the
common data there. Then the remaining code, placed in the remainder section, is executed by
the process.
