OS Unit-2

UNIT-II:

Process and CPU Scheduling - Process concepts and scheduling, Operations on processes, Cooperating Processes, Threads, and Interprocess Communication, Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling. System call interface for process management - fork, exit, wait, waitpid, exec.

PROCESS CONCEPTS
PROCESS:

A process is basically a program in execution. The execution of a process must progress in a sequential
fashion. A process is more than the program code, which is sometimes known as the text section.

A process generally also includes:

- Process Stack, which contains temporary data (such as function parameters, return addresses, and local variables)
- Data Section, which contains global variables
- Heap, which is memory that is dynamically allocated during process run time

A process is an active entity, with a program counter specifying the next instruction to execute and a set
of associated resources. A program becomes a process when an executable file is loaded into memory.

Two techniques for starting process execution:

- Double-clicking an icon representing the executable file
- Entering the name of the executable file on the command line (as in prog.exe or a.out)
PROCESS STATE:

As a process executes, it changes State. The state of a process is defined in part by the current activity of
that process. Each process may be in one of the following states:

• New - The process is being created.
• Running - Instructions are being executed.
• Waiting - The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready - The process is waiting to be assigned to a processor.
• Terminated - The process has finished execution.

PROCESS CONTROL BLOCK

Each process is represented in the operating system by a Process control block (PCB) - also called a
task control block. It contains many pieces of information associated with a specific process, including
these:

The PCB keeps track of a process by storing information such as its state, I/O status, and CPU-scheduling parameters.

• Process state - The state may be new, ready, running, waiting, halted, and so on.

• Program counter - The counter indicates the address of the next instruction to be executed for this
process.

• CPU registers - The registers vary in number and type, depending on the computer architecture. They
include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-
code information.
• CPU-scheduling information - This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
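As a rough illustration, the sketch below shows in C what a PCB might contain. The field names are hypothetical; real kernels (such as Linux's task_struct) are far more elaborate.

/* Illustrative PCB sketch; field names are hypothetical. */
struct pcb {
    int pid;                                 /* process identifier */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
    unsigned long program_counter;           /* address of the next instruction */
    unsigned long registers[16];             /* saved CPU registers */
    int priority;                            /* CPU-scheduling information */
    struct pcb *next;                        /* link in a scheduling queue */
};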

THREADS
A thread is a single sequential flow of execution within a process. It is often referred to as a lightweight process.

There can be more than one thread inside a process, where each thread of the same process uses a separate program counter, a stack of activation records, and control blocks.

Since threads share the same data and code, the cost of communication between threads is low. Creating and terminating a thread is faster than creating or terminating a process.

Context switching is faster between threads than between processes.
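A minimal POSIX-threads sketch (assuming a Linux system; compile with gcc -pthread) showing two threads that share a global variable while each runs on its own stack:

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                /* shared: lives in the data segment */

void *worker(void *arg) {
    int local = *(int *)arg;           /* local: lives on this thread's stack */
    shared_counter += local;           /* left unsynchronized here, for brevity */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);            /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}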


COMPARISON OF PROCESS AND THREAD IN OS

| Comparison Basis | Process | Thread |
| --- | --- | --- |
| Definition | A process is a program under execution, i.e. an active program. | A thread is a lightweight process that can be managed independently by a scheduler. |
| Context switching time | Processes require more time for context switching as they are heavier. | Threads require less time for context switching as they are lighter than processes. |
| Memory sharing | Processes are totally independent and don't share memory. | A thread may share some memory with its peer threads. |
| Communication | Communication between processes requires more time than between threads. | Communication between threads requires less time than between processes. |
| Blocking | If a process gets blocked, the remaining processes can continue execution. | If a user-level thread gets blocked, all of its peer threads also get blocked. |
| Resource consumption | Processes require more resources than threads. | Threads generally need fewer resources than processes. |
| Dependency | Individual processes are independent of each other. | Threads are parts of a process and so are dependent. |
| Data and code sharing | Processes have independent data and code segments. | A thread shares the data segment, code segment, files, etc. with its peer threads. |
| Treatment by OS | All the different processes are treated separately by the operating system. | All user-level peer threads are treated as a single task by the operating system. |
| Time for creation | Processes require more time for creation. | Threads require less time for creation. |
| Time for termination | Processes require more time for termination. | Threads require less time for termination. |

OPERATIONS ON PROCESSES
The processes in the system can execute concurrently, and they must be created and deleted
dynamically. Thus, the operating system must provide a mechanism (or facility) for process creation
and termination.

Process Creation:
A process may create several new processes, via a create-process system call, during the course of
execution. The creating process is called a parent process, whereas the new processes are called the
children of that process. Each of these new processes may in turn create other processes.
When a process creates a new process, two possibilities exist in terms of execution:

1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated (see the sketch below).
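A minimal sketch of the second possibility (the parent waiting), using the Unix fork and wait calls; omitting the wait() call would give the first possibility, concurrent execution:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a child process */
    if (pid < 0) {
        perror("fork");              /* fork failed */
        exit(1);
    } else if (pid == 0) {
        printf("child: pid %d\n", getpid());
        exit(0);                     /* child terminates */
    } else {
        wait(NULL);                  /* parent waits for the child */
        printf("parent: child finished\n");
    }
    return 0;
}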

Process Termination:
A process terminates when it finishes executing its final statement and asks the operating system to
delete it by using the exit system call. At that point, the process may return data (output) to its parent
process (via the wait system call). All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.

Termination occurs under additional circumstances. A process can cause the termination of another
process via an appropriate system call (for example, abort). Usually, only the parent of the process that
is to be terminated can invoke such a system call. Otherwise, users could arbitrarily kill each other's
jobs. A parent therefore needs to know the identities of its children. Thus, when one process creates a
new process, the identity of the newly created process is passed to the parent.

COOPERATING PROCESSES
The concurrent processes executing in the operating system may be either independent processes or
cooperating processes. A process is independent if it cannot affect or be affected by the other processes
executing in the system. Clearly, any process that does not share any data (temporary or persistent) with
any other process is independent.

We may want to provide an environment that allows process cooperation for several reasons:

Information sharing: Since several users may be interested in the same piece of information (for
instance, a shared file), we must provide an environment to allow concurrent access to these types of
resources.
Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will execute in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).

Modularity: We may want to construct the system in a modular fashion, dividing the system functions
into separate processes or threads.

Convenience: Even an individual user may have many tasks on which to work at one time. For instance,
a user may be editing, printing, and compiling in parallel.

Concurrent execution of cooperating processes requires mechanisms that allow processes to communicate with one another and to synchronize their actions.

INTERPROCESS COMMUNICATION (IPC)

Another way to achieve the same effect is for the operating system to provide the means for cooperating processes to communicate with each other via an interprocess communication (IPC) facility.

IPC provides a mechanism to allow processes to communicate and to synchronize their actions without
sharing the same address space. IPC is particularly useful in a distributed environment where the
communicating processes may reside on different computers connected with a network.

Processes can communicate with each other through both:

1. Shared Memory
2. Message passing

An operating system can implement both methods of communication. An example is a chat program
used on the World Wide Web. IPC is best provided by a message-passing system, and message systems
can be defined in many ways.
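As one concrete illustration of message passing, the sketch below uses a Unix pipe (assuming a POSIX system) to send a short message from a child process to its parent:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];
    pipe(fd);                        /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {               /* child: writes a message */
        close(fd[0]);
        write(fd[1], "hello", 6);    /* 6 bytes: "hello" plus the '\0' */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                    /* parent: reads the message */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                      /* reap the child */
    return 0;
}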
PROCESS SCHEDULING
• The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
• The act of determining which process is in the ready state, and should be moved to
the running state is known as Process Scheduling.
• The prime aim of the process scheduling system is to keep the CPU busy all the time and to
deliver minimum response time for all programs.
• To meet these objectives, the Process Scheduler selects an available process for program
execution on the CPU.

Types of Process Schedulers:

A scheduler is a type of system software that handles process scheduling.

There are mainly three types of process schedulers:

1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler

The long-term scheduler is also known as the job scheduler. It selects processes from the job queue and loads them into memory for execution. It also regulates the degree of multiprogramming. The main goal of this scheduler is to offer a balanced mix of jobs (such as CPU-bound and I/O-bound jobs), which keeps multiprogramming manageable.

The medium-term scheduler is an important part of swapping. It handles the swapped-out processes. Under this scheduler, a running process that makes an I/O request can be suspended and later resumed.

The short-term scheduler is also known as the CPU scheduler. It selects one process from the group of processes that are ready to execute and allocates the CPU to it. The dispatcher then gives control of the CPU to the process selected by the short-term scheduler.
Scheduling falls into one of two general categories:

PREEMPTIVE SCHEDULING: Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state (the CPU is forcibly taken away from the running process).

Note: Whenever a high-priority process wants to execute, the running low-priority process can be preempted and the processor assigned to the high-priority process.

NON-PREEMPTIVE SCHEDULING: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state (the process gives up the CPU voluntarily).

Note: Once a process is allotted the CPU, it keeps it until it completes its task or moves to the waiting state.

Scheduling Queues:

• All processes, upon entering into the system, are stored in the Job Queue.
• Processes in the Ready state are placed in the Ready Queue.
• Processes waiting for a device to become available are placed in Device Queues. There are
unique device queues available for each I/O device.

SCHEDULING CRITERIA
CPU scheduling is the process of determining which process or task is to be executed by the central
processing unit (CPU) at any given time. It is an important component of modern operating systems that
allows multiple processes to run simultaneously on a single processor. The CPU scheduler determines
the order and priority in which processes are executed and allocates CPU time accordingly.

The CPU scheduling criteria are as follows:

CPU Utilization:
The main objective of a CPU scheduling algorithm is to keep the CPU busy. CPU utilization may range from about 40% (lightly loaded) to 90% (heavily loaded) depending on the load on the system.
Throughput:
Throughput is the number of processes being executed and completed per unit time. Throughput
varies depending on the length of each process.
Turnaround time:
The time elapsed from the time of submission of a process to the time of completion is known as
the turnaround time. It is the total time spent by the system on waiting and executing.
Response time:
Response time is the time taken from the submission of a process request until the first response
is produced.
Waiting time:
Waiting time is the time spent by the process in the waiting queue.
Arrival Time: Time at which the process arrives in the ready queue.

Completion Time: Time at which the process completes its execution.

Burst Time: Time required by a process for CPU execution.

Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

Waiting Time = Turn Around Time – Burst Time
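As a quick illustration of these formulas (figures assumed): a process with Arrival Time = 2, Burst Time = 5, and Completion Time = 12 has Turn Around Time = 12 - 2 = 10 and Waiting Time = 10 - 5 = 5.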

CPU scheduling decisions take place under the following four circumstances:

1. Process switches from running state to waiting state.

2. Process switches from running state to ready state (for example, when an interrupt occurs).

3. Process switches from waiting state to ready state (for example, at completion of I/O).

4. Process gets terminated.

Scheduling Queues:

JOB QUEUE - As processes enter the system, they are put into a job queue, which consists of all processes in the system.

READY QUEUE - The processes that are residing in main memory and are ready and waiting to execute are kept on a list known as the ready queue.
SCHEDULING ALGORITHMS
Scheduling algorithms are used to decide which of the processes in the ready queue is to be allocated the CPU.

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue for execution. The selection is carried out by the short-term (CPU) scheduler, which selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.

❖ First Come First Serve Scheduling
❖ Shortest Job First Scheduling
❖ Priority Scheduling
❖ Round Robin Scheduling
❖ Multilevel Queue Scheduling
❖ Multilevel Feedback Queue Scheduling

First Come First Serve Scheduling:

• By far the simplest CPU scheduling algorithm.
• The process that requests the CPU first is allocated the CPU first; this is easily managed with a FIFO queue.
• When a process enters the ready queue, its PCB is linked to the tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the queue.
• It is always non-preemptive in nature.
• Once the CPU has been allocated to a process, that process holds the CPU until it releases it, either by terminating or by requesting I/O.
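As an illustrative example (the figures are assumed for this sketch): suppose processes P1, P2, and P3 arrive at time 0 in that order, with CPU burst times of 24, 3, and 3 ms. Under FCFS, P1 runs first, so the waiting times are 0, 24, and 27 ms, giving an average waiting time of (0 + 24 + 27) / 3 = 17 ms. Had the same processes arrived in the order P2, P3, P1, the waiting times would be 6, 0, and 3 ms and the average would drop to 3 ms, which shows how sensitive FCFS is to arrival order.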

Shortest Job First Scheduling:

• SJF allocates the CPU to the process that has the smallest CPU burst time.

• If the CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.

• SJF is used frequently in long-term scheduling, but it cannot be implemented at the level of short-term scheduling because there is no way to know the length of the next CPU burst in advance.

• SJF can either be a preemptive or non-preemptive Scheduling algorithm.


• Easy to implement in Batch systems where required CPU time is known in advance

• Best approach to minimize waiting time.
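For an assumed set of processes P1, P2, P3, and P4 arriving at time 0 with burst times of 6, 8, 7, and 3 ms, SJF runs them in the order P4, P1, P3, P2. The waiting times are then 3, 16, 9, and 0 ms, giving an average of 28 / 4 = 7 ms; FCFS on the same set would average (0 + 6 + 14 + 21) / 4 = 10.25 ms.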

Priority Scheduling:
• Out of all the available processes, CPU is assigned to the process having the highest priority.
• In case of a tie, it is broken by FCFS Scheduling.
• Priority scheduling can be used in both preemptive and non-preemptive modes.
• The waiting time for the process having the highest priority will always be zero in
preemptive mode.
• The waiting time for the process having the highest priority may not be zero in non-
preemptive mode.
• The main drawback of this scheduling algorithm is indefinite blocking or starvation (this
algorithm can leave the low priority processes waiting indefinitely).
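For an assumed set of processes P1, P2, and P3 arriving at time 0 with burst times of 10, 1, and 2 ms and priorities 3, 1, and 2 (a lower number meaning a higher priority), non-preemptive priority scheduling runs P2, then P3, then P1. The waiting times are 3, 0, and 1 ms, for an average of 4 / 3 ≈ 1.33 ms.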

Round Robin Scheduling:

• Round Robin scheduling is mainly designed for time-shared systems. Each process executes for a fixed amount of time called the time quantum.

• The time quantum or time slice is a small unit of time, generally ranging from 10 to 100 milliseconds.

• A process executes within its time slice. After that, the process is preempted and another process is allowed to execute for the given time slice.

• If the time quantum is too large, the result is the same as FCFS scheduling. If the time quantum is too small, processor throughput is reduced because more time is spent on context switching.

• It is always preemptive in nature.
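Reusing the assumed processes from the FCFS example (P1, P2, P3 with bursts of 24, 3, and 3 ms) and a time quantum of 4 ms: P1 runs from 0-4, P2 from 4-7, P3 from 7-10, and P1 then runs uninterrupted to completion at 30. The waiting times are 6, 4, and 7 ms, so the average waiting time is 17 / 3 ≈ 5.67 ms.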

Multilevel Queue Scheduling:

• This scheduling algorithm is designed for situations in which processes are classified into different groups, such as foreground (interactive) and background (batch) processes.
• It partitions the ready queue into several separate queues, and processes are permanently assigned to one queue.
• The priority of foreground processes is higher than that of background processes.
• For example, the foreground queue might be scheduled by the Round Robin algorithm, while the background queue is scheduled by an FCFS algorithm.
• This scheduling helps the computer system manage its tasks more efficiently by prioritizing them based on their importance and using different scheduling techniques for different types of tasks.
• Processes are assigned to the appropriate queue based on their characteristics, such as their priority, memory requirements, and CPU usage.

Multilevel Feedback Queue Scheduling:

• This algorithm allows a process to move between the queues.
• The main idea is to separate processes with different CPU burst times.
• If a process uses too much CPU time, it is moved to a lower-priority queue.
• If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue.
• This form of aging prevents starvation.
• Its main features include multiple queues, dynamically adjusted priorities, time slicing, a feedback mechanism, and preemption.
MULTIPLE-PROCESSOR SCHEDULING
In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes possible. However, multiple-processor scheduling is more complex than single-processor scheduling. When the processors are identical in terms of their functionality, any available processor can be used to run any process in the queue.

Approaches to Multiple-Processor Scheduling:

There are two approaches to multiple processor scheduling in the operating system: Symmetric
Multiprocessing and Asymmetric Multiprocessing.

Symmetric Multiprocessing: It is used where each processor is self-scheduling. All processes may be
in a common ready queue, or each processor may have its private queue for ready processes.

Asymmetric Multiprocessing: It is used when all the scheduling decisions and I/O processing are
handled by a single processor called the Master Server. The other processors execute only the user
code.

Processor Affinity:

Processor Affinity means a process has an affinity for the processor on which it is currently running.
When a process runs on a specific processor, there are certain effects on the cache memory. The data
most recently accessed by the process populate the cache for the processor.

1. Soft Affinity: When an operating system has a policy of keeping a process running on the same
processor but not guaranteeing it will do so, this situation is called soft affinity.
2. Hard Affinity: Hard Affinity allows a process to specify a subset of processors on which it may
run.
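On Linux, hard affinity can be requested with the sched_setaffinity system call. A minimal sketch (assuming a Linux system) that pins the calling process to CPU 0:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);                      /* start with an empty CPU set */
    CPU_SET(0, &set);                    /* allow only CPU 0 */
    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}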
Load Balancing:

Load balancing is the practice of keeping the workload evenly distributed across all processors in an SMP system. Load balancing is necessary only on systems where each processor has its own private queue of processes eligible to execute.

1. Push Migration: In push migration, a task routinely checks the load on each processor. If it
finds an imbalance, it evenly distributes the load on each processor by moving the processes
from overloaded to idle or less busy processors.
2. Pull Migration: Pull Migration occurs when an idle processor pulls a waiting task from a busy
processor for its execution.

SYSTEM CALL INTERFACE


A system call is a mechanism that provides the interface between a process and the operating system. It
is a programmatic method in which a computer program requests a service from the kernel of the OS.

System calls offer the services of the operating system to user programs via an API (Application Programming Interface). System calls are the only entry points into the kernel.
System calls for process management
• The fork system call is used to create a new process, which is a duplicate of the calling process.
• The duplicate receives a copy of the parent's data and registers, and its file descriptors refer to the same open files as the parent's. The original process is called the parent process and the duplicate is called the child process.
• The fork call returns a value, which is zero in the child and equal to the child's PID (Process Identifier) in the parent. A system call such as exit requests the service of terminating a process.
• Loading a program, that is, replacing the original process image with a new one, requires the exec call. The PID returned by fork helps distinguish between the child and parent processes.

Process management system calls in Linux:

fork − Creates a duplicate (child) process from the parent process.

wait − Suspends the calling process until one of its child processes terminates.

waitpid − Suspends the calling process until the specific child identified by its PID changes state.

exec − Loads the selected program into memory, replacing the current process image.

exit − Terminates the calling process.
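A minimal sketch tying these calls together (assuming a Unix-like system; /bin/ls is just an example program): the parent forks a child, the child replaces its image with execl, and the parent waits for that specific child with waitpid.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* duplicate the calling process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                      /* child: fork returned 0 */
        execl("/bin/ls", "ls", "-l", (char *)NULL);  /* replace the image */
        perror("execl");                 /* reached only if exec fails */
        _exit(1);
    }
    int status;                          /* parent: fork returned the child's PID */
    waitpid(pid, &status, 0);            /* wait for this specific child */
    if (WIFEXITED(status))
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}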
