Os Unit-2
PROCESS CONCEPTS
PROCESS:
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion. A process is more than the program code, which is sometimes known as the text section.
A process is an active entity, with a program counter specifying the next instruction to execute and a set
of associated resources. A program becomes a process when an executable file is loaded into memory.
As a process executes, it changes state. The state of a process is defined in part by the current activity of
that process. Each process may be in one of the following states:
• New - the process is being created.
• Ready - the process is waiting to be assigned to a processor.
• Running - instructions are being executed.
• Waiting - the process is waiting for some event to occur (such as an I/O completion).
• Terminated - the process has finished execution.
Each process is represented in the operating system by a Process control block (PCB) - also called a
task control block. It contains many pieces of information associated with a specific process, including
these:
The PCB keeps track of a process by storing information such as its state, I/O status, and CPU-scheduling
parameters.
• Process state - The state may be new, ready, running, waiting, halted, and so on.
• Program counter - The counter indicates the address of the next instruction to be executed for this
process.
• CPU registers - The registers vary in number and type, depending on the computer architecture. They
include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-
code information.
• CPU-scheduling information - This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
THREADS
A thread is a single sequential flow of execution within a process. It is often referred to as a
lightweight process.
There can be more than one thread inside a process; each thread of the same process has its own
program counter and its own stack of activation records and control blocks.
Since threads share the same data and code, the cost of operating between threads is low. Creating and
terminating a thread is faster than creating or terminating a process.
• Context switching time: Processes require more time for context switching, as they are heavier;
threads require less time, as they are lighter than processes.
• Memory sharing: Processes are totally independent and do not share memory; a thread may share
some memory with its peer threads.
• Blocking: If a process gets blocked, the remaining processes can continue execution; if a user-level
thread gets blocked, all of its peer threads also get blocked.
• Resource consumption: Processes require more resources than threads; threads generally need
fewer resources than processes.
• Dependency: Individual processes are independent of each other; threads are parts of a process and
so are dependent.
• Data and code sharing: Processes have independent data and code segments; a thread shares the
data segment, code segment, files, etc. with its peer threads.
• Treatment by OS: All the different processes are treated separately by the operating system; all
user-level peer threads are treated as a single task by the operating system.
• Time for creation: Processes require more time for creation; threads require less time.
• Time for termination: Processes require more time for termination; threads require less time.
OPERATIONS ON PROCESSES
The processes in the system can execute concurrently, and they must be created and deleted
dynamically. Thus, the operating system must provide a mechanism (or facility) for process creation
and termination.
Process Creation:
A process may create several new processes, via a create-process system call, during the course of
execution. The creating process is called a parent process, whereas the new processes are called the
children of that process. Each of these new processes may in turn create other processes.
When a process creates a new process, two possibilities exist in terms of execution:
• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated.
Process Termination:
A process terminates when it finishes executing its final statement and asks the operating system to
delete it by using the exit system call. At that point, the process may return data (output) to its parent
process (via the wait system call). All the resources of the process-including physical and virtual
memory, open files, and I/O buffers-are deallocated by the operating system.
Termination occurs under additional circumstances. A process can cause the termination of another
process via an appropriate system call (for example, abort). Usually, only the parent of the process that
is to be terminated can invoke such a system call. Otherwise, users could arbitrarily kill each other's
jobs. A parent therefore needs to know the identities of its children. Thus, when one process creates a
new process, the identity of the newly created process is passed to the parent.
COOPERATING PROCESSES
The concurrent processes executing in the operating system may be either independent processes or
cooperating processes. A process is independent if it cannot affect or be affected by the other processes
executing in the system. Clearly, any process that does not share any data (temporary or persistent) with
any other process is independent.
We may want to provide an environment that allows process cooperation for several reasons:
Information sharing: Since several users may be interested in the same piece of information (for
instance, a shared file), we must provide an environment to allow concurrent access to these types of
resources.
Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of
which will execute in parallel with the others. Such a speedup can be achieved only if the computer
has multiple processing elements (such as CPUs or I/O channels).
Modularity: We may want to construct the system in a modular fashion, dividing the system functions
into separate processes or threads.
Convenience: Even an individual user may have many tasks on which to work at one time. For instance,
a user may be editing, printing, and compiling in parallel.
IPC provides a mechanism to allow processes to communicate and to synchronize their actions without
sharing the same address space. IPC is particularly useful in a distributed environment where the
communicating processes may reside on different computers connected with a network.
1. Shared Memory
2. Message passing
An operating system can implement both methods of communication. An example is a chat program
used on the World Wide Web. IPC is best provided by a message-passing system, and message systems
can be defined in many ways.
PROCESS SCHEDULING
• The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that users can
interact with each program while it is running.
• The act of determining which process is in the ready state, and should be moved to
the running state is known as Process Scheduling.
• The prime aim of the process scheduling system is to keep the CPU busy all the time and to
deliver minimum response time for all programs.
• To meet these objectives, the Process Scheduler selects an available process for program
execution on the CPU.
The long-term scheduler is also known as the job scheduler. It selects processes from the job queue and
loads them into memory for execution, and it regulates the degree of multiprogramming.
The main goal of this scheduler is to offer a balanced mix of jobs (CPU-bound and I/O-bound), which
allows multiprogramming to be managed effectively.
The medium-term scheduler is an important part of swapping: it handles the swapped-out processes. A
running process that makes an I/O request can become suspended and be swapped out of memory; the
medium-term scheduler later swaps it back in and resumes it.
The short-term scheduler is also known as the CPU scheduler. It selects from among the processes that
are ready to execute and allocates the CPU to one of them. The dispatcher gives control of the
CPU to the process selected by the short-term scheduler.
Scheduling falls into one of two general categories:
PREEMPTIVE SCHEDULING: Preemptive scheduling is used when a process switches from the running
state to the ready state or from the waiting state to the ready state (the CPU forcibly interrupts the process).
Note: Whenever a high-priority process wants to execute, a running low-priority process can be
preempted and the processor assigned to that high-priority process.
NON-PREEMPTIVE SCHEDULING: Once a process is allotted the CPU, it keeps the CPU until it
completes its task or moves to the waiting state.
Scheduling Queues:
• All processes, upon entering the system, are stored in the Job Queue.
• Processes in the Ready state are placed in the Ready Queue.
• Processes waiting for a device to become available are placed in Device Queues. There are
unique device queues available for each I/O device.
SCHEDULING CRITERIA
CPU scheduling is the process of determining which process or task is to be executed by the central
processing unit (CPU) at any given time. It is an important component of modern operating systems that
allows multiple processes to run simultaneously on a single processor. The CPU scheduler determines
the order and priority in which processes are executed and allocates CPU time accordingly.
CPU Utilization:
The main objective of a CPU-scheduling algorithm is to keep the CPU busy. CPU utilization
typically ranges from about 40% (lightly loaded system) to 90% (heavily loaded system).
Throughput:
Throughput is the number of processes being executed and completed per unit time. Throughput
varies depending on the length of each process.
Turnaround time:
The time elapsed from the time of submission of a process to the time of completion is known as
the turnaround time. It is the total time spent by the system on waiting and executing.
Response time:
Response time is the time taken from the submission of a process request until the first response
is produced.
Waiting time:
Waiting time is the time spent by the process in the waiting queue.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Turnaround Time: Difference between completion time and arrival time.
CPU-scheduling decisions may take place when:
1. A process switches from the running state to the waiting state (for example, on an I/O request).
2. A process switches from the running state to the ready state (for example, when an interrupt occurs).
3. A process switches from the waiting state to the ready state (for example, at completion of I/O).
4. A process terminates.
Scheduling Queues:
JOB QUEUE – As processes enter the system, they are put into the job queue, which
consists of all processes in the system.
READY QUEUE – The processes that are residing in main memory and are ready
and waiting to execute are kept on a list known as the ready queue.
SCHEDULING ALGORITHMS
Scheduling algorithms are used to solve the problem of deciding the set of the processes in the ready
queue that has to be allocated the CPU time.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready
queue to be executed. The selection is carried out by the short-term (CPU) scheduler, which chooses
from among the processes in memory that are ready to execute and assigns the CPU to one of them.
Shortest Job First (SJF) Scheduling:
• The CPU is assigned to the process with the smallest next CPU burst. If the CPU bursts of two
processes are the same, FCFS scheduling is used to break the tie.
• SJF is used frequently in long-term scheduling, but it cannot be implemented exactly in short-term
scheduling, because there is no way to know the length of the next CPU burst in advance.
Priority Scheduling:
• Out of all the available processes, CPU is assigned to the process having the highest priority.
• In case of a tie, it is broken by FCFS Scheduling.
• Priority Scheduling can be used in both preemptive and non-preemptive mode
• The waiting time for the process having the highest priority will always be zero in
preemptive mode.
• The waiting time for the process having the highest priority may not be zero in non-
preemptive mode.
• The main drawback of this scheduling algorithm is indefinite blocking, or starvation: the
algorithm can leave low-priority processes waiting indefinitely. A common remedy is aging,
which gradually increases the priority of processes that wait in the system for a long time.
Round Robin Scheduling:
• The time quantum or time slice is a small unit of time, generally ranging from 10 to
100 milliseconds.
• Each process executes for at most one time slice. After that, the process is preempted and the
next process is allowed to execute for the given time slice.
• If the time quantum is too large, Round Robin behaves the same as FCFS scheduling. If it is too
small, throughput is reduced because more time is spent on context switching.
There are two approaches to multiple processor scheduling in the operating system: Symmetric
Multiprocessing and Asymmetric Multiprocessing.
Symmetric Multiprocessing: It is used where each processor is self-scheduling. All processes may be
in a common ready queue, or each processor may have its private queue for ready processes.
Asymmetric Multiprocessing: It is used when all the scheduling decisions and I/O processing are
handled by a single processor called the Master Server. The other processors execute only the user
code.
Processor Affinity:
Processor Affinity means a process has an affinity for the processor on which it is currently running.
When a process runs on a specific processor, there are certain effects on the cache memory. The data
most recently accessed by the process populate the cache for the processor.
1. Soft Affinity: When an operating system has a policy of keeping a process running on the same
processor but not guaranteeing it will do so, this situation is called soft affinity.
2. Hard Affinity: Hard Affinity allows a process to specify a subset of processors on which it may
run.
Load Balancing:
Load Balancing is the practice of keeping the workload evenly distributed across all processors in
an SMP system. Load balancing is necessary only on systems where each processor has its own private
queue of processes eligible to execute; with a common ready queue, an idle processor simply takes the
next process from that queue.
1. Push Migration: In push migration, a task routinely checks the load on each processor. If it
finds an imbalance, it evenly distributes the load on each processor by moving the processes
from overloaded to idle or less busy processors.
2. Pull Migration: Pull Migration occurs when an idle processor pulls a waiting task from a busy
processor for its execution.
A system call offers the services of the operating system to user programs via the API (Application
Programming Interface). System calls are the only entry points into the kernel.
System calls for Process management
• The fork system call is used to create a new process, which is a duplicate of the calling process.
• The duplicate shares the parent's open file descriptors and begins with a copy of its data and
registers. The original process is called the parent process and the duplicate is called the child process.
• The fork call returns a value, which is zero in the child and equal to the child's PID (Process
Identifier) in the parent; this return value lets a program distinguish between child and parent. A
system call like exit requests the service of terminating the calling process.
• Loading a new program, i.e. replacing the original memory image of the process, requires the
exec system call.
wait − A parent process uses wait to suspend itself until a child process completes its work.