
OPERATING SYSTEM

UNIT 2 - PROCESS MANAGEMENT

TOPICS

2.10 PROCESS CONCEPT

2.11 PROCESS SCHEDULING

2.12 OPERATIONS ON PROCESSES

2.13 INTER-PROCESS COMMUNICATION

2.14 PROCESS SCHEDULING: BASIC CONCEPTS

2.15 SCHEDULING CRITERIA

2.16 SCHEDULING ALGORITHMS

2.17 MULTIPLE-PROCESSOR SCHEDULING


2.10 PROCESS CONCEPT

Program & Process:


A program is a static entity: a set of instructions. A program exists in a single address space and
does not execute by itself. On a single-user system such as Microsoft Windows, a user may be
able to run several programs at one time: a word processor, a web browser, and an e-mail client.
A process is a dynamic entity: the execution of a sequence of instructions. A process exists for a
limited span of time. A process is a program under execution. Two or more processes may
execute the same program, each using its own data and resources. A system consists of a
collection of processes:
i) operating-system processes executing system code and
ii) user processes executing user code.

Program:
 A passive entity, such as a file containing a list of instructions stored on disk.
 Has a longer lifetime.
 Stored on the hard disk; a stored program requires no resources.

Process:
 An active entity, with a program counter specifying the next instruction to execute.
 Has a shorter lifetime.
 Requires resources such as memory, I/O devices and the CPU.

Process in Memory

A process in memory generally consists of code, data, heap and stack sections:
 The compiled program code is stored in the text section.
 Global and static variables are stored in the data section.
 Dynamic memory allocations are managed using the heap section.
 Local variables, function parameters and return addresses are stored in the stack section.

Stack and heap start at the opposite ends and grow towards each other
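This layout can be made concrete with a small C program; the sketch below (with hypothetical
variable names) notes which section each object occupies:

#include <stdlib.h>

int global_counter = 0;     /* data section: global variable */
static int static_flag = 1; /* data section: static variable */

int square(int x)           /* compiled code lives in the text section */
{
    int local = x * x;      /* stack section: local variable */
    return local;
}

int main(void)
{
    int *dynamic = malloc(sizeof(int)); /* heap section: dynamic allocation */
    *dynamic = square(5);
    free(dynamic);
    return 0;
}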

Figure 1: Process in memory


Process State:

Figure 2: Process states

The Process changes its state multiple times throughout its life cycle.
New: It is the initial state of a process. This is the state when the process has just been created.
Ready: In the ready state, the process is waiting to be assigned the processor by the short-term
scheduler, so that it can run. This state is immediately after the new state for the process.
Running: The process is said to be in running state when the process instructions are being
executed by the processor. This is done once the process is assigned to the processor using the
short-term scheduler.
Waiting: The process is in the waiting (blocked) state if it is waiting for some event to occur. This
event is often the completion of an I/O request, which is handled by the devices and does not
require the processor. After the event is complete, the process again goes to the ready state.
Terminated: The process is terminated once it finishes its execution. In the terminated state, the
process is removed from main memory and its process control block is also deleted.

Logically, the 'Running' and 'Ready' states are similar. In both cases the process is willing to
run, only in the case of 'Ready' state, there is temporarily no CPU available for it. The ‘Waiting’
state is different from the 'Running' and 'Ready' states in that the process cannot run, even if the
CPU is available.

Process Control Block:


A process control block (PCB) is a data structure used by computer operating systems to store all
the information about a process. The PCB is also called a task control block.
PCB contains many pieces of information associated with a specific process.
The following are the data items
Process State - This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number-This shows the number of the particular process.
Program Counter-This contains the address of the next instruction that needs to be executed in
the process.
Registers-This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files-These are the different files that are associated with the process
CPU Scheduling Information-The process priority, pointers to scheduling queues etc. is the
CPU scheduling information that is contained in the PCB. This may also include any other
scheduling parameters.
Memory Management Information-The memory management information includes the page
tables or the segment tables depending on the memory system used. It also contains the value of
the base registers, limit registers etc.
I/O Status Information-This information includes the list of I/O devices used by the process, the
list of files etc.
Accounting information-The time limits, account numbers, amount of CPU used, process
numbers etc. are all a part of the PCB accounting information.
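The fields above can be pictured as a C structure. The sketch below is purely illustrative; the
field names, types, and sizes are assumptions, not those of any real kernel:

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

#define MAX_OPEN_FILES 16

struct pcb {
    enum proc_state state;          /* process state                           */
    int pid;                        /* process number                          */
    unsigned long program_counter;  /* address of the next instruction         */
    unsigned long registers[16];    /* saved accumulator/index/stack/GP registers */
    int priority;                   /* CPU-scheduling information              */
    unsigned long base, limit;      /* memory-management registers             */
    int open_files[MAX_OPEN_FILES]; /* I/O status: open file descriptors       */
    unsigned long cpu_time_used;    /* accounting information                  */
    struct pcb *next;               /* link field for scheduling queues        */
};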

Figure 3: CPU switch from one process to another


Figure 4: Fields in PCB

Context Switch:
When the CPU switches to another process, the system must save the state of the old process and
load the saved state of the new process. The context of a process is represented in the PCB of the
process; it includes
 the values of the CPU registers,
 the process state, and
 memory-management information.
Disadvantages:
1) Context-switch time is pure overhead, because the system does no useful work while
switching.
2) Context-switch times are highly dependent on hardware support.

2.11 PROCESS SCHEDULING

The process scheduler selects an available process for program execution on the CPU.
The process state consists of everything necessary to resume the process's execution if it is
temporarily put aside.

Figure 5: Process Scheduling queues

The following are different types of process scheduling queues.


Job Queue: As processes enter the system, they are put in the job queue, which consists of all
processes in the system.
Ready Queue: Processes that reside in main memory, ready and waiting to execute, are kept in a
list called the ready queue, generally stored as a linked list. The ready-queue header contains
pointers to the first and final PCBs in the list, and each PCB contains a pointer field that points to
the next PCB in the ready queue.
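This linked-list organization can be sketched in C as follows (the struct pcb here is a hypothetical
stand-in with only the fields the queue needs):

#include <stddef.h>

struct pcb {                /* minimal stand-in for a real PCB */
    int pid;
    struct pcb *next;       /* points to the next PCB in the queue */
};

struct ready_queue {
    struct pcb *head;       /* first PCB in the ready queue */
    struct pcb *tail;       /* final PCB in the ready queue */
};

/* Append a PCB at the tail of the ready queue. */
void enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail == NULL)
        q->head = q->tail = p;
    else {
        q->tail->next = p;
        q->tail = p;
    }
}

/* Remove and return the PCB at the head (NULL if the queue is empty). */
struct pcb *dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p != NULL) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return p;
}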
Device Queue: The list of processes waiting for a particular I/O device is called a device queue.
When the CPU is allocated to a process, the process may execute for some time and then quit, be
interrupted, or wait for the occurrence of a particular event such as the completion of an I/O
request; the I/O device, however, may be busy with some other process, in which case the process
must wait for it and is placed in the device queue. Each device has its own queue.
Process scheduling is represented using a queueing diagram. Queues are represented by
rectangular boxes, and the resources they need are represented by circles. It contains two kinds of
queues: the ready queue and the device queues.
Once a process is assigned to the CPU and is executing, the following events can occur:
 It can issue an I/O request and be placed in an I/O queue.
 It can create a sub-process and wait for the sub-process's termination.
 It can be removed from the CPU as a result of an interrupt and be put back
into the ready queue.

Figure 6: Queuing Diagram representation of Process scheduling

Schedulers:
The following are the different type of schedulers
1. Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue.
Long-term scheduler is invoked very infrequently (seconds, minutes) (may be slow)
The long-term scheduler controls the degree of multiprogramming
2. Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU.
Short-term scheduler is invoked very frequently (milliseconds) (must be fast)
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
Why should the long-term scheduler select a good mix of I/O-bound and CPU-bound
processes?
i. If all processes are I/O bound, then
 Ready-queue will almost always be empty, and
 Short-term scheduler will have little to do.
ii. If all processes are CPU bound, then
 I/O waiting queue will almost always be empty (devices will go unused) and
 System will be unbalanced.
3. Medium-term schedulers-
Some time-sharing systems have medium-term scheduler
 The scheduler removes processes from memory and thus reduces the degree of
multiprogramming.
 Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.

 The process is swapped out, and is later swapped in, by the scheduler

Figure 7: Addition of medium-term scheduling to the queueing diagram

Long-Term Scheduler:
 Also called the job scheduler.
 Selects which processes should be brought into the ready queue.
 Needs to be invoked only when a process leaves the system, and therefore executes much
less frequently.
 May be slow; minutes may separate the creation of one new process and the next.
 Controls the degree of multiprogramming.

Short-Term Scheduler:
 Also called the CPU scheduler.
 Selects which process should be executed next and allocates the CPU.
 Needs to be invoked whenever a new process must be selected for the CPU, and therefore
executes much more frequently.
 Must be fast; a process may execute for only a few milliseconds.

2.12 OPERATIONS ON PROCESSES


A process is a program under execution, so a process can be created and deleted dynamically.

Process Operations
1) Process Creation and
2) Process Termination

Process Creation
A process may create a new process via a create-process system-call.
The creating process is called a parent-process.
The new process created by the parent is called the child-process (Sub-process).
OS identifies processes by pid (process identifier), which is typically an integer-number.
A process needs following resources to accomplish the task:
 CPU time
 memory and
 I/O devices.

A child-process may
 get resources directly from the OS, or
 be restricted to a subset of the resources of the parent-process. This restriction prevents
any process from overloading the system by creating too many sub-processes.

Two options exist when a process creates a new process:

 The parent & the children execute concurrently.- fork()


 The parent waits until all the children have terminated. – wait()

Two options exist in terms of the address-space of the new process:

 The child-process is a duplicate of the parent-process (it has the same program and data
as the parent). -fork()
 The child-process has a new program loaded into it. -exec()

In UNIX, each process is identified by its process identifier (pid), which is a unique integer.
getpid() – retrieves the process identifier of the calling process.

fork():
• A new process is created by the fork() system-call
• The new process consists of a copy of the address-space of the original process.
• After fork() ,both parent and the child, have the same memory image, the same
environment strings and the same open files

• Both the parent and the child continue execution with one difference:

o The return value of fork() is zero for the new (child) process.
o The return value of fork() is the nonzero pid of the child for the parent-process.
o The return value of fork() is negative if creation of the child process was unsuccessful.
exec():
 The exec() system-call is typically used after a fork() system-call by one of the two
processes to replace the process's memory-space with a new program.
 The parent can issue a wait() system-call to move itself off the ready-queue until the
child terminates.
 exec() replaces the entire current process with a new program: it loads the program
into the current process space and runs it from its entry point.

Figure 8: Creating a separate process using the UNIX fork() system-call
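A representative version of the program Figure 8 depicts is sketched below; the choice of
/bin/ls as the new program is an arbitrary illustration:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child: fork() returned zero */
        execlp("/bin/ls", "ls", NULL);  /* replace the child with a new program */
        perror("exec");                 /* reached only if exec fails */
        exit(1);
    } else {                            /* parent: fork() returned the child's pid */
        wait(NULL);                     /* wait for the child to terminate */
        printf("Child %d complete\n", (int)pid);
    }
    return 0;
}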

Process Termination
A process terminates when it finishes executing its last statement and asks the OS to delete it
using the exit() system call. At that point it returns a status value to its parent process, which
receives it through the wait() system call.

Process termination is required when


 Child has exceeded allocated resources
 Task assigned to child is no longer required

Its resources are returned to the system, it is purged from any system lists or tables, and its
process control block (PCB) is erased, i.e., the PCB's memory space is returned to a free memory
pool. A process terminates, usually for one of the following reasons:

Normal Exit: Most processes terminate because they have finished their job. In UNIX, this call
is exit().

Error Exit: The process discovers a fatal error. For example, a user tries to compile a program
file that does not exist.

Fatal Error: An error caused by the process due to a bug in the program, for example, executing
an illegal instruction, referencing non-existent memory, or dividing by zero.
Killed by Another Process: A process executes a system call telling the operating system to
terminate some other process. In UNIX, this call is kill(). In some systems, when a process is
killed, all processes it created are killed as well (UNIX does not work this way).

exit(): The process executes its last statement and asks the operating system to delete it.
wait(): Output data (the exit status) is sent from the child to the parent.
abort(): The parent may terminate the execution of its child processes.

Some operating systems do not allow a child to exist if its parent has terminated: when a parent
terminates, the OS kills all of its children. This phenomenon is called cascading termination. In
UNIX, if a parent terminates, all its children are given a new parent, the init process, so the
children still have a parent to send their status to.

System calls for process creation and termination

UNIX system call        Win32 system call


fork()                  CreateProcess()
exit()                  TerminateProcess()
wait()                  WaitForSingleObject()
getpid()                GetCurrentProcessId()

2.13 INTER-PROCESS COMMUNICATION


Cooperating Processes & Independent Processes
Processes executing concurrently in the OS may be
1. Independent processes or
2. Co-operating processes.
A process is independent if
 The process cannot affect or be affected by the other processes.
 The process does not share data with other processes.
A process is co-operating if
 The process can affect or be affected by the other processes.
 The process shares data with other processes.
Advantages of process co-operation:
1)Information Sharing
Since many users may be interested in same piece of information (ex: shared file).
2) Computation Speedup
 Divide the task into subtasks.
 Each subtask should be executed in parallel with the other subtasks.
 The speed can be improved only if computer has multiple processing elements such
as CPUs or I/O channels.
3) Modularity
Divide the system-functions into separate processes or threads.
4) Convenience
An individual user may work on many tasks at the same time.
For ex, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an inter-process communication (IPC) mechanism that allows
them to exchange data and information.
INTERPROCESS COMMUNICATION (IPC)
 Mechanism for processes to communicate and to synchronize their actions.
 IPC is carried out using Message Passing and Shared memory

Message Passing
Figure 9 a. Message Passing b. Shared Memory
Message Passing provides a mechanism to communicate and synchronize their actions
without sharing the same address space
IPC facility provides two operations:
 send(message)
 receive(message)
Messages can be of fixed or variable size.
If P and Q wish to communicate, they need to exchange messages using send/receive via
communication link. This Communication link can be implemented using
1. Direct or Indirect communication
2. Synchronous or Asynchronous
3. Automatic or Explicit buffering

1. Direct or Indirect Communication


Direct Communication
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of communication link
 Links are established automatically
 A link is associated with exactly one pair of communicating processes Between each pair
there exists exactly one link
 The link may be unidirectional, but is usually bi-directional
Indirect Communication
Messages are directed and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of communication link
 Link established only if processes share a common mailbox
 A link may be associated with many processes
 Each pair of processes may share several communication links
 Link may be unidirectional or bi-directional
Operations
 create a new mailbox
 send and receive messages through mailbox
 destroy a mailbox

Primitives are defined as:


send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Mailbox sharing
Suppose P1, P2, and P3 share mailbox A. P1 sends a message; P2 and P3 each execute a receive.
Who gets the message?
Solutions:
• Allow a link to be associated with at most two processes.
• Allow only one process at a time to execute a receive operation.
• Allow the system to select the receiver arbitrarily; the sender is notified who the
receiver was.

Direct Communication:
 Each process must explicitly name the recipient/sender.
 A link is established automatically between every pair of processes that want to
communicate; the processes need to know only each other's identity.
 A link is associated with exactly two processes.
 Exactly one link exists between each pair of processes.
 Symmetric addressing: both the sender and the receiver must name the other to
communicate.
 Asymmetric addressing: only the sender names the recipient; the recipient need not name
the sender.

Indirect Communication:
 Messages are sent to and received from mailboxes (or ports).
 A link is established between a pair of processes only if both members share a mailbox.
 A link may be associated with more than two processes.
 A number of different links may exist between each pair of communicating processes.
 Mailbox owned by a process: the owner can only receive and the user can only send; the
mailbox disappears when its owner process terminates.
 Mailbox owned by the OS: the OS allows a process to (1) create a new mailbox, (2) send
and receive messages via it, and (3) delete a mailbox.

2. Synchronous or Asynchronous Communication


Message passing may be either blocking or non-blocking

Blocking is considered synchronous


Blocking send has the sender block until the message is received.
Blocking receive has the receiver block until a message is available.

Non-blocking is considered asynchronous


Non-blocking send has the sender send the message and continue.
Non-blocking receive has the receiver receive a valid message or null.

Synchronous Message Passing:
 Blocking send: the sending process is blocked until the message is received by the
receiving process or by the mailbox.
 Blocking receive: the receiver blocks until a message is available.

Asynchronous Message Passing:
 Non-blocking send: the sending process sends the message and resumes operation.
 Non-blocking receive: the receiver retrieves either a valid message or a null.

3. Automatic or Explicit buffering


A queue of messages is attached to the link; it is implemented in one of three ways:
 Zero capacity – holds 0 messages; the sender must wait for the receiver (rendezvous).
 Bounded capacity – finite length of n messages; the sender must wait if the link is full.
 Unbounded capacity – infinite length; the sender never waits.
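One concrete UNIX realization of blocking send/receive between a parent and its child is a pipe;
the sketch below is a minimal illustration, not the only way message passing is implemented:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char msg[] = "hello";
    char buf[32];

    if (pipe(fd) == -1)          /* fd[0]: read end, fd[1]: write end */
        return 1;

    if (fork() == 0) {           /* child acts as the receiver */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));   /* blocking receive(message) */
        printf("received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                /* parent acts as the sender */
    write(fd[1], msg, strlen(msg) + 1);  /* send(message) */
    close(fd[1]);
    wait(NULL);
    return 0;
}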

Shared Memory
 Every process executes in its own address space, and the OS tries to prevent one process
from accessing another process's memory.
 Shared memory requires that two or more processes agree to remove this restriction.
 The processes exchange information by reading and writing data in the shared areas.
 The OS does not control the data or determine its location; the processes are responsible
for them.
 The processes are also responsible for ensuring that they do not write to the same location
simultaneously.
Example: the Producer-Consumer problem.
 The producer process produces information.
 The consumer process consumes the information.
 E.g., a compiler produces assembly code, which is consumed by an assembler.

Two types of buffer:


Unbounded Buffer
 No limit on the size of the buffer
 Consumer may have to wait for new item but the producer can always produce new item
Bounded Buffer
 Assumes a fixed buffer size.
 The consumer must wait if the buffer is empty, and the producer must wait if the buffer
is full.
Code for Producer-consumer Problem

#define BUFFER_SIZE 10
typedef struct {
    /* ... */
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
// in - points to the next free position in the buffer
// out - points to the first full position in the buffer

//Producer Code:
item nextProduced;

while( true ) {
/* Produce an item and store it in nextProduced */
nextProduced = makeNewItem( . . . );
/* Wait for space to become available */
while( ( ( in + 1 ) % BUFFER_SIZE ) == out )
; /* Do nothing */
/* And then store the item and repeat the loop. */
buffer[ in ] = nextProduced;
in = ( in + 1 ) % BUFFER_SIZE;
}

//Consumer code:
item nextConsumed;
while( true ) {
/* Wait for an item to become available */
while( in == out )
; /* Do nothing */
/* Get the next available item */
nextConsumed = buffer[ out ];
out = ( out + 1 ) % BUFFER_SIZE;
/* Consume the item in nextConsumed
( Do something with it ) */
}

2.14 PROCESS SCHEDULING: BASIC CONCEPTS


Basic Concepts
1) In a single-processor system,
only one process may run at a time;
other processes must wait until the CPU is rescheduled.
2) Objective of multiprogramming:
to have some process running at all times, in order to maximize CPU utilization.

CPU-I/0 Burst Cycle


• Process execution consists of a cycle of
→ CPU execution and
→ I/O wait
• Process execution begins with a CPU burst, followed by an I/O burst, then another CPU
burst, etc…
• Finally, a CPU burst ends with a request to terminate execution.
• An I/O-bound program typically has many short CPU bursts.

• A CPU-bound program might have a few long CPU bursts.

Figure 10: Alternating sequence of CPU and I/O bursts

CPU Scheduler
 The CPU scheduler selects a process from the ready-queue and allocates the CPU to it.
 The ready-queue can be implemented as a FIFO queue, a priority queue, a tree, or an
unordered linked list.
 The records in the queues are generally the process control blocks (PCBs) of the processes.

CPU Scheduling

Four situations under which CPU scheduling decisions take place:

1. When a process switches from the running state to the waiting state. For ex; I/O request.
2. When a process switches from the running state to the ready state. For ex: when an
interrupt occurs.
3. When a process switches from the waiting state to the ready state. For ex: completion of
I/O.
4. When a process terminates.
Scheduling under 1 and 4 is non-preemptive.
All other scheduling is preemptive

Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either
 by terminating or
 by switching to the waiting state.
Preemptive scheduling, by contrast, is driven by the idea of prioritized computation: processes
that are runnable may be temporarily suspended.
Disadvantages of preemption:
 It incurs a cost associated with access to shared data.
 It affects the design of the OS kernel.

Dispatcher
It gives control of the CPU to the process selected by the short-term scheduler.
The function involves:
 switching context,
 switching to user mode, and
 jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, since it is invoked during every process switch.
Dispatch latency is the time taken by the dispatcher to stop one process and start another
running.

2.15 SCHEDULING CRITERIA


1. CPU utilization – keep the CPU as busy as possible.
2. Throughput – the number of processes that complete their execution per time unit.
3. Turnaround time – the amount of time to execute a particular process.
4. Waiting time – the amount of time a process has been waiting in the ready queue.
5. Response time – the amount of time from when a request was submitted until the
first response is produced, not the final output (for time-sharing environments).
General Goals
Fairness

Fairness is important under all circumstances. A scheduler makes sure that each process
gets its fair share of the CPU and no process can suffer indefinite postponement. Note that
giving equivalent or equal time is not fair. Think of safety control and payroll at a nuclear plant.
Policy Enforcement
The scheduler has to make sure that system's policy is enforced. For example, if the local
policy is safety then the safety control processes must be able to run whenever they want to,
even if it means delay in payroll processes.
Efficiency

The scheduler should keep the system (in particular, the CPU) busy one hundred percent of the
time when possible. If the CPU and all the input/output devices can be kept running all the time,
more work gets done per second than if some components are idle.

Response Time

A scheduler should minimize the response time for interactive user.


Turnaround

A scheduler should minimize the time batch users must wait for an output.
Throughput

A scheduler should maximize the number of jobs processed per unit time.

A little thought will show that some of these goals are contradictory. It can be shown that
any scheduling algorithm that favours some class of jobs hurts another class of jobs. The
amount of CPU time available is finite, after all.

Preemptive Vs Nonpreemptive Scheduling

The Scheduling algorithms can be divided into two categories with respect to how they deal
with clock interrupts.

Nonpreemptive Scheduling
A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the
CPU cannot be taken away from that process.

Following are some characteristics of nonpreemptive scheduling

 In a nonpreemptive system, short jobs are made to wait by longer jobs, but the
overall treatment of all processes is fair.
 In a nonpreemptive system, response times are more predictable because incoming high-
priority jobs cannot displace waiting jobs.
 In nonpreemptive scheduling, a scheduler dispatches a new job in the following two situations:
o When a process switches from the running state to the waiting state.
o When a process terminates.

Preemptive Scheduling
A scheduling discipline is preemptive if the CPU can be taken away from a process to which it
has been given.
The strategy of allowing processes that are logically runnable to be temporarily
suspended is called Preemptive Scheduling and it is contrast to the "run to completion"
method.

2.16 SCHEDULING ALGORITHMS

CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU. Following are some scheduling algorithms we will study.

1. FCFS Scheduling.
2. Shortest job first
3. Priority Scheduling
4. Round Robin Scheduling.
5. Multilevel Queue Scheduling.
6. Multilevel Feedback Queue Scheduling.
These algorithms fall into two categories:
 Non-preemptive: First Come First Serve (FCFS), Shortest Job First (SJF), and
Priority-Based Scheduling.
 Preemptive: Shortest Remaining Time First (SRTF), Priority-Based Scheduling, and
Round Robin Scheduling.

First-Come-First-Served (FCFS) Scheduling

Other names of this algorithm are:

 First-In-First-Out (FIFO)
 Run-to-Completion
 Run-Until-Done

The First-Come-First-Served algorithm is the simplest scheduling algorithm. Processes are
dispatched according to their arrival time in the ready queue. Being a non-preemptive discipline,
once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal or human
sense of fairness, but it is unfair in the sense that long jobs make short jobs wait and unimportant
jobs make important jobs wait.
FCFS is more predictable than most other schemes, since processes are served strictly in order of
arrival. The FCFS scheme is not useful for scheduling interactive users because it cannot
guarantee good response time. The code for FCFS scheduling is simple to write and understand.
One of the major drawbacks of this scheme is that the average waiting time is often quite long.
The First-Come-First-Served algorithm is rarely used as the master scheme in modern operating
systems, but it is often embedded within other schemes.
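To make the waiting-time arithmetic concrete, the sketch below computes FCFS waiting times
for an assumed workload of three processes that all arrive at time 0 (the burst values are
illustrative, not taken from the figures):

#include <stdio.h>

#define N 3

int main(void)
{
    int burst[N] = {24, 3, 3};   /* hypothetical CPU-burst lengths in ms */
    int wait[N], total = 0, t = 0;

    for (int i = 0; i < N; i++) {    /* processes run in arrival order */
        wait[i] = t;                 /* waiting time = time spent in the ready queue */
        t += burst[i];               /* the CPU runs this burst to completion */
        total += wait[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total / N);
    return 0;
}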
Figure 11: FCFS example 1

Figure 12: FCFS Example 2

CONVOY EFFECT:
Processes that need only a short CPU time are blocked by one process holding the CPU for a long
time. This leads to poor resource utilization and poor performance.

Shortest-Job-First (SJF) Scheduling


The CPU is assigned to the process that has the smallest next CPU burst.
If two processes have the same length CPU burst, FCFS scheduling is used to break the tie.
For long-term scheduling in a batch system, we can use the process time limit specified by the
user as the length.
SJF cannot be implemented exactly at the level of short-term scheduling, because there is no way
to know the precise length of the next CPU burst.
Advantage:
The SJF is optimal, i.e. it gives the minimum average waiting time for a given set of processes.
Disadvantage:
Determining the length of the next CPU burst.
SJF algorithm may be either 1) non-preemptive or preemptive.
Non preemptive SJF
The current process is allowed to finish its CPU burst.
Example (for non-preemptive SJF): Consider the following set of processes, with the length of
the CPU-burst time given in milliseconds.

Figure 13: Shortest Job First (SJF) – Non-Preemptive, Example 1


Figure 14: Shortest Job First (SJF) – Non-Preemptive, Example 2

Preemptive SJF
If the new process has a shorter next CPU burst than what is left of the executing process, that
process is preempted. It is also known as SRTF scheduling (Shortest-Remaining-Time-First).
Example (for preemptive SJF): Consider the following set of processes, with the length of
the CPU-burst time given in milliseconds.

Figure 15: Shortest Remaining Time First (SRTF) – Preemptive, Example 1
Figure 16: Shortest Remaining Time First (SRTF) – Preemptive, Example 2
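A minimal non-preemptive SJF sketch, again assuming all processes arrive at time 0 with
illustrative burst values:

#include <stdio.h>

#define N 4

int main(void)
{
    int burst[N] = {6, 8, 7, 3};    /* hypothetical CPU-burst lengths in ms */
    int i, j, tmp;

    /* Sort bursts ascending: the shortest next CPU burst runs first. */
    for (i = 0; i < N - 1; i++)
        for (j = i + 1; j < N; j++)
            if (burst[j] < burst[i]) {
                tmp = burst[i]; burst[i] = burst[j]; burst[j] = tmp;
            }

    int t = 0, total = 0;
    for (i = 0; i < N; i++) {   /* waiting time accumulates as in FCFS, but in sorted order */
        total += t;
        t += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total / N);
    return 0;
}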

Priority Based Scheduling


 A priority is associated with each process.
 The CPU is allocated to the process with the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 Priorities can be defined either internally or externally.

Internally-defined priorities

Use some measurable quantity to compute the priority of a process.

For example: time limits, memory requirements, no. of open files.

Externally-defined priorities

Set by criteria that are external to the OS

For example:

importance of the process

Categories

Priority scheduling can be either nonpreemptive or preemptive

Non Preemptive

The new process is put at the head of the ready-queue

Advantage:

Higher priority processes can be executed first.


Disadvantage:

Indefinite blocking (starvation): low-priority processes may be left waiting indefinitely for the CPU.
Solution: Aging is a technique of gradually increasing the priority of processes that wait in the system
for a long time.
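A minimal sketch of aging; the interval, increment, and field names are arbitrary assumptions
(lower numbers mean higher priority here):

#define AGING_INTERVAL 100   /* hypothetical: boost after waiting 100 ticks */

struct task {
    int priority;            /* current priority; 0 is highest in this sketch */
    int waiting_ticks;       /* ticks spent waiting in the ready queue */
};

/* Called once per clock tick by the scheduler: raise the priority of
   processes that have waited too long, so they cannot starve. */
void age(struct task *ready, int n)
{
    for (int i = 0; i < n; i++) {
        ready[i].waiting_ticks++;
        if (ready[i].waiting_ticks >= AGING_INTERVAL && ready[i].priority > 0) {
            ready[i].priority--;          /* raise priority one step */
            ready[i].waiting_ticks = 0;
        }
    }
}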

Example: Consider the following set of processes, assumed to have arrived at time 0, in the order P1,
P2, ..., P4, with the length of the CPU-burst time given in milliseconds.

Figure 17:PRIORITY BASED Non-preemptive (Example 1)

Figure 18:PRIORITY BASED Non-preemptive (Example 2)

Preemptive

The CPU is preempted if the priority of the newly arrived process is higher than the priority of the
currently running process
Figure 19: Priority-Based Preemptive (Example 1)

Figure 20: Priority-Based Preemptive (Example 2)

Round-Robin Scheduling
Designed especially for time-sharing systems, it is similar to FCFS scheduling, but with preemption.
A small unit of time is defined, called a time quantum (or time slice).

The time quantum generally ranges from 10 to 100 ms.

The ready-queue is treated as a circular queue.


The CPU scheduler goes around the ready-queue and allocates the CPU to each process for a time
interval of up to 1 time quantum.

To implement:

 CPU scheduler Picks the first process from the ready-queue.


 Sets a timer to interrupt after 1 time quantum and
 Dispatches the process.
 One of two things will then happen.
o The process may have a CPU burst of less than 1 time quantum. In this case, the process
itself will release the CPU voluntarily.
o If the CPU burst of the currently running process is longer than 1 time quantum, the
timer will go off and will cause an interrupt to the OS.
o The process will be put at the tail of the ready-queue.

Advantage:

Better response time than SJF.

Disadvantage:

Higher average turnaround time than SJF.
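The sketch below simulates round-robin completion times for an assumed workload (the bursts
and quantum are illustrative, not taken from the figures):

#include <stdio.h>

#define N 3
#define QUANTUM 4

int main(void)
{
    int remaining[N] = {24, 3, 3};   /* hypothetical bursts; all arrive at time 0 */
    int done = 0, t = 0;

    while (done < N) {
        for (int i = 0; i < N; i++) {        /* cycle through the circular ready queue */
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            t += slice;                      /* run for up to one time quantum */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                printf("process %d finishes at time %d\n", i, t);
            }
            /* otherwise the process goes to the tail of the ready queue */
        }
    }
    return 0;
}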

Figure 21: Round Robin -Example 1


Figure 22: Round Robin (Example 2)

Multilevel Queue Scheduling


A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
Processes are permanently assigned to one queue, based on some property of the process, such as
memory size, process priority, or process type. The algorithm chooses the process from the
highest-priority occupied queue and runs that process either preemptively or non-preemptively.
Each queue has its own scheduling algorithm or policy.
Possibility I

If each queue has absolute priority over lower-priority queues, then no process in a lower-priority
queue (for example, the batch queue) can run unless all higher-priority queues (for example, those
for interactive and interactive-editing processes) are empty.
Possibility II
If there is a time slice among the queues, then each queue gets a certain portion of the CPU time,
which it can then schedule among the processes in its queue. For instance:

 80% of the CPU time to foreground queue using RR.


 20% of the CPU time to background queue using FCFS.
Since processes do not move between queues, this policy has the advantage of low scheduling
overhead, but it is inflexible.

Multilevel Feedback Queue Scheduling


A process may move between queues
The basic idea:
Separate processes according to the features of their CPU bursts. For example
1) If a process uses too much CPU time, it will be moved to a lower-priority queue.
This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
2) If a process waits too long in a lower-priority queue, it may be moved to a higher-priority
queue
This form of aging prevents starvation.
2.17 MULTIPLE-PROCESSOR SCHEDULING

If multiple CPUs are available, the scheduling problem becomes more complex.

Two approaches:
 Asymmetric Multiprocessing
The basic idea is:
A master server is a single processor responsible for all scheduling decisions, I/O processing
and other system activities.
The other processors execute only user code.
Advantage:
This is simple because only one processor accesses the system data structures, reducing the
need for data sharing.

 Symmetric Multiprocessing
The basic idea is:
Each processor is self-scheduling.
To do scheduling, the scheduler for each processor
Examines the ready-queue and Selects a process to execute.
Restriction: We must ensure that two processors do not choose the same process and that
processes are not lost from the queue.

Processor Affinity

In SMP systems, migration of processes from one processor to another is avoided, and
instead processes are kept running on the same processor. This is known as processor affinity.
o Soft Affinity – When an operating system has a policy of attempting to keep a process
running on the same processor but not guaranteeing it will do so, this situation is called
soft affinity.
o Hard Affinity – Hard affinity allows a process to specify a subset of processors on
which it may run. Some systems, such as Linux, implement soft affinity but also provide
system calls like sched_setaffinity() that support hard affinity.
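For example, on Linux a process can request hard affinity to CPU 0 as follows (a minimal
sketch, with error handling kept to the essentials):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);                 /* start with an empty CPU set */
    CPU_SET(0, &set);               /* allow only processor 0      */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to CPU 0\n");
    return 0;
}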

Load Balancing

Load balancing keeps the workload evenly distributed across all processors in an SMP
system. Load balancing is necessary only on systems where each processor has its own private
queue of processes eligible to execute; on systems with a common run queue, load balancing
is unnecessary, because once a processor becomes idle it immediately extracts a runnable
process from the common run queue. On SMP (symmetric multiprocessing) systems, it is
important to keep the workload balanced among all processors to fully utilize the benefits of
having more than one processor; otherwise one or more processors will sit idle while other
processors have high workloads, along with lists of processes awaiting the CPU.
There are two general approaches to load balancing :
1. Push Migration – In push migration, a task routinely checks the load on each processor and,
if it finds an imbalance, evenly distributes the load by moving processes from overloaded
processors to idle or less busy processors.
2. Pull Migration – Pull Migration occurs when an idle processor pulls a waiting task from a
busy processor for its execution.

Multi-Threaded Programming

Overview
A thread is a basic unit of CPU utilization; it comprises a
 thread ID,
 a program counter,
 a register set, and
 a stack.

 It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
 A traditional (or heavyweight) process has a single thread of control.
 If a process has multiple threads of control, it can perform more than one task at a
time.

Figure 4.1 illustrates the difference between a traditional single threaded process and a multi-
threaded process.
Multithreading models
 User threads are supported above the kernel and are managed without kernel support,

 Kernel threads are supported and managed directly by the operating system.
1. Many-to-One Model
 The many-to-one model maps many user-level threads to one kernel thread.
 Thread management is done by the thread library in user space,
 It is efficient, but the entire process will block if a thread makes a blocking system
call.
 Only one thread can access the kernel at a time.
 Multiple threads are unable to run in parallel on multiprocessors.

Ex: Green threads (the thread library for Solaris) and


GNU Portable Threads

2. One-to-One Model
 The one-to-one model maps each user thread to a kernel thread.
 It provides more concurrency than the many-to-one model by allowing another
Thread to run when a thread makes a blocking system call;
 It also allows multiple threads to run in parallel on multiprocessors.
 The only drawback to this model is that creating a user thread requires creating the
corresponding kernel thread, and the overhead of creating kernel threads can burden
the performance of an application.
Ex: Linux, along with the family of Windows operating systems, implements the
one-to-one model.
3.Many-to-Many Model

 The many-to-many model multiplexes many user-level threads to a smaller or equal number of
kernel threads.
 The number of kernel threads may be specific to either a particular application or a particular
machine
 Developers can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor.
 when a thread performs a blocking system call, the kernel can schedule another thread for
execution.

The two-level model also allows a user-level thread to be bound to a kernel thread.
Thread Libraries
 A thread library provides the programmer with an API for creating and managing
threads.

 There are two primary ways of implementing a thread library.

1. To provide a library entirely in user space with no kernel support.


 All code and data structures for the library exist in user space.
 Invoking a function in the library results in a local function call in user space, not
a system call.

2. To implement a kernel-level library supported directly by the operating system.


 code and data structures for the library exist in kernel space.
 Invoking a function in the API for the library typically results in a system call to the kernel.

 Three main thread libraries are in use today:


(1) POSIX Pthreads – may be provided as either a user- or kernel-level library.
(2) Win32 – a kernel-level library available on Windows systems.
(3) Java – the Java thread API allows threads to be created and managed directly in Java programs.
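A minimal Pthreads sketch showing thread creation and joining (the worker function and its
argument are arbitrary illustrations); it is typically compiled with cc file.c -lpthread:

#include <pthread.h>
#include <stdio.h>

/* Thread function: each thread gets its own stack and registers,
   but shares the process's global data, open files, and code. */
void *worker(void *arg)
{
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, (void *)1L) != 0)
        return 1;
    pthread_join(tid, NULL);    /* wait for the thread to terminate */
    return 0;
}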

Threading Issues
The fork() and exec() System Calls
 The fork() system call is used to create a separate, duplicate process.
 Some UNIX systems have two versions of fork():
a) one that duplicates all threads, and
b) one that duplicates only the thread that invoked the fork() system call.
exec(): if a thread invokes the exec() system call, the program specified in the parameter to
exec() will replace the entire process, including all threads.
If exec() is called immediately after forking, then duplicating all threads is unnecessary, as the
program specified in the parameters to exec() will replace the whole process anyway.

Cancellation

Thread cancellation is the task of terminating a thread before it has completed.

For example, if multiple threads are concurrently searching through a database and one thread
returns the result, the remaining threads might be canceled.

Another example: a Web page is loaded using several threads, with each image loaded in a
separate thread. When a user presses the stop button on the browser, all threads loading the
page are canceled.
