
UNIT III

Process Management
Process Concept

• Informally, a process is a program in execution.

• A process is more than the program code, which is sometimes known as the text section.

• It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.

• In addition, a process generally includes the process stack, which contains temporary data (such as method parameters, return addresses, and local variables), and a data section, which contains global variables.

• An operating system executes a variety of programs:

✦ Batch system – jobs

✦ Time-shared systems – user programs or tasks

• Process – a program in execution; process execution must progress in sequential fashion.

• A process includes: program counter, stack, data section


Process Control Block (PCB)
Information associated with each process:
• Process state

• Program counter

• CPU registers

• CPU scheduling information

• Memory-management information

• Accounting information

• I/O status information


Process state: The state may be new, ready, running, waiting, halted, and so on.

Program counter: The counter indicates the address of the next instruction to be
executed for this process.

CPU registers: The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code information.
Along with the program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly afterward.

CPU-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory-management information: This may include the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.

Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

I/O status information: The information includes the list of I/O devices
allocated to this process, a list of open files, and so on.

The PCB simply serves as the repository for any information that may vary
from process to process.
Process State
As a process executes, it changes state

• New state: The process is being created.

• Running state: A process is running if it has the CPU, that is, if it is actually using the CPU at that particular instant.

• Blocked (or waiting) state: A process is blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.

• Ready state: A process is ready if it is waiting to be assigned to a CPU. A ready process is runnable but has temporarily stopped running to let another process run.

• Terminated state: The process has finished execution.


Threads Overview

Overview

Multithreading Models

Threading Issues
What is Thread?

• A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.

• A thread shares with its peer threads some information, such as the code segment, the data segment, and open files.

• A thread is also called a lightweight process.

• Threads provide a way to improve application performance through parallelism.

• Threads represent a software approach to improving operating-system performance by reducing overhead; in most other respects, a thread is equivalent to a classical process.
1. Process is heavyweight, or resource intensive.
   Thread is lightweight, taking fewer resources than a process.

2. Process switching needs interaction with the operating system.
   Thread switching does not need to interact with the operating system.

3. In multiple-processing environments, each process executes the same code but has its own memory and file resources.
   All threads can share the same set of open files and child processes.

4. If one process is blocked, then no other process can execute until the first process is unblocked.
   While one thread is blocked and waiting, a second thread in the same task can run.

5. Multiple processes without using threads use more resources.
   Multiple threaded processes use fewer resources.

6. In multiple processes, each process operates independently of the others.
   One thread can read, write, or change another thread's data.
Single and Multithreaded Processes
Benefits

 Responsiveness

 Resource Sharing

 Economy

 Utilization of MP Architectures
Benefits of Threads

Responsiveness
Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.

Resource Sharing
Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel.
Benefits of Threads (cont.)

Economy

• Takes less time to create a new thread than a process
• Less time to terminate a thread than a process
• Less time to switch between two threads within the same process

Utilization of Multiprocessor Architectures

• Threads within the same process may be running in parallel on different processors
User Thread and Kernel Thread
User Threads
 All of the work of thread management is done by the application.
 The kernel is not aware of the existence of threads
 An application can be programmed to be multi-threaded by using a threads
library, which is a package of routines for user thread management.
 The thread library contains code for creating and destroying threads, for
passing messages and data between threads, for scheduling thread
execution and for saving and restoring thread contexts.
 Three primary thread libraries:
o POSIX Pthreads
o Win32 threads
o Java threads
User Threads
Advantages:

• Thread switching does not require user/kernel mode switching.

• Thread scheduling can be application specific.

• User threads can run on any OS through a thread library.

Disadvantages:

• When a ULT executes a blocking system call, not only is that thread blocked, but all of the threads within the process are blocked.

• A multithreaded application cannot take advantage of multiprocessing, since the kernel assigns one process to only one processor at a time.
Kernel Threads
 Supported and managed directly by the OS.

 W2K, Linux, and OS/2 are examples of this approach

• In a pure kernel-thread facility, all of the work of thread management is done by the kernel. There is no thread-management code in the application area, simply an application programming interface to the kernel thread facility.
Kernel Threads
Advantages:

• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.

• If one thread in a process is blocked, the kernel can schedule another thread of the same process.

Disadvantage:

 More overhead
Multithreading Models
 Many-to-One

 One-to-One

 Many-to-Many
Many-to-One

• Many user-level threads mapped to a single kernel thread
• Examples:
  • Solaris Green Threads
  • GNU Portable Threads
One-to-One
Each user-level thread maps to kernel thread
Examples
• Windows NT/XP/2000
• Linux
• Solaris 9 and later
Many-to-Many Model

• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
Examples
• Solaris prior to version 9
• Windows NT/2000 with the ThreadFiber package
Two-level Model
• Similar to M:M, except that it also allows a user thread to be bound to a kernel thread
Examples
 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier
Cooperating Processes
• The concurrent processes executing in the operating system may be either independent processes or cooperating processes.

• A process is independent if it cannot affect or be affected by the other processes executing in the system. Clearly, any process that does not share any data (temporary or persistent) with any other process is independent.

• On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system.

• Clearly, any process that shares data with other processes is a cooperating process.
Inter-process Communication (IPC)

• Mechanism for processes to communicate and to synchronize their actions.
• Message system – processes communicate with each other without resorting to shared variables.
• IPC facility provides two operations:
  1. send(message) – message size fixed or variable
  2. receive(message)
• If P and Q wish to communicate, they need to:
  1. establish a communication link between them
  2. exchange messages via send/receive
• Implementation of communication link
  1. physical (e.g., shared memory, hardware bus)
  2. logical (e.g., logical properties)
Direct Communication
Processes must name each other explicitly:
✦ send(P, message) – send a message to process P
✦ receive(Q, message) – receive a message from process Q

Properties of communication link
✦ Links are established automatically.
✦ A link is associated with exactly one pair of communicating processes.
✦ Between each pair there exists exactly one link.
✦ The link may be unidirectional, but is usually bi-directional.
Indirect Communication
Messages are directed and received from mailboxes (also referred to as ports).

✦ Each mailbox has a unique id.


✦ Processes can communicate only if they share a mailbox.

Properties of communication link

✦ Link established only if processes share a common mailbox


✦ A link may be associated with many processes.
✦ Each pair of processes may share several communication links.
✦ Link may be unidirectional or bi-directional.
Operations

✦ create a new mailbox

✦ send and receive messages through mailbox

✦ destroy a mailbox

Primitives are defined as:

✦ send(A, message) – send a message to mailbox A
✦ receive(A, message) – receive a message from mailbox A
Mailbox sharing

✦ P1, P2, and P3 share mailbox A.
✦ P1 sends; P2 and P3 receive.
✦ Who gets the message?

Solutions

✦ Allow a link to be associated with at most two processes.
✦ Allow only one process at a time to execute a receive operation.
✦ Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.
Principles of Concurrency
The principles of concurrency in operating systems are designed to ensure that
multiple processes or threads can execute efficiently and effectively, without
interfering with each other or causing deadlock.

•Interleaving − Interleaving refers to the interleaved execution of multiple processes or threads. The operating system uses a scheduler to determine which process or thread to execute at any given time. Interleaving allows for efficient use of CPU resources and ensures that all processes or threads get a fair share of CPU time.
•Synchronization − Synchronization refers to the coordination of multiple
processes or threads to ensure that they do not interfere with each other. This is
done through the use of synchronization primitives such as locks, semaphores,
and monitors. These primitives allow processes or threads to coordinate access to
shared resources such as memory and I/O devices.
•Mutual exclusion − Mutual exclusion refers to the principle of ensuring that
only one process or thread can access a shared resource at a time. This is
typically implemented using locks or semaphores to ensure that multiple
processes or threads do not access a shared resource simultaneously.

•Deadlock avoidance − Deadlock is a situation in which two or more processes or threads are each waiting for another to release a resource, so that none of them can proceed. Operating systems use various techniques, such as resource-allocation graphs and deadlock-prevention algorithms, to avoid deadlock.

•Process or thread coordination − Processes or threads may need to coordinate their activities to achieve a common goal. This is typically achieved using synchronization primitives such as semaphores, or message-passing mechanisms such as pipes or sockets.
•Resource allocation − Operating systems must allocate resources such as
memory, CPU time, and I/O devices to multiple processes or threads in a fair and
efficient manner. This is typically achieved using scheduling algorithms such as
round-robin, priority-based, or real-time scheduling.
Mutual exclusion
Mutual exclusion, also known as mutex, is a mechanism that prevents concurrent access to a shared resource. It is a concurrency-control property enforced with the objective of preventing race conditions.

Example: there are n processes to be executed concurrently. Each process includes (1) a critical section that operates on some resource Ra, and (2) additional code preceding and following the critical section that does not involve access to Ra. Because all processes access the same resource Ra, it is desired that only one process at a time be in its critical section. To enforce mutual exclusion, two functions are provided: entercritical and exitcritical. Each function takes as an argument the name of the resource that is the subject of competition. Any process that attempts to enter its critical section while another process is in its critical section, for the same resource, is made to wait.
Types of message queues
The system has different types of message queues:

• Workstation message queue

• User profile message queue

• Job message queue

• System operator message queue

• History log message queue


The figure shows the message queues. A message queue is supplied for each display station (where DSP01 and DSP02 are display station names) and each user profile (where BOB and RAY are user profile names).

Job message queues are supplied for each job running on the system. Each job is
given an external message queue (*EXT) and each call of an original program
model (OPM) program or Integrated Language Environment (ILE) procedure
within the job has its own call message queue.
Message queues are also supplied for the system history log (QHST) and the
system operator (QSYSOPR).
These message queues are used as follows:

•Workstation message queues are used for sending and receiving messages
between workstation users and between workstation users and the system
operator. The name of the queue is the same as the name of the workstation. The
queue is created by the system when the workstation is described to the system.

•User profile message queues can be used for communication between users.
User profile message queues are automatically created in library QUSRSYS
when the user profile is created.
•Job message queues are used for receiving requests to be processed (such as
commands) and for sending messages that result from processing the requests;
the messages are sent to the requester of the job. Job message queues exist for
each job and only exist for the life of the job. Job message queues consist of an
external message queue (*EXT) and a set of call stack entry message queues.

•System operator message queue (QSYSOPR) is used for receiving and replying
to messages from the system, display station users, and application programs.

•The history log message queue is used for any job in the system to have a
record of high-level system activities.
