OS Unit-2 IPC

UNIT – II:
Process Concept, Multithreaded Programming, Process Scheduling, Inter-process
Communication

Process Concept: Process scheduling, Operations on processes, Inter-process communication, Communication in client-server systems.
Multithreaded Programming: Multithreading models, Thread libraries, Threading issues, Examples.
Process Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple-processor scheduling, Thread scheduling, Examples.
Inter-process Communication: Race conditions, Critical regions, Mutual exclusion with busy waiting, Sleep and wakeup, Semaphores, Mutexes, Monitors, Message passing, Barriers, Classical IPC problems - Dining philosophers problem, Readers and writers problem.
Process Concepts - Introduction

• 1. The Structure of a Process in Memory
• 2. Process State
• 3. Process Control Block
• 4. Threads
Process Concept-Introduction

• A process is a program in execution.
• A program is a passive entity, but a process is an active entity.
• A program becomes a process when an executable file is loaded into memory, in one of two ways:
• by double-clicking an icon representing the executable file, or
• by entering the name of the executable file on the command line.
• A system consists of a collection of processes:
• OS processes executing system code and
• user processes executing user code.
• All these processes can execute concurrently, with the CPU switching between processes; hence, CPU utilization is increased.
The Structure of a Process in Memory

• A process is also called a job or task.
• A process is more than the program code, which is sometimes known as the text section.
• It also includes the current activity, represented by the program counter and the contents of the processor's registers.
• A process includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables),
• and a data section, which contains global variables.
• A process includes a heap, which is memory that is dynamically allocated during process run time.
Process State

• As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
• New. The process is being created.
• Ready. The process is waiting to be assigned to a processor.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Terminated. The process has finished execution
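
As a rough illustration (the names are mine, not taken from any particular OS), these five states could be represented in code as a simple enumeration, which the PCB sketch shown later reuses:

/* Illustrative sketch: the process states as a C enumeration. */
typedef enum {
    NEW,         /* the process is being created           */
    READY,       /* waiting to be assigned to a processor  */
    RUNNING,     /* instructions are being executed        */
    WAITING,     /* waiting for some event (e.g., I/O)     */
    TERMINATED   /* the process has finished execution     */
} proc_state_t;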
Process Control Block
• Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains:
• Process state
• Program counter
• CPU registers
• CPU-Scheduling information
• Memory management information
• Accounting information
• I/O status information
• Threads
Process Control Block..,
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this process.
• CPU registers. The registers vary in number and type, depending on the computer architecture.
• They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information.
• Along with the program counter, this state information must be saved when
an interrupt occurs, to allow the process to be continued correctly
afterward.
Process Control Block..,
• CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such
items as the value of the base and limit registers and the page tables, or the
segment tables, depending on the memory system used by the operating
system.
• Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.
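
As a hedged sketch (field names and sizes are hypothetical, not from any real kernel), these items could be grouped into a C structure; it reuses the proc_state_t enumeration sketched under Process State:

/* Illustrative sketch of a process control block. */
typedef struct pcb {
    int            pid;               /* process identifier                */
    proc_state_t   state;             /* new, ready, running, ...          */
    unsigned long  program_counter;   /* address of the next instruction   */
    unsigned long  registers[16];     /* saved CPU registers               */
    int            priority;          /* CPU-scheduling information        */
    unsigned long  base, limit;       /* memory-management information     */
    unsigned long  cpu_time_used;     /* accounting information            */
    int            open_files[16];    /* I/O status: open file descriptors */
    struct pcb    *next;              /* link to the next PCB in a queue   */
} pcb_t;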
Threads

• A thread is a lightweight process.
• Traditionally, when a process runs a word-processor program, a single thread of instructions is being executed.
• The user cannot simultaneously type in characters and run the spell checker within the same process, for example.
• Modern systems allow a process to have multiple threads of execution and thus to perform more than one task at a time.
• This feature is especially beneficial on multicore systems, where multiple
threads can run in parallel.
• On a system that supports threads, the PCB is expanded to include
information for each thread.
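
A minimal sketch of this idea using the POSIX thread library (the thread bodies are placeholders I made up, standing in for the spell checker and keyboard input of the word-processor example):

#include <pthread.h>
#include <stdio.h>

/* Placeholder thread bodies for the word-processor example. */
void *spell_checker(void *arg)  { printf("checking spelling...\n");  return NULL; }
void *input_handler(void *arg)  { printf("reading keystrokes...\n"); return NULL; }

int main(void)
{
    pthread_t t1, t2;

    /* two threads of execution inside one process */
    pthread_create(&t1, NULL, spell_checker, NULL);
    pthread_create(&t2, NULL, input_handler, NULL);

    /* wait for both threads to finish */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}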
Process Scheduling
• 1. Scheduling Queues
• 2. Ready & Device Queues
• 3. Queueing Diagram
• 4. Schedulers:
• short-term,
• long-term, and
• medium-term schedulers
Process Scheduling
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process for execution on the CPU.
• If there are more processes, the rest have to wait until the CPU is free and can be rescheduled.
• Ex: running a Java program, a media player, and Chrome downloading Eclipse at the same time.
Scheduling Queues
• As processes enter the system, they are put into a job queue (process queue), which consists of all processes in the system.
• Two types of queues:
• 1. Ready Queue
• 2. Device Queue
• The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
• The list of processes waiting for a particular I/O device is called a device queue.
• Each device has its own device queue (magnetic tape, disk, and other I/O devices).
The ready queue and various I/O device
queues

• This queue is generally stored as a linked list.
• A queue header contains pointers to the first and final PCBs in the list.
• Each PCB includes a pointer field that points to the next PCB in the ready queue.
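
A minimal sketch of such a linked list, assuming the pcb_t structure sketched earlier (with its next field); the queue type and the enqueue/dequeue names are illustrative:

/* Sketch: a process queue as a linked list of PCBs (assumes pcb_t). */
typedef struct {
    pcb_t *head;   /* first PCB in the queue */
    pcb_t *tail;   /* final PCB in the queue */
} pcb_queue_t;

/* Append a PCB at the tail. */
void enqueue(pcb_queue_t *q, pcb_t *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head (NULL if the queue is empty). */
pcb_t *dequeue(pcb_queue_t *q)
{
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL) q->tail = NULL;
    }
    return p;
}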
Queueing-diagram representation of process
scheduling
• Two types of queues are present:
• 1.the ready queue and
• 2. a set of device queues.
• A rectangular box represents a queue.
• A circle represents a resource that serves the queues.
• The arrows indicate the flow of processes in the system.
Queueing-diagram representation of process
scheduling
• A new process is initially put in the ready queue.
• It waits there until it is selected for
execution, or dispatched.
• Once the process is allocated the CPU and
is executing, one of several events could
occur:
• issue an I/O request and then be placed in
an I/O queue.
• create a new child process and wait for the
child’s termination.
• be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.
Schedulers
• A process migrates among the various scheduling queues throughout its
lifetime.
• The selection process is carried out by the appropriate scheduler
• 1. Short-term scheduler,
• 2. Long-term scheduler, and
• 3. Medium-term scheduler
• The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.
• The long-term scheduler, or job scheduler, selects processes from the pool of processes waiting on disk and loads them into memory for execution.
Medium-term Scheduler
• The medium-term scheduler removes a process from memory to reduce the degree of multiprogramming.
• Later, the process can be reintroduced into memory, and its execution
can be continued where it left off.
• This scheme is called swapping.
Operations on Processes

• 1. Process Creation
• 2. Process Termination
Inter process communication
• Race condition
• Critical regions
• Mutual exclusion with busy waiting
• Sleep and wakeup
• Semaphores
• Mutexes
• Monitors
• Message passing
• Barriers
• Classical IPC problems
• Dining philosophers problem
• Readers and writers problem
Critical-section Problem
• The critical-section problem is to design a protocol that
the processes can use to synchronize their activity so as
to cooperatively share data.
• Each process must request permission to enter its
critical section.
• The section of code implementing this request is the
entry section.
• The critical section may be followed by an exit section.
• The remaining code is the remainder section.
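
A sketch of the general structure each process then follows (the entry and exit sections are filled in by whatever protocol is chosen, e.g. Peterson's solution later in this unit):

do {
    /* entry section: request permission to enter     */

    /* critical section: access the shared data       */

    /* exit section: signal that the process is done  */

    /* remainder section                              */
} while (TRUE);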
• At a given point in time, many kernel-mode processes may be active in the
operating system.
• As a result, the code implementing an operating system (kernel code) is subject to
several possible race conditions.
• Consider as an example a kernel data structure that maintains a list of all open
files in the system.
• This list must be modified when a new file is opened or closed (adding the file to
the list or removing it from the list).
• If two processes were to open files simultaneously, the separate updates to this list
could result in a race condition.
Race condition
• A race condition occurs when two or more processes (executing concurrently) access shared data and try to change it at the same time.
• Because the process-scheduling algorithm can switch between processes at any time, you don't know the order in which the processes will attempt to access the shared data.
• Therefore, the result of the change in data is dependent on the process
scheduling algorithm,
• i.e. both processes are "racing" to access/change the data.
Race Condition – Producer & Consumer Problem
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes
• Suppose that we wanted to provide a solution to the consumer-
producer problem that fills all the buffers.
• We can do so by having an integer count that keeps track of the
number of full buffers.
• Initially, count is set to 0.
• It is incremented by the producer after it produces a new buffer and
• It is decremented by the consumer after it consumes a buffer.
Producer
The producer produces items until the buffer is full (count reaches BUFFER_SIZE).

while (true)
{
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;                           /* do nothing: buffer is full */
    buffer[in] = nextProduced;      /* insert the produced item   */
    in = (in + 1) % BUFFER_SIZE;
    count++;                        /* one more full slot         */
}
Consumer
The consumer consumes items until the buffer is empty (count reaches 0).

while (true)
{
    while (count == 0)
        ;                           /* do nothing: buffer is empty */
    nextConsumed = buffer[out];     /* remove an item              */
    out = (out + 1) % BUFFER_SIZE;
    count--;                        /* one less full slot          */
    /* consume the item in nextConsumed */
}
Race Condition
• count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
• count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
• Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
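
A hedged sketch (my own demo, not from the slides) of this lost update: two POSIX threads update the shared count with no synchronization, so interleavings like the one above can occur and the final value is unpredictable from run to run.

#include <pthread.h>
#include <stdio.h>

#define N 1000000
int count = 0;                            /* shared data */

void *producer(void *arg) { for (int i = 0; i < N; i++) count++; return NULL; }
void *consumer(void *arg) { for (int i = 0; i < N; i++) count--; return NULL; }

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %d (expected 0)\n", count);   /* often nonzero */
    return 0;
}

Protecting count++ and count-- with a mutex (or the other mechanisms discussed in this unit) makes the result deterministic.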
Race Condition when assigning a PID
• Processes P0 and P1 simultaneously call fork(), and the next available pid, 2615, is assigned to both children (which should not happen).
Solution to Critical-Section Problem
Requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist
some processes that wish to enter their critical section, then the selection of
the processes that will enter the critical section next cannot be postponed
indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes
Peterson’s Solution
• Two process solution
• Assume that the LOAD and STORE instructions are atomic; that is,
cannot be interrupted.
• The two processes share two variables:
• int turn;
• Boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical section.
• The flag array is used to indicate if a process is ready to enter the critical section.
• flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi

do {
    flag[i] = TRUE;              /* Pi announces it wants to enter     */
    turn = j;                    /* but yields the turn to Pj          */
    while (flag[j] && turn == j)
        ;                        /* busy-wait while Pj also wants in   */

    /* critical section */

    flag[i] = FALSE;             /* Pi leaves its critical section     */

    /* remainder section */
} while (TRUE);
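
As a usage sketch (my own demo), the same algorithm can be exercised with two POSIX threads protecting a shared counter. It assumes, as the slide does, that loads and stores are atomic and not reordered; on real hardware, C11 atomics or memory fences would be needed for a dependable implementation.

#include <pthread.h>
#include <stdio.h>

#define TRUE  1
#define FALSE 0

volatile int flag[2] = {FALSE, FALSE};   /* flag[i]: Pi wants to enter */
volatile int turn = 0;                   /* whose turn it is           */
int shared = 0;                          /* data protected by Peterson */

void *worker(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        flag[i] = TRUE;                  /* entry section    */
        turn = j;
        while (flag[j] && turn == j)
            ;                            /* busy wait        */
        shared++;                        /* critical section */
        flag[i] = FALSE;                 /* exit section     */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}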
Synchronization Hardware
• Many systems provide hardware support for critical section code
• Uniprocessors – could disable interrupts
• Currently running code would execute without preemption
• Generally too inefficient on multiprocessor systems
• Operating systems using this approach are not broadly scalable
• Modern machines provide special atomic hardware instructions
• Atomic = non-interruptible
• Either test memory word and set value
• Or swap contents of two memory words
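
A hedged sketch of how such an instruction can be used: a simple spinlock built on C11's atomic_flag, whose atomic_flag_test_and_set operation is an atomic test-and-set (the acquire and release names are my own):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* initially clear: lock is free */

void acquire(void)
{
    /* atomically set the flag and get its previous value;
       keep spinning while it was already set (lock held)  */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait */
}

void release(void)
{
    atomic_flag_clear(&lock);           /* mark the lock free */
}

/* usage: acquire(); ... critical section ...; release(); */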
