
Os - PPT 4

A process can have multiple execution threads. A thread is the unit of execution within a process and allows for parallel execution. The document discusses the differences between processes and threads, states of processes, and context switching between processes.


CSE Sec-C

 A process is a program in execution.
 A thread is the unit of execution within a process; a single process can have many threads.
 A thread is a segment of a process: a process can have multiple threads, and these multiple threads are contained within the process.

Example: Opening a new browser (say Chrome) is an example of creating a process; opening multiple tabs in the browser is an example of creating threads.

Process vs. Thread
 A process is a program in execution, whereas a thread is a segment of a process.
 A process takes more time to create and to terminate; a thread takes less time for both.
 Process switching needs interaction with the operating system; thread switching does not.

 A state of a process is defined in part by the current activity of that
process.
 As the process executes it changes its state.
 A process passes through a number of states during its lifetime. The states are as follows:
1) New state
2) Ready state
3) Running state
4) Wait state
5) Termination state

 The first state is known as the new state.
 In the new state the process is being created.

 After creation, the process enters the ready state.
 For example: if one process is created and at the same time a second process is created, then both processes will be in the ready state.

 When the CPU is allotted to a process in the ready state, that process moves to the running state.
 Only one process can be in the running state at a time, because the CPU can be allotted to only one process at a time.

 When a process requests input/output, it leaves the running state and joins a new state known as the wait state.

 From the ready state the process goes back to the running state.
 A process that has finished execution is in the termination state.
 When a process is in the running state, three cases can arise:
 1st case: if it executes successfully, it moves to the terminated state.
 2nd case: if an interrupt occurs, it moves to the ready state.
 3rd case: if the process requires an I/O operation, it moves to the waiting state.
 A process control block (PCB) is a data structure maintained by the OS for every process.

1. Process State

 This specifies the process state, i.e. new, ready, running, waiting or terminated.

2. Process ID (PID)

 It is the unique ID of each process; every process is identified by a unique ID.

3. Program Counter (PC)

 The lines of instruction (code) in a process are executed line by line. The program counter gives the information about the next line to be executed.

4. CPU Registers

 Whenever an interrupt occurs and there is a context switch between processes, the temporary information is stored in the registers.

5. CPU Scheduling Information

 This component includes the process priority information.
 Many processes wait for execution; scheduling determines which process executes first.

6. Memory-management information:

 Memory being used by the particular process: page-table information, memory limits.

7. Accounting Information

 This keeps a record of the resources used by the particular process: amount of CPU used, time of execution, time limits, execution ID, etc.

8. I/O status information:

 List of I/O devices that are allocated to the particular process.
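
As a rough illustration of the fields listed above, a simplified PCB could be sketched as a C structure like the one below; the field names and sizes are invented for this sketch and do not come from any real kernel (a real one, such as Linux's task_struct, holds far more information).

/* Hypothetical, simplified PCB for illustration only; field names and
   sizes are invented here, not taken from any real operating system. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;          /* 1. process state                     */
    int             pid;            /* 2. unique process ID                 */
    unsigned long   pc;             /* 3. program counter: next instruction */
    unsigned long   regs[16];       /* 4. saved CPU registers               */
    int             priority;       /* 5. CPU-scheduling information        */
    void           *page_table;     /* 6. memory-management information     */
    long            cpu_time_used;  /* 7. accounting information            */
    int             io_devices[8];  /* 8. I/O status: allocated devices     */
};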
 The OS maintains all processes in process scheduling queues.
 The operating system maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to execute.
 The scheduling activity is carried out by a process called the scheduler.
 Schedulers are of three types −
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
Long Term Scheduler

 It is also called a job scheduler.
 It selects processes from the queue and loads them into memory for execution.

Short Term Scheduler

 It is also called the CPU scheduler.
 The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to one of them.
Medium Term Scheduler

 Medium-term scheduling is a part of swapping.
 A running process may become suspended if it makes an I/O request.
 A suspended process cannot make any progress towards completion.
 In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.
 A thread is the unit of execution within a process. A single process can have many threads.
 A process can be split into many threads. For example, in a browser, many tabs can be viewed as threads.
 Any thread has the following components.
1. Program counter
2. Register set (a register may hold an instruction, a storage address, or any kind of data)
3. Stack space (a block of memory used to store temporary data needed for proper program execution)
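
As a minimal sketch of these components in practice (assuming a POSIX system with the pthread library, which the slides do not name), the program below creates several threads inside one process; each thread gets its own program counter, register set and stack, while sharing the process's code and data.

/* Sketch: one process, several threads (POSIX pthreads assumed).
   Compile with: cc demo.c -pthread                                  */
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function with its own program counter,
   register set and stack; code and global data are shared.          */
static void *tab(void *arg) {
    printf("tab %ld is loading a page\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, tab, (void *)i);  /* create the threads  */
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);                     /* wait for all "tabs" */
    return 0;
}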
Benefits of multithreading:
1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.
2. Resource sharing: Resources like code, data, and files can be shared among all threads within a process.
3. Effective utilization of a multiprocessor system: If we have multiple threads in a single process, we can schedule multiple threads on multiple processors. This makes process execution faster.
 There are two types of threads.
1. User Level Threads − user-managed threads.
2. Kernel Level Threads − operating-system-managed threads acting on the kernel.
 User-level threads can be implemented easily, and they are implemented by the user.
 Example: Java threads
 In the case of user-level threads, the kernel is not aware of the existence of the threads.
 The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
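
One way such a library can save and restore thread contexts entirely in user space is the POSIX ucontext facility; the sketch below (with invented names) is only an illustration of the idea, switching from main into a user-level "thread" and back without any kernel-managed threads.

/* Sketch of user-space context switching with <ucontext.h>;
   a real thread library would also keep a ready queue and a scheduler. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;

static void thread_func(void) {
    puts("user-level thread: running");
    /* returning here resumes uc_link, i.e. main_ctx */
}

int main(void) {
    static char stack[64 * 1024];            /* stack space for the thread  */

    getcontext(&thread_ctx);                 /* initialize the new context  */
    thread_ctx.uc_stack.ss_sp   = stack;
    thread_ctx.uc_stack.ss_size = sizeof stack;
    thread_ctx.uc_link          = &main_ctx; /* where to go when it returns */
    makecontext(&thread_ctx, thread_func, 0);

    puts("main: switching to user-level thread");
    swapcontext(&main_ctx, &thread_ctx);     /* save main, run the thread   */
    puts("main: back after the thread finished");
    return 0;
}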
Advantages
 Thread switching does not require kernel-mode privileges.
 User-level threads can run on any operating system.
 User-level threads are fast to create and manage.

Disadvantages
 User-level threads lack coordination between the threads and the kernel.
 If one thread in a process is blocked, the entire process is blocked.
 In kernel-level threads, thread management is done by the kernel.
 The kernel-level thread is implemented by the operating system.
 The kernel performs thread creation, scheduling and management in kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages
 The kernel can simultaneously schedule multiple threads from the same process.
 The kernel-level thread is fully aware of all threads.
 If one thread in a process is blocked, the kernel can schedule another thread of the same process.

Disadvantages
 Kernel threads are generally slower to create and manage than user threads.
 Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
 There must exist a relationship between user threads and kernel threads.
 Some operating systems provide a combined user-level thread and kernel-level thread facility.
 Solaris is a good example of this combined approach.
 There are three common ways of establishing this relationship.
1. Many to one relationship
2. One to one relationship
3. Many to many relationship
 The many-to-one model maps many user-level threads to one kernel-level thread.
 Thread management is done by the thread library in user space, so it is efficient.
 Only one thread can access the kernel at a time.
Disadvantages
 The entire process will block if a thread makes a blocking system call.
(Blocking means waiting for some event, such as completion of an I/O operation.)
 In the one-to-one model, each user thread is mapped to exactly one kernel thread.
 In this case, if one user thread blocks, the entire process is not blocked.
Disadvantages:
 Creating a user thread requires creating the corresponding kernel thread.
 Windows NT and Windows 2000 use the one-to-one model.
 The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
 The number of kernel threads may be specific to either a particular application or a particular machine.
 Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
 Context means the current state of, or information about, a process.
 Context switching is the technique used by the operating system to switch the CPU from one process to another so that each process can execute its functions using the CPUs in the system.
 When a switch is performed, the system stores the old running process's status in the form of registers and assigns the CPU to a new process to execute its tasks.
 While the new process is running, the previous process must wait in the ready queue.
Process Creation:
 A process may create several new processes via a create-process system call.
 The process that creates another process is termed the parent, while the created sub-process is termed its child.
 Process creation is achieved through the fork() system call.
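
A minimal sketch of process creation with fork() on a UNIX-like system is shown below; the printed messages are only for illustration.

/* Sketch: the parent creates a child with fork() and waits for it. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a new (child) process */

    if (pid < 0) {
        perror("fork");                 /* creation failed              */
    } else if (pid == 0) {
        printf("child : PID=%d, parent PID=%d\n", getpid(), getppid());
    } else {
        wait(NULL);                     /* parent waits for the child   */
        printf("parent: PID=%d created child PID=%d\n", getpid(), pid);
    }
    return 0;
}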
 On a typical UNIX system the process scheduler is termed sched and is given PID 0. The first thing it does at system start-up time is to launch init, which gives that process PID 1. pageout and fsflush are responsible for managing memory and file systems, inetd is responsible for networking functions, dtlogin serves as the login screen, csh is a shell process, Netscape is a browser, and emacs is a text editor.
 Interprocess communication is the mechanism provided by the operating system that allows processes to communicate with each other.
 A process can be of two types:
1. Independent process.
2. Co-operating process.
 An independent process is not affected by the other processes
executing in the system.
 Any process that shares data with other processes is a co-
operating process.
 Cooperating processes require an Interprocess communication
(IPC) mechanism that will allow them to exchange data &
information.
 There are two fundamental models of Interprocess
communication
1. Shared memory
2. Message passing
 In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
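
As a hedged sketch of this model using POSIX shared memory (the region name "/demo_shm" and the message text are made up for the example), one cooperating process could create and write the shared region as follows, while another process would shm_open() and mmap() the same name to read it.

/* Sketch (writer side): create a shared region and write a string into it.
   Error checking is omitted; on some systems, link with -lrt.             */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600); /* shared region */
    ftruncate(fd, 4096);                                     /* set its size  */

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);                  /* map it        */
    strcpy(region, "hello from process A");                  /* write data    */

    munmap(region, 4096);
    close(fd);
    return 0;
}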
 In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. Process A passes the message to the kernel, and the kernel passes the message to process B.
 Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the
same address space.
 It is useful in a distributed environment, where the communicating processes may reside on different computers connected by a network.
 A message passing facility provides at least two operations
(i) Send message (ii) Receive message

 Messages sent by processes may be either fixed or variable in size.
 To send and receive messages in message passing, a
communication link is required between two processes. There
are various ways to implement a communication link.
1. Direct and indirect communication
2. Synchronous and asynchronous communication
3. Buffering
 In direct communication, each process that wants to communicate must explicitly name the sender or receiver of the communication.
 The send() and receive() operations are defined as:
 send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
 In indirect communication, messages are sent and received
from mailboxes or ports.
 The processes can place messages into a mailbox or remove
messages from them. The mailbox has a unique identification.
 A mailbox can be owned by a process or by the operating
system.
 Communication happens using send() and receive().
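
As a sketch of mailbox-style communication using POSIX message queues (the mailbox name "/demo_mq" is invented for the example; link with -lrt on some systems), a process can send to and receive from the mailbox as follows.

/* Sketch: send() and receive() through a mailbox (POSIX message queue). */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr); /* mailbox */

    mq_send(mq, "hello", strlen("hello") + 1, 0);   /* send(mailbox, message)    */

    char buf[128];
    mq_receive(mq, buf, sizeof buf, NULL);          /* receive(mailbox, message) */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}

Here mq_receive() blocks until a message is available, which matches the blocking receive described next; opening the queue with O_NONBLOCK would give the non-blocking behaviour instead.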
 Message passing may be blocking or non-blocking, also known as synchronous and asynchronous.
 Blocking send: the sending process is blocked until the message is
received by the receiving process or by the mailbox.
 Non blocking send: the sending process sends the message and
resumes its operation.
 Blocking receive: the receiver blocks until a message is available.
 Non-blocking receive: the receiver retrieves either a valid message
or a null.
 Whether communication is direct or indirect, messages reside in a temporary queue called a buffer.
 Zero capacity: The queue has a maximum length of zero, so the link cannot have any message waiting in it. In this case, the sender must block until the receiver receives the message.
 Bounded buffer: The queue has a finite length n and can hold up to n messages; if the link is full, the sender must block until space is available in the queue.
 Unbounded buffer: The queue's length is potentially infinite, so any number of messages can wait in it. This is known as automatic buffering.
 In a single-processor system, only one process runs at a time; the others wait until the CPU is free.
 In multiprogramming systems, the operating system schedules processes on the CPU to obtain the maximum utilization of it, and this procedure is called CPU scheduling.
 Different processes wait in memory, ready to execute. The dispatcher takes control of the process that is selected by the scheduler.
 The dispatcher is the module that gives a process control over
the CPU after it has been selected by the scheduler.
 The time taken by the dispatcher to stop one process and start
another process is known as the Dispatch Latency.
 CPU scheduling decisions may take place under the following
four circumstances:
1. When a process switches from the running state to
the waiting state (for I/O request).
2. When a process switches from the running state to
the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to
the ready state (for example, completion of I/O).
4. When a process terminates.
 When Scheduling takes place only under circumstances 1 and
4, we say the scheduling scheme is non-preemptive;
otherwise, the scheduling scheme is preemptive.
 Under non-preemptive scheduling, once the CPU has been
allocated to a process, the process keeps the CPU until it
releases the CPU either by terminating or by switching to the
waiting state.
 Once a process starts its execution, it must finish before another executes; it cannot be paused in the middle.
 In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a task with a higher priority before another task, even though that other task is already running. Therefore, the running task is interrupted for some time and resumed later, when the higher-priority task has finished its execution.
 Thus this type of scheduling is used mainly when a process
switches either from running state to ready state or from
waiting state to ready state.
 There are many different criteria
1. CPU Utilization
2. Throughput
3. Turnaround Time
4. Waiting Time
5. Response Time
CPU Utilization:
 We want to keep the CPU as busy as possible.
 To make the best use of the CPU and not waste any CPU cycles, the CPU should be working most of the time (ideally 100% of the time).
 When the CPU is busy executing processes, work is being done.
Throughput:
 It is the total number of processes completed per unit of time, or the total amount of work done in a unit of time.

Turnaround Time:
 It is the amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of its completion.

Waiting Time:

 Waiting time is the sum of the periods spent waiting in the ready queue.
Response Time:

 It is the amount of time from when a request was submitted until the first response is produced.
 Response time is the time it takes to start responding.

Burst Time:

 Burst time is the amount of time required by a process for executing on the CPU.
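
To make these definitions concrete, the short program below (with made-up arrival and burst times) computes turnaround time = completion time − arrival time and waiting time = turnaround time − burst time for processes served in first-come, first-served order.

/* Illustrative example (process data is made up): computes waiting time
   and turnaround time from arrival and burst times under FCFS order.    */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2}, burst[] = {5, 3, 8};   /* hypothetical processes */
    int n = 3, time = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];     /* CPU idle until arrival   */
        int completion = time + burst[i];
        int turnaround = completion - arrival[i];     /* submission -> completion */
        int waiting    = turnaround - burst[i];       /* time spent in ready queue */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        time = completion;
    }
    return 0;
}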
