OS Thread
Process: A process is a program in execution that is scheduled on the CPU. A Process Control Block (PCB) holds all the information about a process. A
process can create other processes, which are known as child processes. A process takes more
time to terminate, and it is isolated: it does not share memory with any other process.
A process can be in the following states: new, ready, running, waiting, terminated, and suspended.
Thread: A thread is a segment of a process, which means a process can have multiple threads, and
these threads are contained within the process. A thread has three states: Running, Ready,
and Blocked.
A thread takes less time to terminate than a process, but unlike processes, threads are not
isolated: all threads of a process share the same memory.
Process vs. Thread
1. A process is any program in execution. A thread is a segment of a process.
2. A process takes more time to terminate. A thread takes less time to terminate.
3. A process takes more time to create. A thread takes less time to create.
4. A process takes more time for context switching. A thread takes less time for context switching.
5. A process is less efficient in terms of communication. A thread is more efficient in terms of communication.
6. Multiprogramming holds the concept of multiple processes. Multiple threads do not need multiprogramming, because a single process consists of multiple threads.
7. Processes are isolated from each other. Threads share memory.
8. A process is called a heavyweight process. A thread is lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface into the operating system. Thread switching does not require a call into the operating system or an interrupt to the kernel.
10. If one process is blocked, the execution of other processes is not affected. If a user-level thread is blocked, all other user-level threads of that process are blocked.
11. A process has its own Process Control Block, stack, and address space. A thread has its parent's PCB, its own Thread Control Block and stack, and a common address space.
12. Changes to the parent process do not affect child processes. Since all threads of a process share the address space and other resources, changes to the main thread may affect the behavior of the other threads of that process.
13. Creating a process involves a system call. Creating a thread involves no system call; threads are created using APIs.
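The memory sharing described above (threads of one process updating the same variables) can be sketched with Python's standard threading module; the counter name and thread count below are illustrative:

```python
# Sketch: threads of one process share a single address space, so they can
# all update the same variable. A lock prevents lost updates.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # serialize updates to the shared counter
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: every thread wrote to the same memory
```

Separate processes would each get their own copy of the counter; only threads see one another's writes directly.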
Types of Threads
Threads are of two types. These are described below.
User Level Thread
Kernel Level Thread
User Level Threads
A user-level thread is a type of thread that is not created using system calls. The kernel plays no
part in the management of user-level threads, so they can be implemented easily in user space.
The kernel is unaware of user-level threads and manages the process that contains them as a single unit.
Let’s look at the advantages and disadvantages of User-Level Thread.
Advantages of User-Level Threads
Implementation of the User-Level Thread is easier than Kernel Level Thread.
Context Switch Time is less in User Level Thread.
User-Level Thread is more efficient than Kernel-Level Thread.
Since only a program counter, register set, and stack space are present, a user-level
thread has a simple representation.
Disadvantages of User-Level Threads
There is a lack of coordination between threads and the kernel.
In case of a page fault, the whole process can be blocked.
Process Synchronization in OS
Process synchronization is the way by which processes that share the same
memory space are managed in an operating system. It helps maintain the consistency of data by
using variables or hardware so that only one process can make changes to the shared memory at a
time. There are various solutions to this problem, such as semaphores, mutex locks, and
synchronization hardware.
How Process Synchronization Works in an OS
For example, if process1 is trying to read the data present at a memory location while
process2 is trying to change the data at the same location, there is a high chance that the
data read by process1 will be incorrect.
A part of code that can only be accessed by a single process at any moment is known as a critical
section. This means that when a lot of programs want to access and change a single shared data,
only one process will be allowed to change at any given moment. The other processes have to wait
until the data is free to be used.
The wait() function mainly handles entry to the critical section, while the signal() function
handles exit from it. Without protecting the critical section, we cannot guarantee
the consistency of the end result when the processes execute simultaneously. A correct solution must satisfy the following requirements.
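A minimal sketch of a critical section guarded by wait() and signal(), using a binary semaphore from Python's standard library (Python names these operations acquire() and release(); the shared list here is illustrative):

```python
# Sketch: wait()/signal() modeled by a binary semaphore guarding a
# critical section. Only one thread is inside at any moment.
import threading

mutex = threading.Semaphore(1)  # 1 means the critical section is free
shared = []

def enter_critical_section(item):
    mutex.acquire()              # wait(): blocks if another thread is inside
    try:
        shared.append(item)      # critical section: exclusive access
    finally:
        mutex.release()          # signal(): lets one waiting thread enter

workers = [threading.Thread(target=enter_critical_section, args=(i,))
           for i in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sorted(shared))  # all 8 items present, none lost to a race
```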
Mutual exclusion: If a process is running in the critical section, no other process should be allowed
to run in that section at that time.
Progress: If no process is in the critical section and other processes are waiting outside the
critical section to execute, then one of the waiting processes must be permitted to enter the critical section.
No starvation: Starvation means a process keeps waiting forever to access the critical section but
never gets a chance. No starvation is also known as Bounded Waiting.
Sleep and wake up:
The concept of sleep and wake up is very simple. If the critical section is not free, the process
goes to sleep. It is woken up by the process currently executing inside the
critical section when that process leaves, so the sleeping process can enter the critical section.
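The sleep-and-wakeup idea can be sketched with a condition variable from Python's standard library: a thread that finds the critical section busy sleeps, and the thread leaving the section wakes it. The flag and list names below are illustrative:

```python
# Sketch of sleep and wake up with a condition variable: a thread sleeps
# while the critical section is busy and is woken by the thread leaving it.
import threading

cond = threading.Condition()
busy = False
log = []

def worker(name):
    global busy
    with cond:
        while busy:          # critical section occupied: go to sleep
            cond.wait()      # releases the lock and sleeps until notified
        busy = True          # claim the critical section
    log.append(name)         # work inside the critical section
    with cond:
        busy = False
        cond.notify()        # wake up one sleeping thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(log))  # every worker eventually got its turn
```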
Producer-Consumer problem
The Producer-Consumer problem is a classical multi-process synchronization problem: we
are trying to achieve synchronization between more than one process.
In the producer-consumer problem there is one Producer, which produces items,
and one Consumer, which consumes the items produced by the Producer. Both share the
same fixed-size memory buffer.
The task of the Producer is to produce the item, put it into the memory buffer, and again start
producing items. Whereas the task of the Consumer is to consume the item from the memory
buffer.
This problem can be solved using semaphores.
A semaphore S is an integer variable that can be accessed only through two standard operations:
wait() and signal().
The wait() operation decreases the value of the semaphore by 1, and the signal() operation increases its
value by 1. Semaphores are of two types:
1. Binary Semaphore – Its value can be only 0 or 1. It is used for mutual exclusion on a
resource that has a single instance.
2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.
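As a sketch, a counting semaphore initialized to the number of instances lets at most that many threads hold the resource at once; the limit of 3 and the counter names below are illustrative:

```python
# Sketch: a counting semaphore guarding a resource with 3 instances.
# At most 3 threads hold the resource at the same time.
import threading

slots = threading.Semaphore(3)   # resource has 3 instances
in_use = 0
peak = 0
guard = threading.Lock()         # protects the bookkeeping counters

def use_resource():
    global in_use, peak
    slots.acquire()              # wait(): decrements; blocks when value is 0
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    with guard:
        in_use -= 1
    slots.release()              # signal(): increments; wakes one waiter

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= 3)  # never more than 3 holders at once
```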
Problem Statement – We have a buffer of fixed size. A producer can produce an item and place
it in the buffer. A consumer can pick items from the buffer and consume them. We need to ensure that while the
producer is placing an item in the buffer, the consumer does not consume any
item at the same time. In this problem, the buffer is the critical section.
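A common semaphore-based sketch of this solution uses three synchronization objects: empty counts free slots, full counts filled slots, and a mutex protects the buffer. The buffer size and item count below are illustrative:

```python
# Sketch of the bounded-buffer producer-consumer solution with semaphores:
# empty counts free slots, full counts filled slots, mutex guards the buffer.
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # free slots in the buffer
full = threading.Semaphore(0)             # filled slots in the buffer
mutex = threading.Lock()                  # the buffer is the critical section
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()          # wait(empty): block if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()           # signal(full): an item is available

def consumer(n):
    for _ in range(n):
        full.acquire()           # wait(full): block if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty): a slot is free again

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()

print(consumed == list(range(20)))  # items consumed in production order
```

With one producer and one consumer and a FIFO buffer, items come out in the order they went in; the semaphores guarantee neither side outruns the other.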
Dining Philosopher Problem
The Dining Philosopher Problem is a classic synchronization problem in computer science that involves
multiple processes (philosophers) sharing a limited set of resources (forks) in order to perform a
task (eating). In order to avoid deadlock or starvation, a solution must be implemented that ensures
that each philosopher can access the resources they need to perform their task without interference
from other philosophers.
One common solution to the Dining Philosopher Problem uses semaphores, a synchronization
mechanism that can be used to control access to shared resources. In this solution, each fork is
represented by a semaphore, and a philosopher must acquire both the semaphore for the fork to their
left and the semaphore for the fork to their right before they can begin eating. If a philosopher
cannot acquire both semaphores, they must wait until they become available.
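A sketch of this semaphore-based solution in Python, with one deadlock-avoidance twist: each philosopher picks up the lower-numbered fork first (resource ordering), so the circular wait that causes deadlock cannot form. The philosopher count and number of rounds are illustrative:

```python
# Sketch of the dining philosophers with one semaphore per fork.
# Deadlock is avoided by acquiring forks in a fixed global order.
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per fork
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # resource ordering
    for _ in range(rounds):
        forks[first].acquire()   # wait() on the lower-numbered fork
        forks[second].acquire()  # wait() on the higher-numbered fork
        meals[i] += 1            # eat while holding both forks
        forks[second].release()  # signal(): put the forks back down
        forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # every philosopher ate 10 times; no deadlock occurred
```

Without the ordering (if every philosopher grabbed the left fork first), all five could each hold one fork and wait forever for the other, which is exactly the circular wait the text warns about.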