OS Threads

Process: A process is a program that has been dispatched from the ready state and scheduled on the CPU for execution. Each process is represented by a Process Control Block (PCB). A process can create other processes, which are known as child processes. A process takes more time to terminate, and it is isolated: it does not share its memory with any other process. A process can be in the following states: new, ready, running, waiting, terminated, and suspended.
Thread: A thread is a segment of a process, which means a process can have multiple threads, and these threads are contained within the process. A thread has three states: Running, Ready, and Blocked.
A thread takes less time to terminate than a process but, unlike processes, threads of the same process are not isolated from one another.
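Because the threads of one process share its memory, a minimal sketch in Python's standard `threading` module can make this concrete (the names here are illustrative): every thread writes into the same list, something separate processes could not do without explicit shared memory or inter-process communication.

```python
import threading

shared = []  # one list in the process's address space, visible to every thread

def worker(n):
    # each thread appends to the same list -- no copying, no message passing
    shared.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 1, 2]
```

Had these been three separate processes, each would have appended to its own private copy of the list.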

Process vs. Thread
1. A process is any program in execution; a thread is a segment of a process.
2. A process takes more time to terminate; a thread takes less time to terminate.
3. A process takes more time to create; a thread takes less time to create.
4. A process takes more time for context switching; a thread takes less time for context switching.
5. A process is less efficient in terms of communication; a thread is more efficient.
6. Multiprogramming is needed to run multiple processes; multiple threads need no multiprogramming, because a single process contains them all.
7. Processes are isolated from each other; threads of a process share memory.
8. A process is called a heavyweight process; a thread is lightweight, as every thread in a process shares its code, data, and resources.
9. Process switching uses an interface into the operating system; thread switching does not require a call into the operating system or an interrupt to the kernel.
10. If one process is blocked, the execution of other processes is not affected; if one user-level thread is blocked, all other user-level threads of that process are blocked.
11. A process has its own Process Control Block, stack, and address space; a thread has its parent's PCB, its own Thread Control Block and stack, and an address space shared with its sibling threads.
12. Changes to the parent process do not affect child processes; since all threads of a process share the address space and other resources, changes to the main thread may affect the behavior of the other threads.
13. Creating a process involves a system call; creating a (user-level) thread involves no system call and is done through APIs.

Types of Threads
Threads are of two types. These are described below.
• User-Level Threads
• Kernel-Level Threads

User-Level Threads

A user-level thread is a type of thread that is not created using system calls; the kernel plays no part in the management of user-level threads. User-level threads can be implemented easily by the user. To the kernel, a process consisting of user-level threads appears as a single-threaded process, so the kernel manages it as a whole. Let's look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
• Implementation of a user-level thread is easier than that of a kernel-level thread.
• Context switch time is lower for user-level threads.
• User-level threads are more efficient than kernel-level threads.
• Because only a program counter, register set, and stack space are present, a user-level thread has a simple representation.
Disadvantages of User-Level Threads
• There is a lack of coordination between the threads and the kernel.
• In case of a page fault, the whole process can be blocked.

Kernel-Level Threads

A kernel-level thread is a type of thread that is created and recognized by the operating system. The kernel maintains a thread table to keep track of all threads in the system, and the operating-system kernel manages the threads itself. Kernel-level threads have a somewhat longer context-switching time.
Advantages of Kernel-Level Threads
• The kernel has up-to-date information on all threads.
• Applications whose threads block frequently are handled well by kernel-level threads.
• Whenever a process requires more time to run, the kernel can allocate more time to it.
Disadvantages of Kernel-Level Threads
• A kernel-level thread is slower than a user-level thread.
• Implementation of this type of thread is a little more complex than that of a user-level thread.

Process Synchronization in OS
Process synchronization is the way processes that share the same memory space are managed in an operating system. It helps maintain the consistency of data by using variables or hardware so that only one process can change the shared memory at a time. There are various solutions to this problem, such as semaphores, mutex locks, and synchronization hardware.
How Does Process Synchronization Work?
For example, if process1 is trying to read the data present at a memory location while process2 is trying to change the data at the same location, there is a high chance that the data read by process1 will be incorrect.

• Entry Section: The entry section handles a process's request to enter the critical section.
• Critical Section: The critical section ensures that only one process at a time modifies the shared data.
• Exit Section: The exit section handles the entry of other processes to the shared data after one process has finished executing.
• Remainder Section: The remaining code, not categorized as any of the above, is contained in the remainder section.
Race Condition
When more than one process is running the same code or modifying the same memory or shared data, there is a risk that the resulting value of the shared data may be incorrect, because all of the processes try to access and modify the shared resource concurrently. The processes effectively "race" to have their result be the final one; this situation is called a race condition. Since many processes use the same data, the results may depend on the order of their execution.
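As an illustrative sketch (not from the original text), the classic lost-update race can be forced deterministically in Python with a barrier that makes two threads read the shared counter before either writes it back:

```python
import threading

counter = 0
barrier = threading.Barrier(2)  # synchronizes the two racing threads

def unsafe_increment():
    global counter
    for _ in range(5):
        tmp = counter        # read the shared value
        barrier.wait()       # both threads have now read the SAME value
        counter = tmp + 1    # both write back tmp + 1: one update is lost
        barrier.wait()       # keep the rounds in lockstep

t1 = threading.Thread(target=unsafe_increment)
t2 = threading.Thread(target=unsafe_increment)
t1.start(); t2.start()
t1.join(); t2.join()

print(counter)  # 5, not the expected 10: five updates were lost
```

The barrier only makes the unlucky interleaving happen every time; in real code the same loss occurs nondeterministically whenever a context switch lands between the read and the write.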

What is the Critical Section Problem?

A part of the code that can be executed by only a single process at any moment is known as a critical section. This means that when many processes want to access and change shared data, only one process is allowed to make changes at any given moment; the other processes have to wait until the data is free to be used.
The wait() function handles entry into the critical section, while the signal() function handles the exit from it. Without a critical section, we cannot guarantee the consistency of the final outcome when the processes execute simultaneously.
Mutual exclusion: If a process is running in the critical section, no other process should be allowed to run in that section at the same time.
Progress: If no process is in the critical section and other processes are waiting outside it to execute, then one of the waiting processes must be permitted to enter the critical section.
No starvation: Starvation means a process waits forever to access the critical section but never gets the chance. No starvation is also known as bounded waiting.
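These requirements are exactly what a mutex lock (one of the solutions mentioned earlier) provides. In this hedged Python sketch, acquiring the lock plays the role of the entry section, the increment is the critical section, and releasing the lock is the exit section:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(100_000):
        with lock:           # entry section: only one thread may pass
            counter += 1     # critical section: modify the shared data
        # exit section: the lock is released on leaving the with-block

threads = [threading.Thread(target=safe_increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000: mutual exclusion prevents lost updates
```

Without the lock, the four threads would race on `counter` and the final value could be anything up to 400000.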
Sleep and wake up:
The concept of sleep and wake-up is simple. If the critical section is not free, the process goes to sleep. It is woken up by the process currently executing inside the critical section when that process leaves, so that the sleeping process can enter the critical section.
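A hedged sketch of sleep and wake-up using Python's `threading.Condition` (the variable names are illustrative): the waiting thread sleeps inside `wait()` until the thread leaving the critical section wakes it with `notify()`.

```python
import threading

cond = threading.Condition()
in_use = True          # pretend another thread is inside the critical section
events = []

def waiter():
    with cond:
        while in_use:          # critical section busy: go to sleep
            cond.wait()        # releases the lock and sleeps until notified
        events.append("waiter entered the critical section")

def occupant():
    global in_use
    with cond:
        events.append("occupant leaving the critical section")
        in_use = False
        cond.notify()          # wake up the sleeping thread

t1 = threading.Thread(target=waiter)
t1.start()
t2 = threading.Thread(target=occupant)
t2.start()
t1.join(); t2.join()

print(events)
```

The `while` loop (rather than a plain `if`) re-checks the condition after waking, which guards against spurious wake-ups.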

Producer-Consumer Problem
The producer-consumer problem is a classical multi-process synchronization problem: we are trying to achieve synchronization between more than one process.
In the producer-consumer problem there is one producer producing items and one consumer consuming the items the producer has produced. Both share the same fixed-size memory buffer.
The task of the producer is to produce an item, put it into the memory buffer, and start producing again; the task of the consumer is to consume items from the memory buffer.
We can solve this problem using semaphores.
A semaphore S is an integer variable that can be accessed only through two standard operations: wait() and signal().
The wait() operation decreases the value of the semaphore by 1, and the signal() operation increases it by 1.

Semaphores are of two types:

1. Binary Semaphore – Similar to a mutex lock, but not the same thing. It can take only two values, 0 and 1, and its value is initialized to 1. It is used to implement solutions to the critical-section problem with multiple processes.

2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
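A brief sketch of both kinds using Python's `threading.Semaphore`, where `acquire()` plays the role of wait() (decrement) and `release()` plays the role of signal() (increment):

```python
import threading

binary = threading.Semaphore(1)    # binary semaphore: value is 0 or 1
binary.acquire()                   # wait(): 1 -> 0, enter the critical section
# ... critical section ...
binary.release()                   # signal(): 0 -> 1, leave the critical section

counting = threading.Semaphore(3)  # counting semaphore: 3 resource instances
results = [counting.acquire(blocking=False) for _ in range(4)]
print(results)  # [True, True, True, False]: only 3 instances exist
```

The fourth non-blocking `acquire()` fails because the counting semaphore has reached 0; a blocking `acquire()` would instead sleep until some thread calls `release()`.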

Problem Statement – We have a buffer of fixed size. A producer can produce an item and place it in the buffer, and a consumer can pick up items and consume them. We need to ensure that while the producer is placing an item in the buffer, the consumer is not consuming an item at the same time. In this problem, the buffer is the critical section.
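A hedged Python sketch of the classic semaphore solution: `empty` counts free slots, `full` counts filled slots, and a mutex protects the buffer itself. The buffer size, item count, and names are illustrative, not from the original text.

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots in the buffer
full = threading.Semaphore(0)             # counts filled slots in the buffer
mutex = threading.Lock()                  # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(10):
        empty.acquire()            # wait(empty): block if the buffer is full
        with mutex:
            buffer.append(item)    # critical section: place the item
        full.release()             # signal(full): one more item available

def consumer():
    for _ in range(10):
        full.acquire()             # wait(full): block if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()            # signal(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(consumed)  # [0, 1, ..., 9]: every item consumed exactly once, in order
```

The two counting semaphores handle the full/empty conditions, while the mutex guarantees that the producer and consumer never touch the buffer simultaneously.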

Dining Philosopher Problem Using Semaphores

The Dining Philosopher Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not by both at once.

The Dining Philosopher Problem is a classic synchronization problem in computer science that involves multiple processes (philosophers) sharing a limited set of resources (forks) in order to perform a task (eating). To avoid deadlock or starvation, a solution must ensure that each philosopher can access the resources needed to perform the task without interference from the other philosophers.

One common solution to the Dining Philosopher Problem uses semaphores, a synchronization
mechanism that can be used to control access to shared resources. In this solution, each fork is
represented by a semaphore, and a philosopher must acquire both the semaphore for the fork to their
left and the semaphore for the fork to their right before they can begin eating. If a philosopher
cannot acquire both semaphores, they must wait until they become available.
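A hedged Python sketch of this semaphore solution, with one semaphore per chopstick. The naive version can deadlock if every philosopher grabs their left chopstick simultaneously; here each philosopher picks up the lower-numbered chopstick first (resource ordering), which is one standard refinement that breaks the circular wait, not the only one.

```python
import threading

K = 5
chopsticks = [threading.Semaphore(1) for _ in range(K)]  # one per chopstick
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    first, second = sorted((left, right))   # resource ordering avoids deadlock
    for _ in range(3):
        chopsticks[first].acquire()         # wait() on the first chopstick
        chopsticks[second].acquire()        # wait() on the second chopstick
        meals[i] += 1                       # eat
        chopsticks[second].release()        # signal()
        chopsticks[first].release()         # signal()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # [3, 3, 3, 3, 3]: every philosopher eats, no deadlock
```

Without the `sorted()` ordering, philosopher K-1 and philosopher 0 could each hold one end of the cycle and the whole table could deadlock.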
