Chapter 2: Process and Threads

Process
• A process is an instance of a program in execution. A program by itself is not a process; a program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file), whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when an executable file is loaded into memory.
• Even if the user is running only a single program at a time, the operating system executes several internal activities to support or control that program, so we call these activities processes as well.
• Program: a set of instructions a computer can interpret and execute.
• Process = dynamic
  - Part of a program in execution; a live entity that can be created, executed and terminated.
  - It goes through different states: Wait, Running, Ready, etc.
  - It requires resources to be allocated by the OS.
  - One or more processes may be executing the same code.
• Program = static; it has no states.
• This example illustrates the difference between a process and a program:

    main()
    {
        int i, prod = 1;
        for (i = 0; i < 100; i++)
        {
            prod = prod * i;
        }
    }

• It is a program containing one multiplication statement (prod = prod * i), but the process will execute 100 multiplications, one at a time, through the 'for' loop.
• Although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences. For instance, several users may be running different copies of a mail program, or the same user may invoke many copies of a web browser program. Each of these is a separate process, and although the text sections are equivalent, the data, heap and stack sections may vary.

Process States and their Transitions
• As a program executes, it generally changes state. The state of a process is defined as the current activity of that process.
• Each process is in one of the following states, as shown in the figure below.

Fig: Process state transition diagram (New → Ready on admission; Ready → Running on scheduler dispatch; Running → Ready on interrupt; Running → Waiting on I/O or event wait; Waiting → Ready on I/O or event completion; Running → Terminated on exit)

1. New: The process is being created.
2. Running: Instructions are being executed.
3. Waiting: The process is waiting for some event to occur (such as I/O completion or reception of a signal).
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.

• Running implies that the process is currently being executed by the CPU. Ready to run means that it needs CPU attention and time to run, i.e. the process is waiting to be assigned to a processor. Waiting implies that the process is not running currently and is waiting for some event to occur, such as an I/O completion event.

Process Control Block
• In an operating system each process is represented by a process control block (PCB), also called a task control block. It is a data structure that physically represents a process in the memory of the system.
• The PCB is the store from which the OS locates key information about a process: when the CPU switches (context switching) from one process to another, the OS uses the PCB to save the state of the process and uses this information when control returns.
• Information in the PCB is updated during the transition of process states. When the process terminates, the PCB for that particular process is released from memory.

Fig: Process Control Block (process state, process ID number, program counter, CPU registers, memory limits, list of open files)
It contains many pieces of information associated with a specific process, including the following:
1. Process State: The current state (e.g. Ready, Running) of the process.
2. Process ID Number: The unique identification of the process, used to track which information belongs to which process.
3. Program Counter: The address of the next instruction in the program to be executed.
4. Pointers: Pointers to the parent process and to child processes, if they exist.
5. CPU Registers: The register save area.
6. CPU-Scheduling Information: Information associated with scheduling, such as the process priority.
7. Accounting Information: The amount of CPU time allocated, CPU time used, time limits, job or process number, etc.
8. I/O Status Information: The list of I/O devices allocated to the process, outstanding I/O requests, I/O devices (e.g. tape drives) assigned to this process, a list of files in use by the process, and so on.

Operations on Processes
The set of operations on processes includes:

1. Create a Process
• A process can create several new processes.
• The creating process is called the parent process, while a newly created process is called a child of that process.
• A new process can in turn create more child processes, thus forming a tree-like structure as shown in the figure below.
• In the figure, each child (B, C, D and E) has only one parent (A), but one parent (A) may have many children (B, C, D and E).
• Every process needs certain resources (CPU time, memory, I/O devices, etc.) to accomplish its task.
• If a process creates a sub-process, the sub-process may obtain its resources by requesting them from the operating system, or the parent may partition its own resources among its children.
• If a process is created from another process such that the new process must complete its execution before the old one can resume, the process is said to be created synchronously.
• If a new process is created from another process such that both processes (new and old) can run concurrently in a pseudo-parallel way, the process is said to be created asynchronously.

Write a C program to create a new process using the system call fork() in UNIX:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main()
    {
        pid_t pid;

        /* fork another process */
        pid = fork();
        if (pid < 0) {              /* error occurred */
            fprintf(stderr, "Fork Failed");
            exit(-1);
        } else if (pid == 0) {      /* child process */
            execlp("/bin/ls", "ls", NULL);
        } else {                    /* parent process */
            /* parent will wait for the child to complete */
            wait(NULL);
            printf("Child Complete");
            exit(0);
        }
    }

2. Run a Process
• A process runs or executes when it is loaded into main memory.

3. Suspend a Process
• A process goes to the suspended or blocked state when it is waiting for some event, such as an I/O completion event, to occur; i.e. the process is suspended when it is waiting for another process to complete its operation.
• All processes that have been suspended are in the blocked state.

4. Get and Set Process Information
• Assigning and retrieving CPU time, priority, process ID and other information related to the process.

5. Process Termination
• A process terminates when it finishes executing its final statement.
• At this time, the process may return data (a result) to its parent process.
• After this, all the resources, including allocated memory, open files, I/O devices etc., are deallocated by the operating system, and its PCB is also erased.
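As a small companion to the termination step above, the following is a minimal sketch (our own, not from the notes) of how a parent can collect the result a child hands back through its exit status, using waitpid():

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            exit(42);                          /* child: its exit code is the "result" returned to the parent */
        } else {
            int status;
            waitpid(pid, &status, 0);          /* parent blocks until the child terminates */
            if (WIFEXITED(status))
                printf("child returned %d\n", WEXITSTATUS(status));
        }
        return 0;
    }

Once the parent has collected this status, the kernel can discard the child's PCB, which is exactly the deallocation step described above.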
Process Scheduling
• The main objective of multiprogramming is to have some process running at all times, so as to have maximum CPU utilization. The main objective of a time-sharing system is to switch the CPU among several processes so frequently that the user can interact with each program while it is running. All these processes are arranged for the CPU in a scheduled manner.
• The figure below shows a common representation of process scheduling as a queuing diagram. A rectangular box represents a queue, while a circle represents the resource that serves the queue. As processes enter the system, they are put in the job queue. This queue consists of all the processes in the system. Some of them are stored in main memory waiting for CPU time, so they are placed in a list called the ready queue.

Fig: Queuing representation of process scheduling (ready queue → CPU; I/O queue → I/O device; paths back to the ready queue on I/O completion, time slice expiry, forking of a child, or an interrupt)

• A new process is always kept in the ready queue. It waits in the ready queue until it is selected for execution. Once it is allocated the CPU, it starts executing. Then several events may occur, as listed below.
  - When the process is allocated the CPU, it executes for a while and eventually terminates, or waits for a particular event to occur, such as an I/O request.
  - The system consists of a number of processes requesting limited resources such as a disk. If the disk is busy with an I/O request from some other process, then the process has to wait for the disk; such a process is placed in a queue called the I/O queue.
  - A process may create an asynchronous sub-process; it must then wait in some queue for the sub-process's termination.
  - A process's time quantum may expire; it is then interrupted and put back in the ready queue.
• The process continues this cycle until it terminates.

Concurrent Processes
1. Concurrent processing is achieved by having multiple CPUs, each executing a different process or part of a process simultaneously.
2. Concurrent or co-operating processes are those which execute simultaneously and may affect, or be affected by, other processes executing in the system.
3. Concurrent processes come into conflict with each other when they compete for the use of the same resources, such as I/O devices, memory, processor time, shared files, etc., if there is no exchange of information between the processes, i.e. no IPC (inter-process communication).
4. The best examples are transaction processes in an airline reservation system. They share a common database, and update and read the same data on a shared basis.
5. Concurrency can be achieved by the following means (a short fork-based illustration follows this list):
   i. Hardware parallelism: The CPU can be computing while one or more I/O devices are running at the same time. This is what is actually implemented in a multiprogramming environment.
   ii. Pseudo-parallelism: Rapid switching of the CPU among processes is known as pseudo-parallelism, as it gives the effect that many processes are running concurrently. This is what is actually implemented in a time-sharing system.
   iii. Real parallelism: Actual parallelism is achieved by having multiple CPUs, each executing a different process or a part of a process simultaneously.
• The main properties of concurrent processes are:
   i. They share some resources.
   ii. They are subject to race conditions.
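As a minimal sketch (ours, under the assumption of a UNIX system), two processes created with fork() illustrate pseudo-parallelism: on a single CPU their output interleaves because the scheduler switches between them, while on a multiprocessor they may genuinely run at the same time.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();
        const char *who = (pid == 0) ? "child" : "parent";

        for (int i = 0; i < 5; i++) {
            printf("%s: iteration %d\n", who, i);
            usleep(1000);               /* give the scheduler a chance to switch */
        }
        if (pid > 0)
            wait(NULL);                 /* parent reaps the child */
        return 0;
    }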
Race Condition
• The situation where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when is called a race condition.
• So a race condition is a situation where several processes are accessing and manipulating the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
• To guard against race conditions, some form of synchronization is necessary among the co-operating processes.

Example 1:

Fig: Two processes want to access shared memory (the spooler directory) at the same time (out = 4, in = 7)

• To see how inter-process communication works in practice, let us consider a simple but common example: a print spooler. When a process wants to print a file, it enters the file name in a special spooler directory. Another process, the printer daemon, periodically checks to see whether there are any files to be printed, and if there are, it prints them and removes their names from the directory.
• Imagine that our spooler directory has a large number of slots, numbered 0, 1, 2, ..., each one capable of holding a file name. Also imagine that there are two shared variables:
  - out: points to the next file to be printed.
  - in: points to the next free slot in the directory.
• At a certain instant, slots 0 to 3 are empty (the files have already been printed) and slots 4 to 6 are full (with the names of files to be printed).
• More or less simultaneously, processes A and B decide they want to queue a file for printing, as shown in the figure above. Process A reads "in" and stores the value, 7, in a local variable called next_free_slot.
• Just then a clock interrupt occurs and the CPU decides that process A has run long enough, so it switches to process B.
• Process B also reads the value of the variable in, and also gets a 7, so it stores the name of its file in slot 7 and updates in to be 8. Then it goes off and does other things.
• Eventually, process A runs again, starting from the place it left off last time. It looks at next_free_slot, finds a 7 there, and writes its file name in slot 7, erasing the name that process B just put there.
• Then it computes next_free_slot + 1, which is 8, and sets in to 8. The spooler directory is now internally consistent, so the printer daemon will not notice anything wrong, but process B will never receive any output.

Example 2:
• Consider two co-operating processes sharing variables A and B, each executing a short sequence of assignments on them. Suppose our intention is to get A = 3 and B = 4 after both processes have executed.
• With one interleaved order of execution, A gets 3 and B gets 4, as desired.
• But with another interleaved order of execution, A gets 5 and B gets 4, which is not desired.
• Thus the outcome of the interleaved execution depends on the particular order in which the accesses take place.
• To solve this problem, the shared variables A and B should not be allowed to be accessed simultaneously.
• We can avoid this through a synchronization mechanism.
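The same effect is easy to reproduce on a real machine. Below is a minimal sketch (ours, using POSIX threads rather than separate processes, purely for illustration): two threads perform an unsynchronized read-modify-write on a shared counter, and increments are routinely lost.

    #include <stdio.h>
    #include <pthread.h>

    #define ITERATIONS 1000000

    static long counter = 0;              /* shared data, no synchronization */

    static void *worker(void *arg)
    {
        for (int i = 0; i < ITERATIONS; i++)
            counter = counter + 1;        /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but the interleaving of the two updates
           frequently loses increments, so the printed value varies run to run. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Compile with -pthread; the varying output is exactly the "result depends on who runs precisely when" behaviour defined above.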
Introduction to IPC
• Processes within a system may be independent or co-operating. A process is independent if it cannot affect, or be affected by, another process.
• A process is co-operating if it can affect, or be affected by, another process.
• Any process that shares data or resources with another process is a co-operating process. Processes may be running on one or more computers connected by a network.
• Processes frequently need to communicate with each other.
• For example, in a shell pipeline, the output of the first process must be passed to the second process, and so on down the line.
• Thus there is a need for communication between processes.
• Processes that are co-operating to get some job done often need to communicate with one another and synchronize their activities. This communication is called inter-process communication (IPC). IPC enables one application to control another application, and allows several applications to share the same data without interfering with one another.
• IPC is also useful in a distributed environment, where the communicating processes may reside on different computers within a network.
• Process co-operation is necessary for addressing the following issues:
   i. Information sharing: Several users/processes may be interested in accessing the same piece of information (say a shared variable or file). We must allow concurrent access to such information.
   ii. Computation speedup: In order to increase the computation speed for a given large task, we can break it into subtasks, each of which executes in parallel with the others. This can be achieved only if we have multiple CPUs.
   iii. Modularity: We may want to construct the system in a modular fashion, i.e. dividing the system functions into separate processes or threads.
   iv. Convenience: A single user may have many tasks to work on at one time. Example: a user may be editing, printing and compiling in parallel.

Two Fundamental Ways of IPC

> Shared Memory (Original Sharing)
• Here a region of memory that is shared by the co-operating processes is established.
• The figure below shows the communication model using shared memory.

Fig: Shared-memory model (processes A and B communicate through a shared region; the kernel is involved only in setting it up)

• Processes can exchange information by reading and writing data in the shared region.
• Shared memory allows maximum speed and convenience of communication, as it can be done at the speed of memory within the computer. System calls are required only to establish shared memory regions; once shared memory is established, no assistance from the kernel is required.

> Message Passing (Copy Sharing)
• IPC is best provided by a message passing system.
• The figure below shows the communication model using message passing.

Fig: Message-passing model (processes A and B exchange messages through the kernel)

• Communication takes place by means of messages exchanged between the co-operating processes.
• In the message passing approach, the information to be shared is physically copied from the sender process's address space to the address space of the receiver process. This is done by transmitting the data to be copied in the form of a message (a message is a block of information).
• Since computers in a network do not share memory, processes in a distributed system normally communicate by exchanging messages rather than through shared data. Therefore, message passing is the basic IPC mechanism in distributed systems.
• It is easier to implement than shared memory.
• It is slower than shared memory, since message passing systems are typically implemented using system calls, which require the more time-consuming step of kernel intervention.
• Message passing is useful for exchanging smaller amounts of data.
• It enables processes to communicate by exchanging messages, and allows programs to be written using communication primitives such as send and receive.
• Example: a chat program on the WWW.

    Process A
        send(B, &message);      /* send message to destination process B */
    Process B
        receive(A, &message);   /* receive message from source process A */

• Here, a link is established automatically between every pair of processes that want to communicate. The processes only need to know each other's identity to communicate. However, if no message is available, the receiver may block until one arrives.
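The send/receive primitives above are abstract. As one concrete sketch (our own, using a UNIX pipe, which is the mechanism behind the shell-pipeline example), a parent can pass a short message to its child; the data is copied through the kernel, which is the defining property of the message-passing style:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        char buf[64];

        pipe(fd);                              /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {                     /* child: the receiver */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* blocks until a message arrives */
            buf[n] = '\0';
            printf("child received: %s\n", buf);
            return 0;
        }
        close(fd[0]);                          /* parent: the sender */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));        /* data is copied into the kernel, then to the child */
        close(fd[1]);
        wait(NULL);
        return 0;
    }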
Critical Region
• Each process has different sections of code.
• A section of code, or set of operations, in which a process may be changing shared variables or updating a common file or table is known as a critical region or critical section.
• So, simply, the critical region is the part of the program where shared resources are accessed.
• It should be ensured that the critical section of one process is not executed simultaneously with the critical section of another process.
• Example: Suppose two or more processes require access to a single non-sharable resource, such as a printer. During the course of execution, each process will be sending commands to the I/O device, receiving status information, sending data, and/or receiving data. Such a resource is called a critical resource, and the portion of the program that uses it is called the critical section of the program. Only one program at a time may be allowed in its critical section.
• This is ensured by a synchronization mechanism.

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (TRUE);

Fig: General structure of a typical process

• Each process must request permission to enter its critical section. The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section.
• The remaining code, or the set of operations that does not belong to the critical section, is called the remainder section.
• When one process is executing in its critical section, no other process is allowed to execute in its critical section. Thus the execution of critical sections by the processes must be mutually exclusive in time.
• Although we could avoid race conditions using the above technique, we need four additional conditions to hold in order to have a good solution:
   i. No two processes may be simultaneously inside their critical sections.
   ii. No assumptions may be made about the speeds or the number of CPUs.
   iii. No process running outside its critical section may block other processes.
   iv. No process should have to wait forever to enter its critical region.

Mutual Exclusion and its Implementation
• Mutual exclusion is a mechanism to ensure that only one process is doing a certain thing at one time, and the others are prevented from modifying the shared resource (say, a file) until the current process finishes.

Fig: Mutual exclusion using critical regions (A enters its critical region; B attempts to enter and is blocked until A leaves; B then enters and later leaves its critical region)

• Implementation of mutual exclusion: let us consider the example of the "Too Much Pizza" problem. Let there be two persons, A and B, with schedules as shown in the table below.
    Time    Person A                                    Person B
    3:00    Look in fridge (no pizza)
    3:05    Leave for store
    3:10    Arrive at store                             Look in fridge (no pizza)
    3:15    Buy pizza                                   Leave for store
    3:20    Leave the store                             Arrive at store
    3:25    Arrive home, place the pizza in the fridge  Buy pizza
    3:30                                                Leave the store
    3:35                                                Arrive home, place the pizza in the fridge

• The problem shows that when two co-operating processes are not synchronized, they may face unexpected errors due to a race condition.
• A mutual exclusion mechanism, when applied to this problem, will ensure that only one person buys pizza at one time.

• Solution I: It is assumed that a note can be put up by either of them to indicate that they are leaving for the store to buy pizza.

    Process A and B:
        if (no pizza) {
            if (no note) {
                leave note;
                buy pizza;
                remove note;
            }
        }

• Solution II:

    Process A:                      Process B:
        leave noteA;                    leave noteB;
        if (no noteB) {                 if (no noteA) {
            if (no pizza) {                 if (no pizza) {
                buy pizza;                      buy pizza;
            }                               }
        }                               }
        remove noteA;                   remove noteB;

• Solution II can leave the two processes in a state where, if both leave their own notes simultaneously, each one's if statement returns false, and thus neither of them will be able to execute its instructions for buying pizza.

• Solution III:

    Process A:                      Process B:
        leave noteA;                    leave noteB;
        if (no noteB) {                 while (noteA) {
            if (no pizza) {                 /* do nothing */
                buy pizza;                  }
            }                               if (no pizza) {
        }                                       buy pizza;
        remove noteA;                       }
                                            remove noteB;

• Solution III tends to leave process B in a state where it consumes CPU cycles while waiting for its turn to execute the instruction following the while condition. This technique is called busy waiting.
• All three solutions are attempts at implementing a mutual exclusion mechanism.

Techniques for Avoiding Race Conditions / Techniques for Achieving Mutual Exclusion
   i. Mutual exclusion with busy waiting
   ii. Sleep and Wakeup
   iii. Semaphores
   iv. Monitors
   v. Message Passing

Mutual Exclusion with Busy Waiting
• These methods work on the principle of busy waiting, i.e. continuously testing a variable and waiting for some value to appear.
• Drawback: no other code can be executed while waiting, so CPU time is wasted.
• They are easier to implement and less error-prone as well.
• There are a number of techniques to achieve mutual exclusion based on busy waiting, as given below:
   1. Lock variables
   2. TSL instruction
   3. Disabling interrupts
   4. Strict alternation
   5. Dekker's algorithm
   6. Peterson's solution

1. Lock Variables
• This is a software solution.
• The most common technique for serializing access to a resource is to lock the resource.
• The state of the lock determines whether the resource is currently in use by a process or not.
• The simplest case of a lock is a bit that is set to one when the resource is in use and set to zero when the resource is not in use.
• A single, shared (lock) variable is initially set to 0.
• When a process wants to enter its critical region, it first tests the lock.
• If the lock is 0, the process sets it to 1 and enters the critical region.
• If the lock is already 1, the process just waits until it becomes 0.
• Thus, a 0 means that no process is in its critical region, and a 1 means that some process is in its critical region. (A sketch of this idea in C follows; its flaw is described under Drawbacks below.)
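As a minimal sketch (our own illustration; the function names acquire/release are ours, and this is deliberately NOT a correct solution), the lock-variable idea looks like this in C:

    int lock = 0;                   /* 0 = free, 1 = some process is in its critical region */

    void acquire(void)
    {
        while (lock != 0)
            ;                       /* busy wait until the lock appears to be free */
        lock = 1;                   /* claim it -- another process may be scheduled right here */
    }

    void release(void)
    {
        lock = 0;                   /* release the lock */
    }

The gap between testing lock and setting it to 1 is exactly where the first drawback below strikes.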
• Drawbacks:
   > Suppose that one process reads the lock and sees that it is 0. Before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1. When the first process runs again, it will also set the lock to 1, and two processes will be in their critical regions at the same time.
   > Consider two processes, P1 and P2, and two critical resources, R1 and R2. Suppose that each process needs access to both resources to perform part of its function. Then it is possible to have the following situation: R1 is assigned by the OS to P2, and R2 is assigned to P1. Each process is waiting for one of the two resources. Neither will release the resource it already owns until it has acquired the other resource and performed its critical section. In such a case both processes are deadlocked.
   > Suppose that three processes, P1, P2, and P3, each require periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for it. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that P3 is granted access, and that before it completes its critical section, P1 again requires access. If P1 is granted access after P3 has finished, and if P1 and P3 repeatedly grant access to each other, then P2 may be denied access to the resource indefinitely, even though there is no deadlock situation. Such a problem is known as starvation.

2. Test and Set Lock (TSL)
• TSL RX, LOCK
• The TSL instruction reads from a location in memory and then stores a non-zero value at that address, as a single indivisible operation.
• It is implemented in hardware and is used in systems with multiple processors.
• While one processor is performing the TSL, no other processor can access the same memory location until the instruction is finished.
• Locking of the memory bus in this way is done in hardware.
• Many computers designed with the multiple-processor concept have a hardware instruction called Test and Set Lock.
• It reads the contents of the memory word LOCK into register RX and then stores a non-zero value at the memory address LOCK. The operations of reading the word and storing into it are guaranteed to be indivisible: no other processor can access the memory word until the instruction is finished. The CPU executing the TSL instruction locks the memory bus to prohibit other CPUs from accessing memory until it is done.
• The example below is an assembly pseudo-code implementation using the TSL instruction:

    enter_region:
        TSL REGISTER, LOCK      | copy LOCK to register and set LOCK to 1
        CMP REGISTER, #0        | was LOCK zero?
        JNE enter_region        | if it was non-zero, LOCK was set, so loop
        RET                     | return to caller; critical region entered

    leave_region:
        MOVE LOCK, #0           | store a 0 in LOCK
        RET                     | return to caller

• One solution to the critical region problem is now straightforward.
• The first instruction copies the old value of LOCK into the register and then sets LOCK to 1; the old value is then compared with zero. If it is non-zero, the lock was already set, so the program just goes back to the beginning and tests it again. Sooner or later it becomes zero, when the process currently in its critical section completes and leaves it, and control returns to the caller.
• So, simply, before entering its critical region a process calls enter_region, which does busy waiting until the lock is free; then it acquires the lock and returns.
• After the critical region, the process calls leave_region, which stores a 0 in LOCK.
• As with all solutions based on critical regions, the processes must call enter_region and leave_region at the correct times for the method to work. If a process cheats, the mutual exclusion will fail.
• The test-and-set technique suffers from two problems: 1) high bus traffic, since many processes might be requesting and waiting for the lock, and 2) unfairness, since some processor might be starved of ever acquiring the lock.
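On modern compilers the same idea can be expressed portably without writing assembly. Below is a minimal sketch (ours, not from the notes) using the C11 atomic_flag, whose test-and-set operation plays the role of the TSL instruction:

    #include <stdatomic.h>

    /* atomic_flag_test_and_set atomically reads the old value and sets the
       flag, just like the TSL instruction described above. */
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void enter_region(void)
    {
        while (atomic_flag_test_and_set(&lock))
            ;                       /* busy wait: flag was already set by someone else */
    }

    void leave_region(void)
    {
        atomic_flag_clear(&lock);   /* equivalent of MOVE LOCK, #0 */
    }

This spin lock inherits the same bus-traffic and fairness concerns mentioned above.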
3. Disabling Interrupts
• This is a hardware approach to achieving mutual exclusion.
• The simplest solution is to have each process disable all interrupts just after entering its critical region and re-enable them just before leaving it:

    while (TRUE) {
        /* disable interrupts */
        /* critical section */
        /* enable interrupts */
        something_else();
    }

• With interrupts disabled, no clock interrupts can occur.
• The CPU is only switched from process to process as a result of clock or other interrupts, after all, and with interrupts turned off the CPU will not be switched to another process. Thus, once a process has disabled interrupts, it can examine and update the shared memory without fear that any other process will intervene.
• However, this method has a serious problem and hence is an unattractive technique, because it is unwise to give user processes the power to turn off interrupts. Suppose that one of them did so and then never turned them on again; that would be the end of the system. Furthermore, if the system is a multiprocessor with two or more CPUs, disabling interrupts affects only the CPU that executed the disable instruction. The other ones will continue running and can still access the shared memory.
• It is frequently convenient for the kernel itself to disable interrupts for a few instructions while it is updating variables or lists. If an interrupt occurred while the list of ready processes, for example, was in an inconsistent state, race conditions could occur.
• The conclusion is that disabling interrupts is often a useful technique within the OS itself, but it is not appropriate as a general mutual exclusion mechanism for user processes.
• Note: the Enable Interrupts (EI) and Disable Interrupts (DI) instructions allow the microprocessor to permit or deny interrupts under program control. For EI, interrupts are enabled following the completion of the next instruction after the EI. This allows at least one more instruction, perhaps a RET or JMP, to be executed before the microprocessor allows itself to be interrupted again.

4. Strict Alternation
• In this method, say there are two processes, Process 0 and Process 1, and we use a shared integer variable turn which is initially set to zero.
• The turn variable keeps track of whose turn it is to enter the critical region and provides mutually exclusive access to the critical region.
• This solution requires that the two processes strictly alternate (P0, P1, P0, and so on) in entering their critical regions.
• The code for processes P0 and P1 can then be written as:

    Process P0:
        while (TRUE)
        {
            while (turn != 0) { /* do nothing */ }
            critical_region();
            turn = 1;
            noncritical_region();
        }

    Process P1:
        while (TRUE)
        {
            while (turn != 1) { /* do nothing */ }
            critical_region();
            turn = 0;
            noncritical_region();
        }

• When process P0 sees that turn is 0, it enters its critical region. If process P1 sees that turn is 0, it has to busy wait until turn becomes 1.
• When process P0 leaves the critical region, it sets turn to 1, to allow process 1 to enter its critical region.
• By this method mutual exclusion is achieved, as each process can go from its non-critical region to the critical region only by checking the value of turn.
• Drawbacks
   > The problem with this approach is that it increases the processing overhead, and CPU usage is high, as it is based on the busy waiting technique.
   > Also, a process may have to busy wait for a long time.
   > This solution is not a good one when one process is much slower or faster than the other. This situation may violate the third rule for achieving a good solution to the critical section problem: "No process running outside its critical section may block other processes." Suppose that P0 is so fast that it enters its critical region, sets turn to 1, and comes back to its non-critical region. Now, if P0 again wants to enter the critical section, it is not allowed to do so, because turn is 1 and process 1 is busy in its non-critical region.

5. Dekker's Algorithm
• Dekker's algorithm is the first known correct software solution to the mutual exclusion problem in concurrent programming.
• This algorithm combines the ideas of using a turn variable and two flag variables.
• The algorithm is applicable to only two processes at a time.
• Let us consider two processes, P0 and P1.
• If two processes attempt to enter the critical section at the same time, the algorithm allows only one process in, based on whose turn it is.
• This solution does not require strict alternation, i.e. initially a process can enter its critical section without accessing turn.
• If one process is already in the critical section, the other process will busy wait for the first process to exit. This is done by the use of two Boolean flags, wants_to_enter[0] and wants_to_enter[1], which indicate an intention to enter the critical section on the part of processes 0 and 1, respectively, and a shared variable turn (initialized to 0 or 1) that indicates who has priority between the two processes.

    Process P0:
        wants_to_enter[0] = true;                 /* claim access to the critical region */
        while (wants_to_enter[1] == true)         /* wait if the other process has also claimed it */
        {
            if (turn != 0)                        /* waiting for the critical region, but not our turn */
            {
                wants_to_enter[0] = false;        /* withdraw the claim */
                while (turn != 0)
                {
                    /* busy wait until our turn comes */
                }
                wants_to_enter[0] = true;         /* claim access again */
            }
        }
        /* critical section */                    /* enter the critical region */
        turn = 1;
        wants_to_enter[0] = false;                /* leave the critical region */

    Process P1:
        wants_to_enter[1] = true;
        while (wants_to_enter[0] == true)
        {
            if (turn != 1)
            {
                wants_to_enter[1] = false;
                while (turn != 1)
                {
                    /* busy wait until our turn comes */
                }
                wants_to_enter[1] = true;
            }
        }
        /* critical section */
        turn = 0;
        wants_to_enter[1] = false;
• In order to enter the critical section, process P0 sets the flag wants_to_enter[0] = true and waits if the other process is already busy in its critical region. If the other process has not set its flag wants_to_enter[1], then process P0 simply enters its critical region. But if both processes are willing to enter their critical sections, i.e. both have set wants_to_enter[i] = true, then the variable turn determines which process enters its critical section.

6. Peterson's Algorithm
• An established algorithm known as Peterson's algorithm provides a correct solution to the critical section problem.
• In this algorithm, two variables are defined: an array flag and a turn variable. Both flags are initially set to false, and turn can be set to either 0 or 1.
• It is quite similar to Dekker's algorithm, but the difference is that after setting our flag we immediately give away the turn; by waiting on the AND of two conditions, we avoid the need to clear and reset the turn flag.

    Process P0:
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1)
        {
            /* do nothing */
        }
        /* critical section */
        flag[0] = false;

    Process P1:
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0)
        {
            /* do nothing */
        }
        /* critical section */
        flag[1] = false;

• In order to enter the critical section, process P0 first sets flag[0] = true and assumes that it is the other process's turn to enter the critical section by setting turn = 1. The result of the while condition then decides which process enters the critical section.
• If flag[1] is false, process P0 is allowed to enter the critical section immediately.
• If flag[0] and flag[1] are both true, then the value of the turn variable decides which process enters the critical section.
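As written above, Peterson's algorithm assumes that reads and writes of flag and turn become visible in program order; on modern CPUs and compilers this is not guaranteed for plain variables, so the runnable sketch below (ours, not from the notes) uses C11 sequentially consistent atomics to preserve that assumption while two threads protect a shared counter:

    #include <stdio.h>
    #include <pthread.h>
    #include <stdatomic.h>

    atomic_int flag[2];                 /* flag[i] = process i wants to enter */
    atomic_int turn;                    /* whose turn it is to defer */
    long counter = 0;                   /* shared data protected by the algorithm */

    static void *worker(void *arg)
    {
        int me = *(int *)arg, other = 1 - me;
        for (int i = 0; i < 100000; i++) {
            atomic_store(&flag[me], 1);         /* claim interest */
            atomic_store(&turn, other);         /* immediately give away the turn */
            while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
                ;                               /* busy wait */
            counter++;                          /* critical section */
            atomic_store(&flag[me], 0);         /* leave the critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }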
Sleep and Wakeup
• The solutions based on busy waiting discussed so far suffer from two problems:
   i. They require busy waiting.
   ii. The priority inversion problem: suppose we have two processes, H and L, such that H is a high-priority process and L is a low-priority one. If L is in its critical region and H becomes ready to run, H begins busy waiting; but since L is never scheduled while H is running, L never gets the chance to leave its critical region, so H loops forever. This situation is called the priority inversion problem.
• Sleep and wakeup are IPC primitives that block processes, instead of wasting CPU time, when they are not allowed to enter their critical section.
• Sleep and wakeup are system calls that block a process instead of wasting CPU time when it is not allowed to enter its critical region.
• Sleep is a system call that causes the caller to block, that is, to be suspended until another process wakes it up.
• The wakeup call has one parameter: the process to be awakened.

Implementation of Sleep and Wakeup in the Producer-Consumer Problem (Bounded Buffer)
• In this problem, two processes share a common, fixed-size buffer.
• One of them, the producer, puts information into the buffer, and the other one, the consumer, takes it out.
• Problems arise when:
   1. The producer works faster than the consumer: the producer wants to put new data in the buffer, but the buffer is full. Solution: the producer goes to sleep, to be awakened when the consumer has removed data.
   2. The consumer works faster than the producer: the consumer wants to remove data from the buffer, but the buffer is empty. Solution: the consumer goes to sleep until the producer puts some data in the buffer and wakes the consumer up.
• We can solve this problem using the sleep and wakeup system calls as given below:

    #define N 100                               /* number of slots in the buffer */
    int count = 0;                              /* number of items in the buffer */

    void producer(void)
    {
        int item;
        while (TRUE) {                          /* repeat forever */
            item = produce_item();              /* generate next item */
            if (count == N) sleep();            /* if buffer is full, go to sleep */
            insert_item(item);                  /* put item in buffer */
            count = count + 1;                  /* increment count of items in buffer */
            if (count == 1) wakeup(consumer);   /* was buffer empty? */
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {                          /* repeat forever */
            if (count == 0) sleep();            /* if buffer is empty, go to sleep */
            item = remove_item();               /* take item out of buffer */
            count = count - 1;                  /* decrement count of items in buffer */
            if (count == N - 1) wakeup(producer); /* was buffer full? */
            consume_item(item);                 /* print item */
        }
    }

    Fig: The producer-consumer problem with a fatal race condition

In the above solution:
• N = size of the buffer.
• count = a variable to keep track of the number of items in the buffer.

Producer's code:
• The producer's code first tests to see whether count is N. If it is, the producer goes to sleep; if it is not, the producer adds an item and increments count.
• Also, if the buffer now has exactly one item, the producer wakes up the consumer.

Consumer's code:
• It is similar to that of the producer.
• It first tests count to see whether it is 0. If it is, it goes to sleep.
• If it is non-zero, it removes an item and decrements the counter.
• Also, if there is now at least one free space in the buffer, the consumer wakes up the producer.

Problem with this solution
• This solution is subject to a fatal race condition. Let the buffer be empty and the consumer has just read count to see whether it is 0. At that instant, the scheduler decides to stop running the consumer temporarily and starts running the producer. The producer creates an item, puts it into the buffer, and increments count. Because the buffer was empty prior to the last addition (count was just 0), the producer tries to wake up the consumer.
• Unfortunately, the consumer is not yet logically asleep, so the wakeup signal is lost.
• When the consumer next runs, it tests the value of count it previously read, finds it to be 0, and goes to sleep.
• Sooner or later the producer will fill up the buffer and also go to sleep. Both will sleep forever.

Semaphore
• A semaphore S is a synchronization tool: an integer variable that constitutes a mechanism to resolve resource conflicts when several processes access a common resource in a concurrent processing environment. It thus allows mutually exclusive access to a critical section.
• A semaphore is an integer variable that can have the value 0, to indicate that no wakeups were saved, or some positive value if one or more wakeups are pending.
• After being initialized, a semaphore variable can be accessed only through two atomic operations: Wait (or P), to test, and Signal (or V), to release.
• If S is the semaphore variable, the operations performed on it using P and V are:

    P(S) or Wait(S):                    V(S) or Signal(S):
        while (S <= 0)                      S = S + 1;
            ;   /* do nothing */
        S = S - 1;

• The down operation on a semaphore checks to see whether the value is greater than 0; if so, it decrements the value. But if the value is 0, the process is put to sleep without completing the down for the moment. The up operation increments the value of the semaphore addressed. If one or more processes were sleeping on that semaphore, unable to complete an earlier down operation, one of them is chosen by the system (e.g. FIFO, random) and is allowed to complete its down.
• The structure below defines the basic semaphore operations:

    struct semaphore {
        int count;
        queueType queue;
    };

    void semWait(semaphore s)
    {
        s.count--;
        if (s.count < 0) {
            /* place this process in s.queue */
            /* block this process */
        }
    }

    void semSignal(semaphore s)
    {
        s.count++;
        if (s.count <= 0) {
            /* remove a process P from s.queue */
            /* place process P on the ready list */
        }
    }

• Characteristics of semaphores:
   1. Semaphores are like integer variables except that they take no negative value.
   2. Only the P and V operations are possible on a semaphore.
   3. The operations P and V are atomic.
• There are two types of semaphore:

1. Binary Semaphore
• A semaphore that is initialized to one and used by two or more processes to ensure that only one of them enters its critical region at the same time is called a binary semaphore.
• They are used to acquire locks.
• Binary semaphores have two methods associated with them: up and down (lock, unlock).
• Binary semaphores can take only two values (0/1).
• When a resource is available, the process in charge sets the semaphore to 1, else to 0.
2. Counting Semaphore
• A counting semaphore may have a value greater than one.
• Typically, it is used to allocate resources from a pool of identical resources.

Solving the Producer-Consumer Problem Using Semaphores
The solution to this problem is obtained by using three semaphore variables. We assume a buffer with N slots.
1. A binary semaphore 'mutex' provides mutual exclusion, so that the producer and consumer do not access the buffer at the same time; it is initialized to 1. To achieve mutual exclusion, the semaphore is first initialized to 1, and P() is called before the critical section and V() after the critical section.
2. A counting semaphore 'empty' counts the number of slots that are empty.
3. A counting semaphore 'full' counts the number of slots that are full.
• We can solve this problem using semaphores as given below:

    #define N 100                       /* number of slots in the buffer */
    typedef int semaphore;              /* semaphores are a special kind of int */
    semaphore mutex = 1;                /* controls access to critical region */
    semaphore empty = N;                /* counts empty buffer slots */
    semaphore full = 0;                 /* counts full buffer slots */

    void producer(void)
    {
        int item;
        while (TRUE) {                  /* TRUE is the constant 1 */
            item = produce_item();      /* generate something to put in buffer */
            down(&empty);               /* decrement empty count */
            down(&mutex);               /* enter critical region */
            insert_item(item);          /* put new item in buffer */
            up(&mutex);                 /* leave critical region */
            up(&full);                  /* increment count of full slots */
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {                  /* infinite loop */
            down(&full);                /* decrement full count */
            down(&mutex);               /* enter critical region */
            item = remove_item();       /* take item from buffer */
            up(&mutex);                 /* leave critical region */
            up(&empty);                 /* increment count of empty slots */
            consume_item(item);         /* do something with the item */
        }
    }
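On POSIX systems the abstract down/up (P/V) operations correspond to real calls; the following is a minimal sketch (ours, assuming Linux-style unnamed semaphores shared between threads) of a binary semaphore protecting a shared counter:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t mutex;                        /* binary semaphore guarding the shared counter */
    int shared = 0;

    void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);           /* P / down: blocks if the value is 0 */
            shared++;                   /* critical section */
            sem_post(&mutex);           /* V / up: wakes one waiter, if any */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);         /* 0 = shared between threads, initial value 1 */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);   /* always 200000 with the semaphore in place */
        sem_destroy(&mutex);
        return 0;
    }

Unlike the race-condition example earlier, here the result is deterministic, which is precisely what the P/V discipline buys us.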
Monitor
• A semaphore is a good solution for IPC, but sometimes a situation called deadlock may arise, in which both processes (producer and consumer) stay blocked forever and no more work is ever done.
• This problem arises if, in the producer's code, the semaphore mutex is decremented before the semaphore empty.
• If the buffer were completely full, the producer would block, with mutex set to zero.
• Consequently, the next time the consumer tried to access the buffer, it would do a down on mutex, now 0, and block too.
• So both stay blocked forever and no more work is ever done. This unfavourable situation is called deadlock.
• This scenario clearly shows how careful we must be when using semaphores.
• A monitor is a high-level synchronization/abstraction primitive that combines:
   i. Shared data
   ii. Operations on the data
   iii. Synchronization with condition variables
• Programming languages such as Pascal and Java provide monitors. The general syntax of a monitor is shown below:

    monitor example
        integer i;
        condition c;

        procedure producer(x);
            ...
        end;

        procedure consumer(x);
            ...
        end;
    end monitor;

• When a process calls a monitor procedure, the first few instructions of the procedure check to see whether any other process is currently active within the monitor. If so, the calling process is suspended until the other process has left the monitor. If no other process is using the monitor, the calling process may enter.
• Processes may call the procedures in a monitor whenever they want to, but they cannot directly access the monitor's internal data structures from procedures declared outside the monitor.
• Similarly, a procedure defined within a monitor can access only those variables declared locally within the monitor.
• The monitor construct ensures that only one process at a time can be active within the monitor.
• At the time of variable declaration, we declare two variables of type condition. Condition variables are not counters; they are used to block processes when they cannot proceed.
• Two operations are permitted on a condition variable: wait and signal.
• Wait blocks the calling process and places it on a queue associated with that variable.
• The signal operation allows a waiting process to re-enter the monitor.
• We can solve the producer-consumer problem using a monitor as below:

    monitor ProducerConsumer
        condition full, empty;
        int count;

        procedure enter();
        {
            item = produce_item();               // generate something to put in buffer
            if (count == N) wait(full);          // if buffer is full, block
            put_item(item);                      // put item in buffer
            count = count + 1;                   // increment count of full slots
            if (count == 1) signal(empty);       // if buffer was empty, wake consumer
        }

        procedure remove();
        {
            if (count == 0) wait(empty);         // if buffer is empty, block
            item = remove_item();                // remove item from buffer
            count = count - 1;                   // decrement count of full slots
            if (count == N - 1) signal(full);    // if buffer was full, wake producer
            consume_item(item);                  // do something with the item
        }

        count = 0;
    end monitor;

    producer()
    {
        while (true)
        {
            ProducerConsumer.enter();            // call enter procedure in the monitor
        }
    }

    consumer()
    {
        while (true)
        {
            ProducerConsumer.remove();           // call remove procedure in the monitor
        }
    }

    Fig: A skeleton of the producer-consumer problem with a monitor

• This solution overcomes the problem of how careful we must be while using semaphores, and the lost wakeup signal while using sleep and wakeup.
• Say the producer, inside a monitor procedure, discovers that the buffer is full: it will be able to complete the wait operation without worrying about the possibility that the scheduler may switch to the consumer just before the wait completes, because of the automatic mutual exclusion on monitor procedures.
• Also, by making mutual exclusion of critical regions automatic, monitors make parallel programming much less error-prone than semaphores.

Message Passing
• When processes interact with one another, two fundamental requirements must be satisfied: synchronization and communication.
• Synchronization is needed in order to achieve mutual exclusion, and communication is needed in order to exchange information.
• Both these functions can be implemented using message passing.
• Message passing is what is actually implemented in distributed systems.
• It enables processes to communicate by exchanging messages and allows programs to be written using communication primitives such as send(destination, &message) and receive(source, &message).
• A process sends information, in the form of a message, to another process by executing a send() system call.
• A process receives information by executing a receive() system call.
• In order to have proper communication between two processes, they must have some level of synchronization.
• The receiver cannot receive a message until it has been sent by another process. If no message is available, the receiver should be blocked until one arrives.
Producer-Consumer Problem with Message Passing
Assumptions:
• In this solution, a total of N messages is used, analogous to the N slots in a shared-memory buffer.
• The consumer starts out by sending N empty messages to the producer.
• Whenever the producer has an item to give to the consumer, it takes an empty message and sends back a full one.
• Problems arise when:
   i. The producer works faster than the consumer: all the messages end up full, waiting for the consumer, and the producer blocks, waiting for an empty to come back.
   ii. The consumer works faster than the producer: all the messages are empties, waiting for the producer to fill them, and the consumer blocks, waiting for a full message.

Message Passing: Producer-Consumer Problem with N Messages

    #define N 100                           /* number of slots in the buffer */

    void producer(void)
    {
        int item;
        message m;                          /* message buffer */
        while (TRUE) {
            item = produce_item();          /* generate something to put in buffer */
            receive(consumer, &m);          /* wait for an empty to arrive */
            build_message(&m, item);        /* construct a message to send */
            send(consumer, &m);             /* send item to consumer */
        }
    }

    void consumer(void)
    {
        int item, i;
        message m;
        for (i = 0; i < N; i++)
            send(producer, &m);             /* send N empties */
        while (TRUE) {
            receive(producer, &m);          /* get message containing item */
            item = extract_item(&m);        /* extract item from message */
            send(producer, &m);             /* send back empty reply */
            consume_item(item);             /* do something with the item */
        }
    }

Classical IPC Problems
1. The Dining Philosophers Problem
2. The Readers and Writers Problem
3. The Sleeping Barber Problem

1. Dining Philosophers Problem
• The dining philosophers problem is useful for modelling processes that are competing for exclusive access to a limited number of resources, such as I/O devices.
• In the dining philosophers problem, five philosophers sit around a circular table, discussing philosophy and eating. In the centre of the table there is a plate of rice, and the table is laid with five chopsticks, as shown in the figure below.

Fig: The dining philosophers' table

• The problem is that each philosopher needs two chopsticks to eat, but there are only five, one between each pair of philosophers.
• The life of a philosopher consists of alternate periods of eating and thinking.
• Whenever a philosopher gets hungry, he tries to acquire his left and right chopsticks, one at a time, in either order. If he succeeds in acquiring both chopsticks, he eats for a while, then puts down the chopsticks and continues to think.
• The key questions are:
   i. What if all philosophers acquire their left chopstick and no right chopstick is available?
   ii. What if this keeps repeating, even after waiting for a short interval of time?
• This leads to situations called deadlock or starvation.
• This problem was designed to illustrate how to avoid deadlock and starvation when several processes are competing for exclusive access to a limited number of resources, such as I/O devices.
• One solution to this problem uses a binary semaphore to represent each chopstick.
• A philosopher tries to grab a chopstick by executing P (wait) on its semaphore; the chopstick is released by executing V (signal) on the semaphore.
• The structure is shown below.
    #define N 5
    var chopstick: array[0..4] of semaphore;    /* all elements of chopstick are initialized to 1 */

    void philosopher(int i)                     /* i: philosopher number, from 0 to 4 */
    {
        while (true) {
            P(chopstick[i]);                    /* take left chopstick */
            P(chopstick[(i + 1) MOD 5]);        /* take right chopstick */
            eat();
            V(chopstick[i]);                    /* release left chopstick */
            V(chopstick[(i + 1) MOD 5]);        /* release right chopstick */
            think();
        }
    }

• The above solution guarantees that no two neighbours are eating simultaneously. However, it is not 100% correct: suppose that all five philosophers take their left chopstick simultaneously. Now none of them can acquire their right chopstick, so there will be a deadlock.
• The deadlock problem can be solved by applying one of the following methods:
   i. Allow at most four philosophers to be sitting simultaneously at the table.
   ii. Allow a philosopher to pick up his chopsticks only if both of them are available.
   iii. Use an asymmetric solution: an odd-numbered philosopher picks up first his left chopstick and then his right chopstick, whereas an even-numbered philosopher picks up his right chopstick first and then his left.
• The solution given below is correct and also allows the maximum parallelism for an arbitrary number of philosophers.

    #define N 5                             /* number of philosophers */
    #define LEFT (i+N-1)%N                  /* number of i's left neighbor */
    #define RIGHT (i+1)%N                   /* number of i's right neighbor */
    #define THINKING 0                      /* philosopher is thinking */
    #define HUNGRY 1                        /* philosopher is trying to get forks */
    #define EATING 2                        /* philosopher is eating */
    typedef int semaphore;                  /* semaphores are a special kind of int */
    int state[N];                           /* array to keep track of everyone's state */
    semaphore mutex = 1;                    /* mutual exclusion for critical regions */
    semaphore s[N];                         /* one semaphore per philosopher */

    void philosopher(int i)                 /* i: philosopher number, from 0 to N-1 */
    {
        while (TRUE) {                      /* repeat forever */
            think();                        /* philosopher is thinking */
            take_forks(i);                  /* acquire two forks or block */
            eat();                          /* yum-yum, spaghetti */
            put_forks(i);                   /* put both forks back on table */
        }
    }

    void take_forks(int i)                  /* i: philosopher number, from 0 to N-1 */
    {
        down(&mutex);                       /* enter critical region */
        state[i] = HUNGRY;                  /* record fact that philosopher i is hungry */
        test(i);                            /* try to acquire 2 forks */
        up(&mutex);                         /* exit critical region */
        down(&s[i]);                        /* block if forks were not acquired */
    }

    void put_forks(int i)                   /* i: philosopher number, from 0 to N-1 */
    {
        down(&mutex);                       /* enter critical region */
        state[i] = THINKING;                /* philosopher has finished eating */
        test(LEFT);                         /* see if left neighbor can now eat */
        test(RIGHT);                        /* see if right neighbor can now eat */
        up(&mutex);                         /* exit critical region */
    }

    void test(int i)                        /* i: philosopher number, from 0 to N-1 */
    {
        if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING) {
            state[i] = EATING;
            up(&s[i]);
        }
    }

2. Readers/Writers Problem
• This problem is useful for modelling processes that are competing for access to a shared database. A data object is to be shared among several concurrent processes. Some of these processes may want only to read the content of the shared object, whereas others may want to update the shared object.
• We distinguish between these two types of processes by referring to those processes that are interested only in reading as readers, and to the rest as writers. Obviously, if two readers access the shared data object simultaneously, no adverse effects will result.
• However, if a writer and some other process access the shared data object simultaneously, problems may occur.
• Consider a big database, such as an airline reservation system, with many processes reading the database at the same time; but if one process is writing/modifying the database, no other process may have access to the database, not even a reader.
• One solution using semaphores is given below.

    typedef int semaphore;              /* use your imagination */
    semaphore mutex = 1;                /* controls access to 'rc' */
    semaphore db = 1;                   /* controls access to the database */
    int rc = 0;                         /* # of processes reading or wanting to */

    void reader(void)
    {
        while (TRUE) {                  /* repeat forever */
            down(&mutex);               /* get exclusive access to 'rc' */
            rc = rc + 1;                /* one reader more now */
            if (rc == 1) down(&db);     /* if this is the first reader ... */
            up(&mutex);                 /* release exclusive access to 'rc' */
            read_data_base();           /* access the data */
            down(&mutex);               /* get exclusive access to 'rc' */
            rc = rc - 1;                /* one reader fewer now */
            if (rc == 0) up(&db);       /* if this is the last reader ... */
            up(&mutex);                 /* release exclusive access to 'rc' */
            use_data_read();            /* noncritical region */
        }
    }

    void writer(void)
    {
        while (TRUE) {                  /* repeat forever */
            think_up_data();            /* noncritical region */
            down(&db);                  /* get exclusive access */
            write_data_base();          /* update the data */
            up(&db);                    /* release exclusive access */
        }
    }

    Fig: A solution to the readers and writers problem

• In this solution, the first reader to get access to the database does a down operation on the semaphore db. Subsequent readers merely increment a counter, rc. As readers leave, they decrement the counter, and the last one out does an up on the semaphore, allowing a blocked writer, if there is one, to get in.
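Many thread libraries package this readers-writers pattern directly. As a small sketch (our own, using the POSIX read-write lock rather than the semaphore construction above):

    #include <pthread.h>

    pthread_rwlock_t db_lock = PTHREAD_RWLOCK_INITIALIZER;
    int database_value = 0;                 /* stands in for the shared database */

    void *reader(void *arg)
    {
        pthread_rwlock_rdlock(&db_lock);    /* many readers may hold this at once */
        int v = database_value;             /* read the shared data */
        pthread_rwlock_unlock(&db_lock);
        (void)v;
        return NULL;
    }

    void *writer(void *arg)
    {
        pthread_rwlock_wrlock(&db_lock);    /* exclusive: no readers, no other writers */
        database_value++;                   /* update the shared data */
        pthread_rwlock_unlock(&db_lock);
        return NULL;
    }

The library internally keeps the same kind of reader count and exclusive gate that the semaphore solution builds by hand.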
3. Sleeping Barber Problem
• There is one barber in the barber shop, one barber chair, and n chairs for waiting customers.
• If there are no customers, the barber sits down in the barber chair and falls asleep.
• An arriving customer (the first customer) must wake the barber.
• Subsequent arriving customers take a waiting chair if any are empty, or leave if all chairs are full.
• The problem is to program the barber and the customers without getting into race conditions.
• We can solve this problem using three semaphores:
  - Semaphore customers: counts waiting customers.
  - Semaphore barbers: counts the number of barbers who are idle, waiting for customers.
  - Semaphore mutex: used for mutual exclusion.
• We also need a variable waiting, which counts the number of waiting customers.
• Using these, one solution is shown in the figure below.

#define CHAIRS 5                     /* # chairs for waiting customers */
typedef int semaphore;               /* use your imagination */
semaphore customers = 0;             /* # of customers waiting for service */
semaphore barbers = 0;               /* # of barbers waiting for customers */
semaphore mutex = 1;                 /* for mutual exclusion */
int waiting = 0;                     /* customers are waiting (not being cut) */

void barber(void)
{
    while (TRUE) {
        down(&customers);            /* go to sleep if # of customers is 0 */
        down(&mutex);                /* acquire access to 'waiting' */
        waiting = waiting - 1;       /* decrement count of waiting customers */
        up(&barbers);                /* one barber is now ready to cut hair */
        up(&mutex);                  /* release 'waiting' */
        cut_hair();                  /* cut hair (outside critical region) */
    }
}

void customer(void)
{
    down(&mutex);                    /* enter critical region */
    if (waiting < CHAIRS) {          /* if there are no free chairs, leave */
        waiting = waiting + 1;       /* increment count of waiting customers */
        up(&customers);              /* wake up barber if necessary */
        up(&mutex);                  /* release access to 'waiting' */
        down(&barbers);              /* go to sleep if # of free barbers is 0 */
        get_haircut();               /* be seated and be serviced */
    } else {
        up(&mutex);                  /* shop is full; do not wait */
    }
}

Fig: Solution to the sleeping barber problem

In the above solution, when the barber arrives in the morning he executes the procedure barber, causing him to block on the semaphore customers until somebody arrives; he then goes to sleep. When the first customer arrives, he executes the procedure customer, starting by acquiring mutex to enter the critical region. If another customer enters shortly thereafter, the second one cannot do anything until the first has released mutex. The customer then checks whether the number of waiting customers is less than the number of chairs. If not, he releases mutex and leaves without a haircut. If a chair is available, the customer increments the integer variable waiting and then does an up on the semaphore customers, thereby waking the barber. At this point both the customer and the barber are awake. When the customer releases mutex, the barber grabs it and begins cutting hair. When the haircut is over, the customer exits the procedure and leaves the shop.

Analyze this statement: "You can't get a job without experience; you can't get experience without a job."

Deadlock
• A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the same set can cause.
• Because all the processes are waiting, none of them will ever cause any of the events that could wake up any other member of the set, and all the processes continue to wait forever.
• Example:
Fig: Resource allocation graph
Let us consider a system consisting of two processes (P1 and P2) and two resources (R1 and R2). If both processes need both resources to complete their task, for example reading from a scanner (R1) and printing on a printer (R2), then the two processes will enter a state called deadlock and wait forever if:
  i.  Process 1 requests Resource 1 and is granted it.
  ii. Process 2 requests Resource 2 and is granted it.
  iii. Process 1 requests Resource 2 and is queued up, pending the release of Resource 2 by Process 2.
  iv. Process 2 requests Resource 1 and is queued up, pending the release of Resource 1 by Process 1.
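The four-step scenario above can be reproduced with a short program. Below is a minimal sketch assuming POSIX threads, in which two mutexes stand in for R1 (the scanner) and R2 (the printer); the names and the sleep() calls are illustrative, the latter only making the unlucky interleaving likely. Each thread acquires the two resources in the opposite order, so with this timing both end up blocked forever.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* R1: scanner */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* R2: printer */

void *process1(void *arg)
{
    pthread_mutex_lock(&r1);          /* step i:   P1 gets R1 */
    sleep(1);                         /* give P2 time to grab R2 */
    pthread_mutex_lock(&r2);          /* step iii: P1 waits for R2 ... forever */
    printf("P1 finished\n");
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *process2(void *arg)
{
    pthread_mutex_lock(&r2);          /* step ii:  P2 gets R2 */
    sleep(1);                         /* give P1 time to grab R1 */
    pthread_mutex_lock(&r1);          /* step iv:  P2 waits for R1 ... forever */
    printf("P2 finished\n");
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);           /* never returns: the two threads deadlock */
    pthread_join(t2, NULL);
    return 0;
}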
Preemptable and Non-preemptable Resources
• A preemptable resource is one that can be taken away from the process holding it without causing any ill effect.
• A non-preemptable resource is one that cannot be taken away from its current owner.
• Memory is an example of a preemptable resource. Suppose a system has 512K of memory and one printer, and two processes of 512K each want to print something. Process 1 may request the printer and the memory and start printing; if Process 1's time quantum expires before the printing is complete, Process 1 may be swapped out of memory and a new process, Process 2, swapped in.
• The printer is an example of a non-preemptable resource. If a process has already begun to print output, taking the printer away from it and giving it to another process will produce garbled or wrong output.

Conditions for Deadlock
For a given set of processes, a deadlock can occur only if all of the following four conditions hold simultaneously:
  1. Mutual exclusion: only one process at a time can use a resource.
  2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
  3. No preemption: resources are released only voluntarily by the process holding them, after that process has finished with them.
  4. Circular wait: there exists a set {P1, P2, ..., Pn} of waiting processes such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3, ..., and Pn is waiting for a resource held by P1.
All four of these conditions must be present for a deadlock to occur. If one or more of them is absent, no deadlock is possible.

Deadlock Modelling
• The four conditions above can be modelled using a graph called a resource allocation graph (a directed graph).
• In this graph, circles represent processes and squares represent resources.
• An arc from a square (resource) to a circle (process) means that the resource was previously requested by, granted to, and is currently held by that process.
• An arc from a circle (process) to a square (resource) means that the process is currently requesting, and waiting for, that resource.

Fig: Resource allocation graph notation

• If there is no cycle in the graph, then there is no deadlock. If there is a cycle, then:
  i. if each resource type has only one instance, a deadlock has occurred.