Q.1. Explain Process, PCB and Process State Diagram. Ans.
Process
In computing, a process is an instance of a computer program, consisting of one or more threads, that is being executed by a computer system that has the ability to run several computer programs concurrently.
Process Control Block (PCB)
Each process is represented in the operating system by a process control block (PCB), a data structure that holds the information associated with the process: its state, program counter, CPU register contents, CPU-scheduling information, memory-management information, accounting information and I/O status. The PCB is what the operating system saves and restores when it switches the CPU from one process to another.
Process states
A simple process state diagram shows three possible states for a process: ready (the process is ready to execute when a processor becomes available), running (the process is currently being executed by a processor) and blocked (the process is waiting for a specific event to occur before it can proceed). The lines connecting the states represent possible transitions from one state to another. At any instant, a process will exist in exactly one of these three states. On a single-processor computer, only one process can be in the running state at any one time. The remaining processes will be either ready or blocked, and for each of these states there will be a queue of processes waiting for some event.
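The three-state model above can be captured as a tiny state machine. The sketch below is only an illustration: the Process class and transition table are invented for this example, not a real OS data structure; the four transitions are the ones named in the text.

```python
# A minimal sketch of the three-state process model described above.
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"

# Legal transitions from the three-state diagram:
# ready -> running (dispatch), running -> ready (preemption),
# running -> blocked (wait for event), blocked -> ready (event occurs).
TRANSITIONS = {
    (State.READY, State.RUNNING),
    (State.RUNNING, State.READY),
    (State.RUNNING, State.BLOCKED),
    (State.BLOCKED, State.READY),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = State.READY  # new processes start in the ready queue

    def move_to(self, new_state):
        if (self.state, new_state) not in TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
p.move_to(State.RUNNING)   # dispatched
p.move_to(State.BLOCKED)   # waits for I/O
p.move_to(State.READY)     # I/O completes
```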
Ans. Interrupt
In computing, an interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven.
An act of interrupting is referred to as an interrupt request (IRQ). The part of a program (usually firmware, driver or operating system service) that deals with the interrupt is referred to as an interrupt service routine (ISR) or interrupt handler. Interrupts can be categorized into: maskable interrupt, non-maskable interrupt (NMI), inter-processor interrupt (IPI), software interrupt, and spurious interrupt.
Maskable interrupt (IRQ) is a hardware interrupt that may be ignored by setting a bit in the interrupt mask register (IMR). Non-maskable interrupt (NMI) is a hardware interrupt that lacks an associated bit-mask, so that it can never be ignored. NMIs are often used for timers, especially watchdog timers. Inter-processor interrupt (IPI) is a special case of interrupt that is generated by one processor to interrupt another processor in a multiprocessor system. Software interrupt is an interrupt generated within a processor by executing an instruction. Software interrupts are often used to implement system calls because they implement a subroutine call with a CPU ring level change. Spurious interrupt is a hardware interrupt that is unwanted. Spurious interrupts are typically generated by system conditions such as electrical interference on an interrupt line or through incorrectly designed hardware.
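As a rough, user-space analogy to the interrupt mechanism above, POSIX signals let a process register a handler that the kernel invokes asynchronously, much as a CPU saves state and runs an ISR. A minimal sketch (the handler and timing loop are invented for illustration):

```python
# The kernel asynchronously interrupts the process and runs the
# registered handler, loosely analogous to a hardware ISR.
import signal
import time

def handler(signum, frame):
    # This plays the role of the interrupt service routine (ISR).
    print(f"caught signal {signum}, handling and resuming")

signal.signal(signal.SIGINT, handler)  # register the "ISR" for Ctrl-C

print("running; press Ctrl-C to raise the 'interrupt'")
for _ in range(5):
    time.sleep(1)  # interrupted work resumes after the handler returns
```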
Q.4. What is a file system?
Ans. File System
A file system (filesystem) is a means to organize data expected to be retained after a program terminates, by providing procedures to store, retrieve and update data, as well as to manage the available space on the device(s) which contain it. A file system organizes data in an efficient manner and is tuned to the specific characteristics of the device. There is usually a tight coupling between the operating system and the file system.
Some filesystems provide mechanisms to control access to the data and metadata. Ensuring reliability is a major responsibility of a filesystem. Some filesystems provide a means for multiple programs to update data in the same file at nearly the same time. Without a filesystem, programs would not be able to access data by file name or directory and would need to be able to directly access data regions on a storage device.
File systems are used on data storage devices such as magnetic disks or optical discs to maintain the physical location of the computer files. They may provide access to data on a file server by acting as clients for a network protocol (e.g., NFS, SMB, or 9P clients), or they may be virtual and exist only as an access method for virtual data (e.g., procfs). This is distinguished from a directory service and registry.
Q.5. Explain the concept of fragmentation.
Ans. Fragmentation
(1) Refers to the condition of a disk in which files are divided into pieces scattered around the disk. Fragmentation occurs naturally when you use a disk frequently, creating, deleting, and modifying files. At some point, the operating system needs to store parts of a file in noncontiguous clusters. This is entirely invisible to users, but it can slow down the speed at which data is accessed, because the disk drive must search through different parts of the disk to put together a single file. In DOS 6.0 and later systems, you can defragment a disk with the DEFRAG command. You can also buy software utilities, called disk optimizers or defragmenters, that defragment a disk.
(2) Fragmentation can also refer to RAM that has small, unused holes scattered throughout it. This is called external fragmentation. With modern operating systems that use a paging scheme, a more common type of RAM fragmentation is internal fragmentation. This occurs when memory is allocated in frames and the frame size is larger than the amount of memory requested. For example, if memory is allocated in 4 KB frames, a request for 9 KB receives three frames (12 KB), and the 3 KB left over in the last frame is internal fragmentation.
Q.6. Explain the working of an acyclic graph.
Ans. Acyclic graph
An acyclic graph is a graph that contains no cycles. In operating systems it appears chiefly in the acyclic-graph directory structure, a generalization of the tree-structured directory that allows directories to share subdirectories and files: the same file or subdirectory may appear in two different directories (in UNIX, for example, through links). Because the graph contains no cycles, a traversal of the directory structure always terminates; the system must, however, ensure that a shared file is not deleted while references to it remain, typically by keeping a count of the links to it.
Q.7. What is a page fault? What steps are taken when a page fault occurs? Explain with suitable examples.
Ans. Page Fault
A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space but not loaded in physical memory. In the typical case, the operating system tries to handle the page fault by making the required page accessible at a location in physical memory, or kills the program in the case of an illegal access. The hardware that detects a page fault is the memory management unit in a processor. The exception handling software that handles the page fault is generally part of the operating system.
When a page fault occurs, the hardware cannot do anything else with the instruction that caused it, so it must transfer control to an operating system routine, the page fault handler. The page fault handler must then decide how to handle the fault. It can do one of two things:
1. It can decide the virtual address is simply not valid. In this case, Windows reports the error back by indicating that an exception has occurred (typically STATUS_ACCESS_VIOLATION).
2. It can decide the virtual address is valid. In this case, Windows finds an available physical page, places the correct data in that page, updates the virtual-to-physical page translation mechanism, and then tells the hardware to retry the operation. When the hardware retries the operation, it finds the page translation and continues as if nothing had happened.
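The two-way decision above can be illustrated with a small sketch. The page table, set of valid pages, frame pool and exception class below are toy stand-ins invented for this example, not a real MMU or Windows interface:

```python
# A minimal sketch of the page fault handler's two-way decision.
PAGE_TABLE = {}          # virtual page -> physical frame (resident pages)
VALID_PAGES = {0, 1, 2}  # pages that belong to the process's address space
FREE_FRAMES = [10, 11, 12]

class AccessViolation(Exception):
    """Corresponds to STATUS_ACCESS_VIOLATION in the text."""

def handle_page_fault(page):
    if page not in VALID_PAGES:
        # Case 1: the virtual address is simply not valid.
        raise AccessViolation(f"page {page} is not in the address space")
    # Case 2: valid address -- find a frame, load the data, map it,
    # then let the hardware retry the faulting instruction.
    frame = FREE_FRAMES.pop()          # find an available physical page
    print(f"loading page {page} from disk into frame {frame}")
    PAGE_TABLE[page] = frame           # update the translation mechanism

def access(page):
    if page not in PAGE_TABLE:         # hardware raises the fault
        handle_page_fault(page)
    return PAGE_TABLE[page]            # retried access now succeeds

print(access(1))   # faults once, then resolves
print(access(1))   # already resident, no fault
```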
Q.8. What is thrashing?
Ans. Thrashing
Thrashing is computer activity that makes little or no progress, usually because memory or other resources have become exhausted or too limited to perform needed operations. When this happens, a pattern typically develops in which a request is made of the operating system by a process or program, and the operating system tries to find resources by taking them from some other process, which in turn makes new requests that cannot be satisfied. In a virtual storage system (an operating system that manages its logical storage or memory in units called pages), thrashing is a condition in which excessive paging operations are taking place.
Q.9. What are threats? Explain their types.
Ans. In order to design a security system, it is important to know what the potential threats are so that appropriate counter-measures can be taken. It is impossible to predict every potential threat that may exist to a system, but it is also far too expensive and impractical to plan a security system that will protect against every conceivable threat. It is an axiom of computer security that the only completely secure computer is the one that has never been turned on. The kinds of threats that exist for systems vary, depending on the type of system that is deployed. I am most interested in focusing on systems that exchange information with other systems outside of their administrative domain, that are accessible from multiple physical sites, and from multiple access points.
Types of Threats
Worms
This malicious program category largely exploits operating system vulnerabilities to spread itself. The class was named for the way the worms crawl from computer to computer, using networks and e-mail. This feature gives many worms a rather high speed in spreading themselves.
Viruses
Programs that infect other programs, adding their own code to them in order to gain control of the infected files when they are opened. This simple definition explains the fundamental action performed by a virus: infection.
Trojans
Programs that carry out unauthorized actions on computers, such as deleting information on drives, making the system hang, or stealing confidential information. This class of malicious program is not a virus in the traditional sense of the word (meaning it does not infect other computers or data). Trojans cannot break into computers on their own and are spread by hackers, who disguise them as regular software. The damage they cause can exceed that done by traditional virus attacks severalfold.
Spyware
Software that collects information about a particular user or organization without their knowledge. You might never guess that you have spyware installed on your computer.
Riskware
Potentially dangerous applications: software that has no malicious features itself but could form part of the development environment for malicious programs, or could be used by hackers as auxiliary components for malicious programs.
Rootkits
Utilities used to conceal malicious activity. They mask malicious programs to keep anti-virus programs from detecting them. Rootkits modify the operating system on the computer and alter its basic functions to hide their own existence and the actions that the hacker undertakes on the infected computer.
The set of all logical addresses generated by a program is known as the Logical Address Space, whereas the set of all physical addresses corresponding to these logical addresses is the Physical Address Space. The run-time mapping from virtual address to physical address is done by a hardware device known as the Memory Management Unit. In this mapping, the base register is known as the relocation register. The value in the relocation register is added to the address generated by a user process at the time it is sent to memory. Let's understand this situation with the help of an example: if the base register contains the value 1000, then an attempt by the user to address location 0 is dynamically relocated to location 1000, and an access to location 346 is mapped to location 1346.
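A minimal sketch of this dynamic relocation follows. The translate() helper is invented for illustration, and the limit register is an assumption added for the usual bounds check (the text mentions only the relocation register):

```python
# Relocation-register mapping as described above.
RELOCATION_REGISTER = 1000   # base value from the example in the text
LIMIT_REGISTER = 5000        # assumed size of the logical address space

def translate(logical_address):
    if not 0 <= logical_address < LIMIT_REGISTER:
        raise MemoryError(f"logical address {logical_address} out of range")
    # The MMU adds the relocation register to every address the
    # process generates before it is sent to memory.
    return logical_address + RELOCATION_REGISTER

print(translate(0))    # -> 1000, as in the example
print(translate(346))  # -> 1346
```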
Part-B
Q.1. What is an operating system? What are the different services provided by an operating system? Ans.
An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into the computer by a boot program, manages all the other programs in a computer. The other programs are called applications or application programs. The application programs make use of the operating system by making requests for services through a defined application program interface (API). In addition, users can interact directly with the operating system through a user interface such as a command language or a graphical user interface (GUI). An operating system performs these services for applications:
In a multitasking operating system, where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn.
It manages the sharing of internal memory among multiple applications.
It handles input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.
It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.
It can offload the management of what are called batch jobs (for example, printing) so that the initiating application is freed from this work.
On computers that can provide parallel processing, an operating system can manage how to divide the program so that it runs on more than one processor at a time.
All major computer platforms (hardware and software) require and sometimes include an operating system. Linux, Windows 2000, VMS, OS/400, AIX, and z/OS are all examples of operating systems.
Services Provided by the OS
An operating system (OS) provides two main kinds of service: file management and a user interface to the hardware system.
The following are five services provided by an operating system for the convenience of its users.
Program Execution
The purpose of a computer system is to allow the user to execute programs. So the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation, multitasking or anything of the kind; these things are taken care of by the operating system. Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiprocessing. These functions cannot be given to user-level programs, so user-level programs cannot help the user to run programs independently without help from the operating system.
I/O Operations
Each program requires input and produces output. This involves the use of I/O. The operating system hides from the user the details of the underlying hardware that performs the I/O; all the user sees is that the I/O has been performed, without any of the details. So, by providing I/O services, the operating system makes it convenient for users to run programs. For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.
File System Manipulation
The output of a program may need to be written into new files, or input taken from some files. The operating system provides this service, so the user does not have to worry about secondary storage management. The user gives a command for reading from or writing to a file and sees the task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks. This service involves secondary storage management. The speed of I/O, which depends on secondary storage management, is critical to the speed of many programs, and hence it is best left to the operating system to manage rather than giving individual users control of it. It is not difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system.
Communications
There are instances where processes need to communicate with each other to exchange information, whether between processes running on the same computer or on different computers. By providing this service the operating system relieves the user of the worry of passing messages between processes. Where messages need to be passed to processes on other computers through a network, this could in principle be done by user programs, but each user program would have to be customized to the specifics of the hardware through which the message transits; instead, the operating system provides the service interface.
Error Detection
An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system in order to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions. This service cannot be allowed to be handled by user programs, because it involves monitoring, and in some cases altering, areas of memory, deallocating the memory of a faulty process, or perhaps relinquishing the CPU from a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs. A user program, if given these privileges, could interfere with the correct (normal) operation of the operating system.
Q.2. What is device management? Explain disk-scheduling techniques.
Ans. Device Management
Security devices are increasingly deployed throughout enterprise networks, rather than just at the perimeter. It is more than a full-time job for the security team of any enterprise to map organizational security policies to the detailed configuration of those devices, and to ensure that the configurations remain correct while needs evolve. BT customers who subscribe to its Managed Security Monitoring have the option of outsourcing all aspects of IDS, IPS, and firewall management. BT's Device Management service allows limited staff resources more time to focus on defining and executing strategic vision without getting caught in the myriad technical details of a particular vendor's product. Device Management is about implementing configurations in the best interests of the customer, proactively, so that devices always provide maximum protection and surveillance. That is why BT's SLA offers unlimited changes to devices when they are initiated by BT directly, including new signatures and updates from the vendor, and configuration changes BT recommends based on observations from hundreds of networks and thousands of devices around the world.
Disk Scheduling
The processes running on a machine may have multiple outstanding requests for data from the disk. In what order should the requests be served?
First-Come-First-Served (FCFS): this is how Nachos works right now. As processes arrive, they queue up for the disk and get their requests served in order; in the current version of Nachos, the queuing happens at the mutex lock. What is wrong with FCFS? It may make long swings from one part of the disk to another, when it would make sense to service outstanding requests from adjacent parts of the disk sequentially.
Shortest-Seek-Time-First (SSTF): the disk scheduler looks at all outstanding disk requests and services the one closest to where the disk head currently is, rather like Shortest-Job-First task scheduling. What is the problem with SSTF? Starvation: a request for a remote part of the disk may never get serviced. (A comparison of the two policies is sketched below.)
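A hedged sketch comparing total head movement under the two policies; the request queue and starting head position are invented example data:

```python
# Compare total head movement under FCFS and SSTF disk scheduling.
def fcfs(start, requests):
    total, pos = 0, start
    for r in requests:                 # serve strictly in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf(start, requests):
    total, pos, pending = 0, start, list(requests)
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))  # closest request
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # cylinder numbers (example)
head = 53
print("FCFS head movement:", fcfs(head, queue))  # 640 cylinders
print("SSTF head movement:", sstf(head, queue))  # 236 cylinders
```

As the output shows, SSTF greatly reduces seek distance on this queue, at the risk of starving remote requests.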
Q.3. What is demand paging, and how is the page fault technique handled?
Ans. Demand paging
In computer operating systems, demand paging (as opposed to anticipatory paging) is an application of virtual memory. In a system that uses demand paging, the operating system copies a disk page into physical memory only if an attempt is made to access it (i.e., if a page fault occurs). It follows that a process begins execution with none of its pages in physical memory, and many page faults will occur until most of the process's working set of pages is located in physical memory. This is an example of a lazy loading technique.
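As a rough illustration, the toy simulation below loads pages only on first reference and evicts the oldest resident page (FIFO) when no frame is free. The reference string and frame count are invented example data, and FIFO is just one possible replacement policy:

```python
# A toy demand-paging simulation: pages are loaded only when first
# referenced, and a FIFO victim is chosen when the frames are full.
from collections import deque

def simulate(reference_string, num_frames):
    frames = deque()   # resident pages, oldest first
    faults = 0
    for page in reference_string:
        if page not in frames:         # page fault: not in physical memory
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # evict the oldest page (FIFO)
            frames.append(page)        # "read the page in from disk"
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(simulate(refs, 3), "page faults with 3 frames")   # -> 10
```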
Page fault
A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space but not loaded in physical memory. In the typical case, the operating system tries to handle the page fault by making the required page accessible at a location in physical memory, or kills the program in the case of an illegal access. The hardware that detects a page fault is the memory management unit in a processor. The exception handling software that handles the page fault is generally part of the operating system. Contrary to what the name "page fault" might suggest, page faults are not errors; they are common and necessary to increase the amount of memory available to programs in any operating system that utilizes virtual memory, including Microsoft Windows, Unix-like systems (including Mac OS X, Linux, *BSD, Solaris, AIX, and HP-UX), and z/OS. Microsoft uses the term "hard fault" in more recent versions of the Resource Monitor (e.g., Windows Vista) to mean "page fault".
Minor page fault
If the page is loaded in memory at the time the fault is generated, but is not marked in the memory management unit as being loaded in memory, then it is called a minor or soft page fault. The page fault handler in the operating system merely needs to make the entry for that page in the memory management unit point to the page in memory and indicate that the page is loaded in memory; it does not need to read the page into memory. This can happen if the memory is shared by different programs and the page has already been brought into memory for another program. The page could also have been removed from a process's working set, but not yet written to disk or erased, as in operating systems that use secondary page caching. For example, HP OpenVMS may remove a page that does not need to be written to disk (if it has remained unchanged since it was last read from disk, for example) and place it on a free page list if the working set is deemed too large. However, the page contents are not overwritten until the page is assigned elsewhere, meaning it is still available if it is referenced by the original process before being allocated. Since these faults do not involve disk latency, they are faster and less expensive than major page faults.
Major page fault
If the page is not loaded in memory at the time the fault is generated, then it is called a major or hard page fault. The page fault handler in the operating system needs to find a free frame in memory, or choose a resident page to be replaced (writing out its data if it has been modified since it was last written out, and marking it as no longer loaded in memory), read the data for the faulting page into the frame, and then make the entry for that page in the memory management unit point to the frame and indicate that the page is loaded in memory. Major faults are more expensive than minor page faults and add disk latency to the interrupted program's execution. This is the mechanism an operating system uses to increase the amount of program memory available on demand: it delays loading parts of the program from disk until the program attempts to use them and the page fault is generated.
Invalid page fault
If a page fault occurs for a reference to an address that is not part of the virtual address space, so that there cannot be a page in memory corresponding to it, then it is called an invalid page fault. The page fault handler in the operating system then needs to terminate the code that made the reference, or deliver an indication to that code that the reference was invalid. A null pointer is usually represented as a pointer to address 0 in the address space; many operating systems set up the memory management unit to indicate that the page containing that address is not in memory, and do not include that page in the virtual address space, so that attempts to read or write the memory referenced by a null pointer get an invalid page fault.
Q.4. Explain different types of CPU scheduling techniques.
Ans. CPU Scheduling
In mono-tasking operating systems the issue of scheduling is trivial: after the system has set up the execution environment of a process, CPU control is given to it until the process itself exits. In a sense, the system is not operating at all during the program's execution, save for providing services through subroutine calls. It is only with multi-tasking operating systems that scheduling becomes a top entry in a designer's agenda. In many multitasking systems the processor scheduling subsystem operates on three levels, differentiated by the time scale at which they perform their operations. In this sense we differentiate among:
Long-term scheduling: determines which programs are admitted to the system for execution and when, and which ones should be exited.
Medium-term scheduling: determines when processes are to be suspended and resumed.
Short-term scheduling (or dispatching): determines which of the ready processes can have CPU resources, and for how long.
Taking into account the states of a process, and the time scale at which state transitions occur, we can immediately recognise that:
dispatching affects processes in the running, ready and blocked states;
medium-term scheduling affects processes in the ready-suspended and blocked-suspended states;
long-term scheduling affects processes in the new and exited states.
Long-term scheduling obviously controls the degree of multiprogramming in multitasking systems, following certain policies to decide whether the system can honour a new job submission or, if more than one job is submitted, which of them should be selected. The need for some form of compromise between the degree of multiprogramming and throughput seems evident, especially when one considers interactive systems. The higher the number of processes, in fact, the smaller the time each of them may control the CPU for, if a fair share of responsiveness is to be given to all processes. Moreover, we have already seen that too high a number of processes causes waste of CPU time on system housekeeping chores (thrashing in virtual memory systems is a particularly nasty example of this). However, the number of active processes should be high enough to keep the CPU busy servicing the payload (i.e. the user processes) as much as possible, by ensuring that, on average, there is always a sufficient number of processes not waiting for I/O. Simple policies for long-term scheduling are:
First Come First Served (FCFS): essentially a FIFO scheme. All job requests (e.g. the submission of a batch program, or a user trying to log in to a time-shared system) are honoured up to a fixed system load limit, further requests being refused outright, or enqueued for later processing. (A small FCFS example is sketched after this list.)
Priority schemes: note that in the context of long-term scheduling "priority" has a different meaning than in dispatching: here it affects the choice of a program to enter the system as a process, there the choice of which ready process should be executed.
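FCFS ordering is the same idea at any scheduling level. As a concrete illustration, the sketch below computes waiting times when CPU bursts are served strictly in arrival order; the burst values are invented example data:

```python
# FCFS scheduling: processes are served in arrival order; compute
# the average waiting time.
def fcfs_waiting_times(burst_times):
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # each process waits for all earlier bursts
        elapsed += burst
    return waits

bursts = [24, 3, 3]             # CPU bursts, in milliseconds
waits = fcfs_waiting_times(bursts)
print("waiting times:", waits)                     # [0, 24, 27]
print("average wait:", sum(waits) / len(waits))    # 17.0 ms
```

Note how one long burst arriving first penalizes every later process, the FCFS "convoy" effect.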
Q.5. What is deadlock? Explain the various conditions which lead to deadlock.
Ans. Deadlock
A condition that occurs when two processes are each waiting for the other to complete before proceeding. The result is that both processes hang. Deadlocks occur most commonly in multitasking and client/server environments. Ideally, the programs that are deadlocked, or the operating system, should resolve the deadlock, but this doesn't always happen.
Another example might be a text formatting program that accepts text sent to it to be processed and then returns the results, but does so only after receiving "enough" text to work on (e.g. 1KB). A text editor program is written that sends the formatter some text and then waits for the results. In this case a deadlock may occur on the last block of text. Since the formatter may not have sufficient text for processing, it will suspend itself while waiting for the additional text, which will never arrive since the text editor has sent it all of the text it has.
Meanwhile, the text editor is itself suspended waiting for the last output from the formatter. This type of deadlock is sometimes referred to as a deadly embrace (properly used only when two applications are involved) or starvation. However, this situation, too, is easily prevented by having the text editor send a forcing message (e.g. EOF (End Of File)) with its last (partial) block of text, which forces the formatter to return the last (partial) block after formatting rather than wait for additional text.
Necessary conditions
There are four necessary conditions for a Coffman deadlock to occur, known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr.:
1. Mutual Exclusion: a resource cannot be used by more than one process at a time.
2. Hold and Wait: processes already holding resources may request new resources held by other processes.
3. No Preemption: no resource can be forcibly removed from a process holding it; resources can be released only by the explicit action of the process.
4. Circular Wait: two or more processes form a circular chain where each process waits for a resource that the next process in the chain holds. When circular waiting is triggered by mutual exclusion operations it is sometimes called lock inversion.[2]
Unfulfillment of any of these conditions is enough to preclude Coffman deadlock from ever occurring. However, since the conditions are not sufficient, their mere presence does not itself imply a deadlock.
Prevention
Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms. The "hold and wait" conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
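Another classic prevention strategy attacks the circular-wait condition by imposing a global ordering on locks, so a cycle of waits can never form. A hedged sketch, with invented helper functions and two example locks:

```python
# Circular-wait prevention by global lock ordering: every thread
# acquires locks in a fixed global order.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
# Assign each lock a global rank and always acquire in rank order.
LOCK_ORDER = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    for lock in sorted(locks, key=lambda l: LOCK_ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def transfer(src_lock, dst_lock, name):
    acquire_in_order(src_lock, dst_lock)   # never locked in opposite orders
    print(f"{name}: both locks held, doing work")
    release_all(src_lock, dst_lock)

# Without the ordering, these two threads could deadlock by each
# grabbing one lock and waiting for the other.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start(); t1.join(); t2.join()
```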
Avoidance
Deadlock can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system checks whether granting the request would put the system into an unsafe state, meaning a state that could result in deadlock; it then grants only requests that lead to safe states. In order for the system to be able to determine whether the next state will be safe or unsafe, it must know in advance, at any time, the number and type of all resources in existence, available, and requested. One well-known algorithm used for deadlock avoidance is the Banker's algorithm, which requires the resource usage limits to be known in advance. However, for many systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is often impossible.
Detection
Often, neither avoidance nor deadlock prevention may be used. Instead, deadlock detection and process restart are used, by employing an algorithm that tracks resource allocation and process states, and rolls back and restarts one or more of the processes in order to remove the deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler or OS. Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock. Deadlock detection techniques include, but are not limited to, model checking: this approach constructs a finite state model on which it performs a progress analysis and finds all possible terminal sets in the model, each of which then represents a deadlock.
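A common concrete detection technique is to build a wait-for graph, where an edge P1 -> P2 means P1 is waiting for a resource P2 holds; a cycle in the graph is a deadlock. The sketch below uses an invented example graph:

```python
# Deadlock detection via a wait-for graph: a cycle means deadlock.
def find_cycle(wait_for):
    visited, in_stack = set(), set()

    def dfs(node, path):
        visited.add(node)
        in_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in in_stack:                  # back edge closes a cycle
                return path[path.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, path + [nxt])
                if cycle:
                    return cycle
        in_stack.discard(node)
        return None

    for node in list(wait_for):
        if node not in visited:
            cycle = dfs(node, [node])
            if cycle:
                return cycle
    return None

graph = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"], "P4": ["P1"]}
print("deadlock cycle:", find_cycle(graph))   # ['P1', 'P2', 'P3', 'P1']
```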
Distributed deadlock prevention
Consider the classic example of two trains approaching each other at a crossing. Just-in-time prevention works like having a person standing at the crossing (the crossing guard) with a switch that lets only one train onto "super tracks" that run above and over the other waiting train(s). Before we look into threads using just-in-time prevention, let's look into the conditions which already exist for regular locking.
For non-recursive locks, a lock may be entered only once (a single thread entering twice without unlocking will cause a deadlock, or throw an exception to enforce circular wait prevention). For recursive locks, only one thread is allowed to pass through a lock; if any other threads enter the lock, they must wait until the initial thread that passed through exits the lock as many times as it entered it.
So the issue with the first one is that it does no deadlock prevention at all. The second doesn't do distributed deadlock prevention either. But the second can be redefined to prevent a deadlock scenario the first one doesn't address. The only other scenario I am aware of that may cause deadlocks is when two or more lockers lock on each other. So why not expand the definition one more time? We can, if we add a variable to the recursive lock condition which guarantees that at least one thread runs among all locks: distributed deadlock prevention. And just as there is a super track in the train example, I use a "super thread" in this locking example.
Recursively, only one thread is allowed to pass through a lock. If other threads enter the lock, they must wait until the initial thread that passed through exits it as many times as it entered. But if the number of threads that enter locking equals the number that are locked, assign one thread as the super-thread, and allow only it to run (tracking the number of times it enters and exits locking) until it completes.
After a super-thread is finished, the condition changes back to using the logic from the recursive lock, and the exiting super-thread
1. sets itself as not being a super-thread 2. notifies the locker that other locked, waiting threads need to re-check this condition
If a deadlock scenario exists, set a new super-thread and follow that logic. Otherwise, resume regular locking.
Issues not addressed above
A lot of confusion revolves around the halting problem. This logic in no way solves the halting problem, because we know and control the conditions in which locking occurs, giving us a specific solution (instead of the general solution the halting problem would otherwise require). Still, this locking scheme prevents all deadlocks! Well, it does when considering only locks that use this logic. If it is used alongside other locking mechanisms, or if a lock that is taken is never released (e.g. an exception is thrown that jumps out without unlocking, a thread loops indefinitely within a lock, or a coding error forgets to call unlock), deadlock is very much possible. Extending our condition to include these cases would require solving the halting problem, since we would be dealing with conditions we know nothing about and are unable to change. Another issue is that this doesn't address the temporary deadlocking issue (not really a deadlock, but a performance killer), where two or more threads lock on each other while another, unrelated thread is running. These temporary deadlocks could have a thread running exclusively within them, increasing parallelism. But because the distributed deadlock detection works over all locks, and not subsets thereof, the unrelated running thread must complete before the super-thread logic can run to remove the temporary deadlock. I hope you see the temporary live-lock scenario in the above: if another unrelated running thread begins before the first unrelated thread exits, another period of temporary deadlocking will occur. And if this happens continuously (extremely rare), the temporary deadlock can be extended until right before the program exits, when the other unrelated threads are guaranteed to finish (because of the guarantee that one thread will always run to completion).
Further expansion
This can be further expanded to involve additional logic to increase parallelism where temporary deadlocks might otherwise occur. But for each step of adding more logic, we add more overhead. A couple of examples include: expanding distributed super-thread locking mechanism to consider each subset of existing locks; Wait-For-Graph (WFG) algorithms, which track all cycles that cause deadlocks (including temporary deadlocks); and heuristics algorithms which don't necessarily increase parallelism in 100% of the places that temporary deadlocks are possible, but instead compromise by solving them in enough places that performance/overhead vs parallelism is acceptable (e.g. for each processor available, work towards finding deadlock cycles less than the number of processors + 1 deep).
Q.6. Explain the following with suitable examples. i. The Critical Section Problem
Critical Section
a set of instructions that must be controlled so as to allow exclusive access to one process;
execution of the critical section by processes is mutually exclusive in time.
Critical Section (S&G, p. 166) (for example, "for the process table")

repeat
    entry section
    critical section
    exit section
    remainder section
until false;
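A minimal, runnable sketch of this entry/exit protocol using a mutex; the shared counter standing in for "the process table" is an invented example:

```python
# The lock acquisition is the entry section, the release the exit
# section; the increment in between is the critical section.
import threading

table_lock = threading.Lock()
shared_counter = 0              # the shared resource

def worker():
    global shared_counter
    for _ in range(100_000):
        with table_lock:        # entry section: acquire exclusive access
            shared_counter += 1 # critical section
        # exit section done; remainder section would follow here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_counter)           # always 400000 with the lock held
```

Without the lock, the four threads' read-modify-write sequences could interleave and lose updates, which is exactly what mutual exclusion in time prevents.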
ii. Banker's algorithm
The Banker's algorithm is a resource allocation & deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of pre-determined maximum possible amounts of all resources, and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue. The algorithm was developed in the design process for the THE operating system and originally described (in Dutch) in EWD108. The name is by analogy with the way that bankers account for liquidity constraints.
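A hedged sketch of the safety check at the heart of the algorithm; the allocation, need and available matrices are textbook-style example data, not drawn from the text itself:

```python
# Banker's safety check: decide whether the current state is safe,
# i.e. whether some order exists in which every process can finish.
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    safe_sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                safe_sequence.append(i)
                progressed = True
    return (all(finished), safe_sequence)

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))
# -> (True, [1, 3, 4, 0, 2]) : a safe sequence exists
```

A request is granted only if the state after the (pretend) allocation still passes this check; otherwise the requesting process waits.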
iii. Virtual memory
In computing, virtual memory is a memory management technique developed for multitasking kernels. This technique virtualizes a computer architecture's various forms of computer data storage (such as random-access memory and disk storage), allowing a program to be designed as though there is only one kind of memory, "virtual" memory, which behaves like directly addressable read/write memory (RAM). Most modern operating systems that support virtual memory also run each process in its own dedicated address space, allowing a program to be designed as though it has sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Systems that employ virtual memory:
use hardware memory more efficiently than systems without virtual memory;
make the programming of applications easier by hiding the fragmentation of physical memory, by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly), and, when each process runs in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
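The core mechanism behind these benefits is paged address translation: a virtual address is split into a page number and an offset, and a page table maps page numbers to physical frames. A hedged sketch, with invented page size and page-table contents:

```python
# Paged virtual-to-physical address translation.
PAGE_SIZE = 4096                 # 4 KB pages (assumed)
PAGE_TABLE = {0: 5, 1: 9, 2: 1}  # virtual page -> physical frame

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE     # high bits: page number
    offset = virtual_address % PAGE_SIZE    # low bits: offset within page
    if page not in PAGE_TABLE:
        raise LookupError(f"page fault: page {page} not resident")
    return PAGE_TABLE[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> 0x9234 (frame 9)
```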