Unit 2 QSTN and Answer
2 Marks Questions:
3. Define Semaphore
9. What is thread?
5 Marks Questions:
6. Write short notes on: a) Multi-level queue scheduling b) Multi-level feedback queue scheduling
12 Marks Questions:
7. The main difference between a binary semaphore and a counting semaphore is that a
binary semaphore has only two states (0 and 1), while a counting semaphore can have
any value greater than or equal to 0.
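As a sketch of the difference, Python's threading module provides counting semaphores, and a binary semaphore is simply one initialized to 1:

```python
import threading

# A counting semaphore initialized to 3: up to three threads may
# hold it at once; acquire() decrements, release() increments.
counting = threading.Semaphore(3)

# A binary semaphore has only the values 0 and 1, so it behaves
# like a mutex lock.
binary = threading.Semaphore(1)

counting.acquire()  # value 3 -> 2
counting.acquire()  # value 2 -> 1
counting.release()  # value 1 -> 2

binary.acquire()    # value 1 -> 0; a second acquire() would block
binary.release()    # value 0 -> 1
```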
8. Two necessary conditions for a deadlock are:
a) Mutual exclusion: At least one resource must be held in a non-sharable mode, so that only one process can use it at a time.
b) Hold and wait: A process holding at least one resource is waiting to acquire additional resources held by other processes.
9. A thread is a lightweight process that can run concurrently with other threads within the
same process. Threads share the same memory space and resources as their parent
process and can improve performance by allowing multiple tasks to be performed
simultaneously.
10. The main difference between pre-emptive and non-pre-emptive scheduling is that in
pre-emptive scheduling, the operating system can interrupt a running process and
allocate the CPU to another process based on priority, while in non-pre-emptive
scheduling, a running process must voluntarily release the CPU.
11. Context switching is the process of saving the state of a process or thread so that it can
be resumed later and restoring the state of another process or thread so that it can be
executed. Context switching occurs when the operating system switches from one
process or thread to another.
12. Short-term scheduler, also known as CPU scheduler, is responsible for selecting which
process from the ready queue will be allocated the CPU next. The short-term scheduler
is invoked frequently, typically every few milliseconds, and is responsible for making
efficient use of available CPU time.
13. The dispatcher is the module that gives control of the CPU to the process selected by the short-term (CPU) scheduler. It performs the context switch, switches to user mode, and jumps to the proper location in the program to resume its execution.
14. A race condition is a situation in which the behavior of a program depends on the
relative timing of events, such as the order in which multiple threads access a shared
resource. A race condition can lead to unpredictable or incorrect behavior of the
program.
15. The critical-section problem is the problem of designing a protocol that lets multiple processes or threads access a shared resource concurrently without interfering with one another. The critical section refers to the part of the program where the shared resource is accessed; no two processes should execute in their critical sections at the same time.
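A minimal sketch of the race condition and its fix: two threads increment a shared counter, and guarding the critical section with a lock makes the result deterministic (the names here are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # Critical section: read-modify-write of the shared counter.
        # Without the lock, two threads could interleave here and
        # lose updates (a race condition).
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; unpredictable without it
```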
1. Advantages of multiprocessor systems:
- Increased throughput and performance, since multiple tasks can execute simultaneously.
- Increased reliability and fault tolerance, since if one processor fails the remaining processors can continue the work.
- Improved resource utilization, since tasks can be allocated to processors with available resources.
- Scalability, since additional processors can be added to the system to handle an increased workload.
In this example, the PCB is for a process with ID 1234 that is currently running in the
system. Its priority level is set to High, and the program counter is pointing to memory
location 0x12345678. The PCB also contains information about the process's register
values and memory limits.
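The PCB the paragraph describes might be sketched as follows (the field names are illustrative, not a real OS structure):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A simplified Process Control Block (illustrative only)."""
    pid: int              # process identifier
    state: str            # e.g. "Running", "Ready", "Blocked"
    priority: str         # scheduling priority
    program_counter: int  # address of the next instruction to execute
    registers: dict = field(default_factory=dict)  # saved CPU registers
    memory_limits: tuple = (0, 0)                  # (base, limit) pair

pcb = PCB(pid=1234, state="Running", priority="High",
          program_counter=0x12345678)
print(hex(pcb.program_counter))  # 0x12345678
```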
5. Job scheduler and CPU scheduler are two types of schedulers used by an operating
system to manage processes in the system.
Job Scheduler (long-term scheduler): It selects which jobs or processes are admitted from the pool of waiting jobs into the ready queue, controlling how many processes are in memory at once. The job scheduler usually follows a set of rules, such as first-come, first-served (FCFS) or shortest job first (SJF), to determine the next process to admit.
CPU Scheduler (short-term scheduler): It is responsible for selecting which process to run next on the CPU. The
CPU scheduler's main objective is to maximize CPU utilization and throughput while
minimizing the turnaround time and waiting time of processes. There are several
algorithms that the CPU scheduler can use, such as Round Robin, Priority Scheduling,
and Shortest Remaining Time First.
6.
For example, under Round Robin scheduling, if the quantum is set to 10 milliseconds and there are three processes in the queue, the scheduler will allow each process to run for 10 milliseconds before preempting it and moving it to the end of the queue. The scheduler continues this until all processes have completed.
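The quantum example above can be simulated with a short sketch (the burst times below are made up for illustration):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin; return the order of (pid, run_time) slices."""
    queue = deque(bursts.items())   # (pid, remaining burst time)
    timeline = []
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((pid, run))
        if remaining > run:                 # not finished: requeue at the end
            queue.append((pid, remaining - run))
    return timeline

# Three processes with assumed burst times of 25, 10 and 15 ms, quantum 10 ms.
print(round_robin({"P1": 25, "P2": 10, "P3": 15}, 10))
```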
New -> Ready -> Running -> Blocked -> Ready -> Terminated
(1) New: The process is created but has not yet been admitted to the system.
(2) Ready: The process is waiting for the CPU to be allocated for execution.
(3) Running: The process is currently executing on the CPU.
(4) Blocked: The process is waiting for some event, such as input/output or a semaphore signal, to occur.
(5) Terminated: The process has completed its execution and has been removed from the system.
For example, consider a system with three processes (P1, P2, and P3) and three
resources (R1, R2, and R3). Suppose P1 is holding R1, P2 is holding R2, and P3 is holding
R3. Additionally, P1 is requesting R3, P2 is requesting R1, and P3 is requesting R2. The
corresponding resource allocation graph for this scenario is as follows:
Assignment edges (resource held by a process): R1 -> P1, R2 -> P2, R3 -> P3
Request edges (process waiting for a resource): P1 -> R3, P2 -> R1, P3 -> R2
This graph shows that there is a cycle in the allocation and request relationships, which
indicates a potential deadlock.
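A cycle in such a graph can be found mechanically with depth-first search; a sketch using the edges from the example above:

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph via depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color[succ] == GRAY:          # back edge: cycle found
                return True
            if color[succ] == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# Edges of the example: Ri -> Pi means "Ri is assigned to Pi",
# Pi -> Rj means "Pi is requesting Rj".
rag = {
    "R1": ["P1"], "P1": ["R3"],
    "R2": ["P2"], "P2": ["R1"],
    "R3": ["P3"], "P3": ["R2"],
}
print(has_cycle(rag))  # True: potential deadlock
```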
a) Resource allocation denial: In this method, the system prevents deadlock by not
allocating resources to a process if it believes that the allocation may lead to a deadlock.
This can be done by requiring processes to declare their maximum resource needs in
advance, and then checking whether the allocation of resources will result in a deadlock.
If so, the allocation is denied.
b) Process termination: In this method, the system monitors the state of processes and
terminates a process if it is likely to cause a deadlock. This can be done by detecting
processes that are holding resources but are unable to make progress because they are
waiting for additional resources that are held by other processes. By terminating such
processes, the system can prevent the deadlock from occurring.
10. The Banker's algorithm is a deadlock-avoidance algorithm that is used to ensure that a
system does not enter a deadlock state. The algorithm works by keeping track of the
maximum resource needs of each process, the number of resources currently available,
and the number of resources being used by each process.
For example, suppose there are four resources (R1, R2, R3, and R4) and five processes
(P1, P2, P3, P4, and P5), each with different resource needs and current resource
allocations. The maximum resource needs for each process are as follows:
Process    R1  R2  R3  R4
P1          3   1   2   1
P2          2   2   1   2
P3          1   3   1   3
P4          2   2   2   1
P5          1   1   3   2
The current allocation and availability of each resource type are:
Resource    R1  R2  R3  R4
Allocated    1   1   1   1
Available    2   2   2   2
Using the Banker's algorithm, the system can determine whether it is safe to allocate
additional resources to a process without causing a deadlock. The algorithm works by
simulating the allocation of resources and checking whether the system can reach a safe
state where all processes can complete their execution.
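The safety check at the heart of the Banker's algorithm can be sketched as follows (the per-process numbers below are assumed for illustration, since the tables above only give totals):

```python
def is_safe(available, max_need, allocation):
    """Return (True, order) if a safe sequence exists, else (False, [])."""
    n = len(max_need)
    work = list(available)
    # Remaining need of each process = maximum claim - current allocation.
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    finished = [False] * n
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(x <= w for x, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases
                # everything it holds back into the pool.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progress = True
    return (True, order) if all(finished) else (False, [])

# A classic textbook-style instance (assumed numbers).
safe, order = is_safe(
    available=[3, 3, 2],
    max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
)
print(safe, order)  # a safe sequence exists
```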
11. Deadlock recovery refers to the process of resolving a deadlock that has already
occurred in a system. There are several methods for deadlock recovery, including killing
one or more processes involved in the deadlock, preempting resources from processes,
or rolling back the state of the system to a previous checkpoint.
Killing processes involved in the deadlock can be an effective way to resolve the issue,
but it can also result in data loss and disruption to the system. Preempting resources
from processes can help to break the deadlock, but it can also cause delays and result in
reduced system performance. Rolling back the state of the system to a previous
checkpoint can help to avoid the deadlock, but it can also result in data loss and other
problems.
The functions of a monitor include ensuring that only one process can access the critical
section of code at a time, preventing race conditions and other synchronization issues,
and coordinating the access to shared resources among multiple processes.
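A monitor can be sketched in Python with a lock plus a condition variable: the lock provides mutual exclusion, and the condition coordinates waiting (the account class is purely illustrative):

```python
import threading

class Account:
    """Monitor-style class: every public method holds the lock,
    so only one thread is ever inside the critical section."""
    def __init__(self):
        self._lock = threading.Lock()
        self._nonzero = threading.Condition(self._lock)
        self._balance = 0

    def deposit(self, amount):
        with self._lock:                   # enter the monitor
            self._balance += amount
            self._nonzero.notify_all()     # wake threads waiting for funds

    def withdraw(self, amount):
        with self._lock:
            while self._balance < amount:  # wait until enough funds exist
                self._nonzero.wait()
            self._balance -= amount

acct = Account()
t = threading.Thread(target=acct.withdraw, args=(50,))
t.start()
acct.deposit(80)   # unblocks the waiting withdrawal
t.join()
print(acct._balance)  # 30
```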
A solution to the critical-section problem must satisfy three requirements:
Mutual Exclusion: Only one process can access the shared resource at any given time. This is necessary to prevent conflicts and ensure consistency.
Progress: If no process is executing in its critical section and one or more processes wish to enter theirs, then only processes not executing in their remainder sections can participate in deciding which will enter next, and this selection cannot be postponed indefinitely.
Bounded Waiting: A process that is waiting to access the shared resource should not
have to wait indefinitely. There should be a limit on how long a process can wait for
access.
These requirements ensure that the critical-section problem is solved in a fair and
efficient manner.
The requirements for the producer-consumer (bounded-buffer) problem are:
Mutual Exclusion: Only one process can access the shared buffer at any given time. This is necessary to prevent conflicts and ensure consistency.
Full/Empty Buffer Checking: The producer process should not produce data when the
buffer is full, and the consumer process should not consume data when the buffer is
empty.
Synchronization: The producer and consumer processes must be synchronized so that
they do not interfere with each other's activities.
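The three requirements above can be sketched with two counting semaphores (free and filled slots) plus a lock for mutual exclusion (buffer size and item count are assumed):

```python
import threading
from collections import deque

BUFFER_SIZE, N_ITEMS = 4, 20
buffer = deque()
mutex = threading.Lock()                  # mutual exclusion on the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots
consumed = []

def producer():
    for i in range(N_ITEMS):
        empty.acquire()        # block if the buffer is full
        with mutex:
            buffer.append(i)
        full.release()         # signal one more filled slot

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()         # block if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # signal one more free slot

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(consumed == list(range(N_ITEMS)))  # True: every item arrives in order
```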