
Unit 2 QSTN and Answer

The document contains 15 questions related to operating system concepts like processes, scheduling, semaphores, deadlocks, and synchronization. It covers topics such as the definition of a process, preemptive vs non-preemptive scheduling, necessary conditions for deadlocks, and functions of the dispatcher and short-term scheduler. The questions range from 2 to 12 marks and include topics that are commonly assessed in operating systems courses.

Uploaded by Sangam Maurya

OPERATING SYSTEM – UNIT II QUESTIONS

2 Marks Questions:

1. What do you mean by a process?

2. What is pre-emptive scheduling?

3. Define Semaphore

4. Define throughput and waiting time.

5. What is a deadlock situation?

6. Mention any two scheduling criteria.

7. What is the difference between a binary semaphore and a counting semaphore?

8. Give any two necessary conditions for a deadlock.

9. What is a thread?

10. Differentiate between pre-emptive and non-pre-emptive scheduling.

11. What do you mean by context switching?

12. What is a short-term scheduler?

13. What is the function of the dispatcher?

14. What is a race condition?

15. What is the critical-section problem?

5 Marks Questions:

1. What are the advantages and disadvantages of a multiprocessor system?

2. Write any three functions of process management.

3. Differentiate between pre-emptive and non-pre-emptive scheduling.

4. What is PCB? Explain briefly with a diagram.

5. Explain Job scheduler and CPU scheduler.

6. Write short notes on: a) Multi-level queue scheduling b) Multi-level feedback queue
scheduling
12 Marks Questions:

1. Explain the scheduling algorithm with an example.

2. Explain the different criteria of evaluating the scheduling algorithms.

3. Explain CPU scheduling criteria.

4. What is a process? Explain process state transitions with a diagram.

5. Explain the need for synchronization with example.

6. Explain different multithreading models.

7. Explain the dining philosophers problem for synchronization.

8. Explain the resource allocation graph with an example.

9. Discuss any two methods of deadlock prevention.

10. Explain the Banker's algorithm with an example.

11. Discuss briefly about deadlock recovery.

12. Explain deadlock avoidance briefly.

13. Explain the structure and the functions of a monitor.

14. Discuss the requirements of the critical-section problem.

15. Explain the producer-consumer problem for synchronization.

1. In computing, a process is an instance of a program that is currently being executed. It
consists of the program code, data, and resources required to run the program. Each
process has its own memory space and executes independently of other processes.
2. Preemptive scheduling is a scheduling technique in which the operating system can
interrupt a running process and allocate the CPU to another process based on priority.
The preemptive scheduler uses a timer interrupt to interrupt the current process and
switch to the next highest priority process.
3. A semaphore is a synchronization mechanism used in operating systems to control
access to shared resources. It is a variable that is used to signal between processes or
threads and is used to protect shared resources from simultaneous access.
4. Throughput is a measure of the number of processes that are completed in a given
period of time. Waiting time is the time that a process spends waiting in the ready
queue before it is allocated CPU time.
5. A deadlock situation occurs when two or more processes are waiting for each other to
release a resource, resulting in a situation where none of the processes can continue.
Deadlocks can occur in a multi-process system and can lead to a system crash or other
undesirable outcomes.
6. Two scheduling criteria are:

a) CPU utilization: the scheduler should keep the CPU as busy as possible; the percentage
of time the CPU spends doing useful work should be high.

b) Throughput: the number of processes that complete their execution per unit of time
should be as high as possible.

7. The main difference between a binary semaphore and a counting semaphore is that a
binary semaphore has only two states (0 and 1), while a counting semaphore can have
any value greater than or equal to 0.
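The distinction can be sketched with Python's `threading.Semaphore` (the names `pool`, `mutex`, and `use_resource` are illustrative, not from the question paper). A counting semaphore initialized to 3 admits up to three holders at once; a semaphore initialized to 1 behaves as a binary semaphore (mutex):

```python
import threading

pool = threading.Semaphore(3)    # counting semaphore: up to 3 concurrent holders
mutex = threading.Semaphore(1)   # binary semaphore: states 0 and 1 only

acquired = []

def use_resource(i):
    with pool:                   # wait (P): decrement count, block if it is 0
        with mutex:              # only one thread at a time touches the list
            acquired.append(i)
        # signal (V) runs automatically when each 'with' block exits

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(acquired))  # [0, 1, 2, 3, 4] — all five threads got through
```

The outer semaphore throttles concurrency; the inner one serializes the shared update, which is exactly the mutex role a binary semaphore plays.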
8. Two necessary conditions for a deadlock are:

a) Mutual exclusion: at least one resource must be held in a non-shareable mode,
meaning only one process can access it at a time.

b) Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes.

9. A thread is a lightweight process that can run concurrently with other threads within the
same process. Threads share the same memory space and resources as their parent
process and can improve performance by allowing multiple tasks to be performed
simultaneously.
10. The main difference between pre-emptive and non-pre-emptive scheduling is that in
pre-emptive scheduling, the operating system can interrupt a running process and
allocate the CPU to another process based on priority, while in non-pre-emptive
scheduling, a running process must voluntarily release the CPU.
11. Context switching is the process of saving the state of a process or thread so that it can
be resumed later and restoring the state of another process or thread so that it can be
executed. Context switching occurs when the operating system switches from one
process or thread to another.
12. Short-term scheduler, also known as CPU scheduler, is responsible for selecting which
process from the ready queue will be allocated the CPU next. The short-term scheduler
is invoked frequently, typically every few milliseconds, and is responsible for making
efficient use of available CPU time.
13. The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This involves performing the context switch, switching to user
mode, and jumping to the proper location in the user program to resume its execution.
14. A race condition is a situation in which the behavior of a program depends on the
relative timing of events, such as the order in which multiple threads access a shared
resource. A race condition can lead to unpredictable or incorrect behavior of the
program.
15. The critical-section problem is a problem that occurs when multiple processes or
threads access a shared resource concurrently. The critical section refers to the part of
the program where the shared resource is accessed.
1. Advantages of multiprocessor systems:
 Increased throughput and performance due to the ability to execute multiple tasks
simultaneously
 Increased reliability and fault tolerance as multiple processors can handle failures or
errors
 Improved resource utilization as tasks can be allocated to processors with available
resources
 Scalability, as additional processors can be added to the system to handle increased
workload.

Disadvantages of multiprocessor systems:

 Increased complexity in design and implementation
 Increased communication overhead between processors, which can lead to performance
degradation
 Difficulty in writing parallel programs that take advantage of multiple processors
 Higher cost due to the need for additional hardware and software.
2. Three functions of process management are:
 Process creation: creating new processes and allocating resources to them
 Process scheduling: determining which processes will be executed by the CPU and for
how long
 Process synchronization: managing access to shared resources by multiple processes
and ensuring that they do not interfere with each other.
3. Pre-emptive scheduling and non-preemptive scheduling are two different scheduling
techniques used in operating systems. Pre-emptive scheduling allows the operating
system to interrupt a running process and allocate the CPU to another process based on
priority, while non-preemptive scheduling requires a running process to voluntarily
release the CPU.
4. PCB stands for Process Control Block, which is a data structure used by the operating
system to store information about a particular process in the system. It contains
important details about the process, such as its current state, priority level, and the
resources it is using.

The diagram below shows an example of a PCB:


Process ID: 1234
State: Running
Priority: High
Program Counter: 0x12345678
Registers:
  EAX: 0x0012FF00
  EBX: 0x00AABBCC
  ECX: 0x00456789
  EDX: 0x00FFEE11
Memory Limits:
  Start Address: 0x00010000
  End Address: 0x000FFFFF

In this example, the PCB is for a process with ID 1234 that is currently running in the
system. Its priority level is set to High, and the program counter is pointing to memory
location 0x12345678. The PCB also contains information about the process's register
values and memory limits.
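The same fields can be sketched as a data structure. This is a deliberately simplified PCB: real kernels also store the open-file table, accounting information, pointers to memory-management structures, and the full saved register set. The `PCB` class name and field choices here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"            # new / ready / running / blocked / terminated
    priority: int = 0
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    memory_start: int = 0
    memory_end: int = 0

# Populate it with the values from the example above.
pcb = PCB(pid=1234, state="running", priority=10,
          program_counter=0x12345678,
          registers={"EAX": 0x0012FF00, "EBX": 0x00AABBCC,
                     "ECX": 0x00456789, "EDX": 0x00FFEE11},
          memory_start=0x00010000, memory_end=0x000FFFFF)
print(pcb.pid, pcb.state)  # 1234 running
```

On a context switch the operating system saves the running process's CPU state into fields like these and restores them from the PCB of the next process.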

5. Job scheduler and CPU scheduler are two types of schedulers used by an operating
system to manage processes in the system.
 Job Scheduler: It is responsible for selecting which job or process to run next from the
pool of waiting processes. The job scheduler usually follows a set of rules, such as first-
come, first-served (FCFS) or shortest job first (SJF) to determine the next process to
execute.
 CPU Scheduler: It is responsible for selecting which process to run next on the CPU. The
CPU scheduler's main objective is to maximize CPU utilization and throughput while
minimizing the turnaround time and waiting time of processes. There are several
algorithms that the CPU scheduler can use, such as Round Robin, Priority Scheduling,
and Shortest Remaining Time First.
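As a small illustration of one of the policies named above, here is non-preemptive SJF with hypothetical burst times, assuming all jobs arrive at time 0 (the function name and figures are examples, not from the text):

```python
# Non-preemptive SJF: run the shortest available job first and
# record how long each job waited before getting the CPU.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # job i starts after everything shorter
        clock += bursts[i]
    return waits

bursts = [6, 8, 7, 3]             # hypothetical CPU bursts, all arriving at t=0
waits = sjf_waiting_times(bursts)
print(waits, sum(waits) / len(waits))  # [3, 16, 9, 0] 7.0
```

Running the same bursts in FCFS order would give an average wait of (0 + 6 + 14 + 21) / 4 = 10.25, so SJF's 7.0 shows why it minimizes average waiting time.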
6.

a) Multi-level queue scheduling is a scheduling algorithm where processes are
permanently divided into separate queues based on their properties, such as process
type or priority level (for example, foreground/interactive processes versus
background/batch processes). Each queue has its own scheduling algorithm, and CPU
time is divided among the queues, typically by fixed priority between queues or by
time-slicing across them. In the basic multi-level queue scheme, a process stays in the
queue to which it was assigned.

b) Multi-level feedback queue scheduling is similar to multi-level queue scheduling, but
with the added feature of allowing processes to move between queues based on their
behavior. For example, a process that repeatedly uses up its full time slice (CPU-bound
behavior) may be moved from a high-priority queue to a lower-priority queue.
Conversely, a process that blocks frequently for I/O, or that has waited a long time, may
be moved to a higher-priority queue; this aging also prevents starvation. The algorithm
allows for more flexibility in scheduling and can adapt to changes in process behavior
over time.
1. A scheduling algorithm determines which process to run at a given time. One example
of a scheduling algorithm is Round Robin. In this algorithm, each process is assigned a
fixed time slice, or quantum, during which it can run. The scheduler maintains a queue
of processes, and each process is allowed to run for its assigned quantum before being
preempted and moved to the end of the queue. This ensures that all processes are given
equal access to the CPU.

For example, if the quantum is set to 10 milliseconds and there are three processes in
the queue, the scheduler will allow each process to run for 10 milliseconds before
preempting it and moving it to the end of the queue. The scheduler will continue this
process until all processes have completed.
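The mechanism can be sketched as a short simulation. The burst times 24, 3, 3 with a quantum of 4 are a common textbook example (not figures from this question paper); the function returns each process's completion time:

```python
from collections import deque

# Round Robin: each process runs for at most `quantum` time units,
# then goes to the back of the ready queue if it is not finished.
def round_robin(bursts, quantum):
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)
    finish, clock = {}, 0
    while queue:
        pid = queue.popleft()
        ran = min(quantum, remaining[pid])
        clock += ran
        remaining[pid] -= ran
        if remaining[pid] == 0:
            finish[pid] = clock      # record completion time
        else:
            queue.append(pid)        # preempted: back of the queue
    return finish

print(round_robin([24, 3, 3], 4))  # {1: 7, 2: 10, 0: 30}
```

Note how the two short processes finish at times 7 and 10 instead of waiting behind the long one, which is the responsiveness Round Robin trades CPU efficiency for.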

2. The different criteria for evaluating scheduling algorithms are:
 CPU utilization: The percentage of time that the CPU is being used to execute processes.
 Throughput: The number of processes that are completed per unit of time.
 Turnaround time: The time taken from the submission of a process to its completion.
 Waiting time: The time a process spends waiting in the ready queue.
 Response time: The time taken for a system to respond to a user request.
3. CPU scheduling criteria: CPU scheduling is the process of selecting the process from the
ready queue to allocate the CPU for execution. The following are the different CPU
scheduling criteria:
 CPU Utilization: The CPU should be utilized as much as possible.
 Throughput: The number of processes that complete their execution per unit time.
 Turnaround Time: The total time taken by a process to complete its execution from the
time of submission.
 Waiting Time: The total time a process spends in the waiting queue.
 Response Time: The time taken by a system to respond to a user request.
4. Process: A process is a program in execution. It consists of the code, data, and resources
required to execute the program. The following diagram shows the different states of a
process:

New -> Ready -> Running -> Terminated, with the detour Running -> Blocked -> Ready
taken whenever a process must wait for an event.

(1) New: the process is created but has not yet been admitted to the system.
(2) Ready: the process is waiting for the CPU to be allocated for execution.
(3) Running: the process is currently executing on the CPU.
(4) Blocked (Waiting): the process is waiting for some event, such as input/output
completion or a semaphore signal, to occur.
(5) Terminated: the process has completed its execution and has been removed from
the system.

5. Synchronization: Synchronization is the coordination of multiple processes to ensure
they do not interfere with each other. Synchronization is necessary in multi-process
systems to ensure data consistency and to prevent race conditions. For example, if two
processes simultaneously try to update the same memory location, one update may be
lost and the data may become corrupted.
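The lost-update hazard can be made concrete with threads incrementing a shared counter. `counter += 1` is really a read-modify-write sequence; without the lock shown here, CPython threads can interleave those steps and lose increments (the variable names are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        with lock:           # without this lock, the read-modify-write of
            counter += 1     # `counter` can interleave and lose updates

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 — with the lock, no increments are lost
```

Removing the `with lock:` line often yields a total below 40000, which is precisely the race condition defined in the 2-mark answers above.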
6. Multithreading models: The following are the different multithreading models: (1) Many-
to-One: Multiple user-level threads are mapped to a single kernel-level thread. (2) One-
to-One: Each user-level thread is mapped to a kernel-level thread. (3) Many-to-Many:
Multiple user-level threads are mapped to a smaller or equal number of kernel-level
threads. (4) Two-level: A combination of one-to-one and many-to-many models.
7. Dining philosophers problem: The dining philosophers problem is a classic
synchronization problem in computer science. Five philosophers sit around a table with
five chopsticks, one placed between each pair of neighbours. Each philosopher must
hold both adjacent chopsticks to eat. If every philosopher picks up one chopstick and
waits for the other, none of them can proceed. The problem is to design a
synchronization protocol that allows the philosophers to eat without causing a
deadlock (and without starving any philosopher).
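One standard solution is resource ordering, sketched below with chopsticks as locks (the class-free structure and round counts are illustrative choices, not the only correct protocol). Every philosopher picks up the lower-numbered chopstick first, so a circular wait can never form:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=10):
    # Deadlock avoidance by ordering: always grab the lower-numbered
    # chopstick first, then the higher-numbered one.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1        # "eating"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [10, 10, 10, 10, 10] — everyone ate; no deadlock
```

If each philosopher instead grabbed `i` then `(i + 1) % N`, all five could hold one chopstick and wait forever; the `sorted` call is what breaks that cycle.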
8. The resource allocation graph is a graphical representation of the resource allocation
and request relationships between different processes in a system. The graph consists of
two types of nodes: process nodes and resource nodes. Process nodes represent the
processes in the system, and resource nodes represent the resources that are being
used by these processes. An edge from a process node to a resource node (a request
edge) indicates that the process is requesting that resource, while an edge from a
resource node to a process node (an assignment edge) indicates that the resource has
been allocated to that process.

For example, consider a system with three processes (P1, P2, and P3) and three
resources (R1, R2, and R3). Suppose P1 is holding R1, P2 is holding R2, and P3 is holding
R3. Additionally, P1 is requesting R3, P2 is requesting R1, and P3 is requesting R2. The
corresponding resource allocation graph for this scenario is as follows:

Assignment edges: R1 -> P1, R2 -> P2, R3 -> P3
Request edges:    P1 -> R3, P2 -> R1, P3 -> R2

Together these edges form the cycle P1 -> R3 -> P3 -> R2 -> P2 -> R1 -> P1.

This graph shows that there is a cycle in the allocation and request relationships, which
indicates a potential deadlock.

9. Deadlock prevention works by ensuring that at least one of the four necessary
conditions for deadlock can never hold. Two such methods are:

a) Eliminating hold and wait: a process must request all the resources it needs at once,
before it begins execution (or must release every resource it holds before requesting
more). Since no process ever holds some resources while waiting for others, the
hold-and-wait condition cannot arise.

b) Eliminating circular wait: impose a total ordering on all resource types and require
every process to request resources only in increasing order of that numbering. A cycle
of waiting processes then becomes impossible, because a cycle would require some
process to request a lower-numbered resource while holding a higher-numbered one.
10. The Banker's algorithm is a deadlock-avoidance algorithm that is used to ensure that a
system does not enter a deadlock state. The algorithm works by keeping track of the
maximum resource needs of each process, the number of resources currently available,
and the number of resources being used by each process.

For example, suppose there are four resources (R1, R2, R3, and R4) and five processes
(P1, P2, P3, P4, and P5), each with different resource needs and current resource
allocations. The maximum resource needs for each process are as follows:

Process   R1  R2  R3  R4
------------------------
P1         3   1   2   1
P2         2   2   1   2
P3         1   3   1   3
P4         2   2   2   1
P5         1   1   3   2

Suppose that the current resource allocations are as follows:

Resource    R1  R2  R3  R4
--------------------------
Allocated    1   1   1   1
Available    2   2   2   2

Using the Banker's algorithm, the system can determine whether it is safe to allocate
additional resources to a process without causing a deadlock. The algorithm works by
simulating the allocation of resources and checking whether the system can reach a safe
state where all processes can complete their execution.
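The safety check at the heart of the algorithm can be sketched as follows. The resource figures here are a hypothetical 3-resource, 5-process example (a widely used textbook instance), not the tables from the answer above:

```python
# Banker's safety algorithm: repeatedly find a process whose remaining
# need (max - allocated) fits in the available vector, pretend it runs
# to completion, and release its resources. Safe iff all can finish.
def is_safe(available, max_need, alloc):
    work = list(available)
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, alloc)]
    finished = [False] * len(alloc)
    order = []
    while len(order) < len(alloc):
        progressed = False
        for i in range(len(alloc)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, []        # no process can proceed: unsafe
    return True, order

available = [3, 3, 2]
max_need  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
alloc     = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, order = is_safe(available, max_need, alloc)
print(safe, order)  # True [1, 3, 4, 0, 2] — a safe sequence exists
```

When a process actually requests resources, the algorithm tentatively grants the request and runs this check; the request is honoured only if the resulting state is still safe.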

11. Deadlock recovery refers to the process of resolving a deadlock that has already
occurred in a system. There are several methods for deadlock recovery, including killing
one or more processes involved in the deadlock, preempting resources from processes,
or rolling back the state of the system to a previous checkpoint.

Killing processes involved in the deadlock can be an effective way to resolve the issue,
but it can also result in data loss and disruption to the system. Preempting resources
from processes can help to break the deadlock, but it can also cause delays and result in
reduced system performance. Rolling back the state of the system to a previous
checkpoint can help to avoid the deadlock, but it can also result in data loss and other
problems.

12. Deadlock avoidance is a method of preventing deadlock by analyzing the resource
needs of processes and allocating resources in a way that never lets the system enter an
unsafe state. This can be done by using algorithms like the Banker's algorithm, which
keeps track of the maximum resource needs of each process and grants a request only if
the resulting state is safe, i.e., cannot lead to a deadlock.
13. A monitor is a synchronization construct used in concurrent programming to coordinate
access to shared resources. A monitor consists of a set of procedures and variables that
are used to protect a critical section of code. The structure of a monitor typically
includes procedures for entering and leaving the monitor, as well as variables for storing
information about the current state of the monitor.

The functions of a monitor include ensuring that only one process can access the critical
section of code at a time, preventing race conditions and other synchronization issues,
and coordinating the access to shared resources among multiple processes.
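Python has no built-in monitor construct, but `threading.Condition` bundles the same two pieces a monitor provides: a mutual-exclusion lock and a queue of waiting threads. The `BoundedCounter` class below is an illustrative sketch of monitor structure, not a standard API:

```python
import threading

class BoundedCounter:
    """A monitor-style object: all methods enter under one lock."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.cond = threading.Condition()   # the monitor's lock + wait queue

    def increment(self):
        with self.cond:                     # "enter" the monitor
            while self.value >= self.limit: # wait until the guard holds
                self.cond.wait()            # releases the lock while waiting
            self.value += 1

    def decrement(self):
        with self.cond:
            self.value -= 1
            self.cond.notify()              # wake one waiting incrementer

c = BoundedCounter(limit=2)
c.increment(); c.increment()
c.decrement(); c.increment()
print(c.value)  # 2
```

The `while` loop around `wait()` mirrors the standard monitor discipline: a woken thread must re-check its condition, because another thread may have invalidated it between the notify and the wake-up.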

14. The critical-section problem is a classic synchronization problem in computer science
that arises when multiple processes or threads share a common resource, such as a
piece of memory or a file. The problem involves coordinating access to the shared
resource to ensure that concurrent access does not result in data corruption or other
undesirable behaviors.

To solve the critical-section problem, certain requirements must be met. These
requirements are:

 Mutual Exclusion: Only one process can access the shared resource at any given time.
This is necessary to prevent conflicts and ensure consistency.
 Progress: If no process is executing in its critical section and some processes wish to
enter it, then only processes not executing in their remainder sections can participate
in deciding which will enter next, and this selection cannot be postponed indefinitely.
 Bounded Waiting: A process that is waiting to access the shared resource should not
have to wait indefinitely. There should be a limit on how long a process can wait for
access.

These requirements ensure that the critical-section problem is solved in a fair and
efficient manner.

15. The producer-consumer problem is another classic synchronization problem in
computer science. It arises when a producer process generates data that must be
consumed by one or more consumer processes. The problem involves coordinating the
activities of the producer and consumer processes to ensure that the data is produced
and consumed correctly.

To solve the producer-consumer problem, certain requirements must be met. These
requirements are:

 Mutual Exclusion: Only one process can access the shared buffer at any given time. This
is necessary to prevent conflicts and ensure consistency.
 Full/Empty Buffer Checking: The producer process should not produce data when the
buffer is full, and the consumer process should not consume data when the buffer is
empty.
 Synchronization: The producer and consumer processes must be synchronized so that
they do not interfere with each other's activities.

One solution to the producer-consumer problem involves using a bounded buffer,
which is a fixed-size buffer that can hold a limited number of items. The producer
process adds items to the buffer, while the consumer process removes items from the
buffer. When the buffer is full, the producer process waits until space becomes available.
When the buffer is empty, the consumer process waits until data becomes available. This
approach ensures that the producer and consumer processes do not interfere with each
other and that data is produced and consumed correctly.
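This bounded-buffer scheme is classically implemented with three synchronization objects, sketched below (the buffer capacity and item count are illustrative): `empty` counts free slots, `full` counts filled slots, and `mutex` protects the buffer itself.

```python
import threading
from collections import deque

CAPACITY, ITEMS = 4, 20
buffer = deque()
consumed = []
empty = threading.Semaphore(CAPACITY)   # free slots remaining
full = threading.Semaphore(0)           # filled slots available
mutex = threading.Lock()                # guards the buffer structure

def producer():
    for i in range(ITEMS):
        empty.acquire()          # wait for a free slot (blocks when full)
        with mutex:
            buffer.append(i)
        full.release()           # signal: one more filled slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # wait for a filled slot (blocks when empty)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal: one more free slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed == list(range(ITEMS)))  # True — in order, none lost
```

The acquire order matters: each thread takes its counting semaphore before `mutex`. Taking `mutex` first and then blocking on `empty` or `full` would deadlock, since the other thread could never enter to change the count.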
