DSE 3153 26 Sep 2023


B.Tech Vth Semester Mid-Term Examination September 2023

ANSWER SCHEME FOR DSE 3153 OPERATING SYSTEMS
Duration: 2 hr    Date: 26-09-2023    Max. Marks: 30

Type: MCQ

Q1. If there are N processes in the ready queue and the time quantum is Q, then each process gets
1/N of the CPU time in chunks of at most Q time units at once. In this case, no process waits more
than ___________ time units. (0.5) -CO2 APPLICATION
1. Q-N
2. (N-1)*Q
3. N*(Q-N)
4. N*Q
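The intended bound, (N-1)*Q, can be sanity-checked with a small round-robin simulation; the values N = 4, Q = 3 and the equal burst lengths below are illustrative assumptions, not part of the question:

```python
# With N processes all ready and quantum Q, measure the longest gap between
# the moment a process stops running and the moment it runs again.
from collections import deque

N, Q = 4, 3
burst = {p: 12 for p in range(N)}       # each process needs 4 full quanta
last_stop = {p: 0 for p in range(N)}    # all ready (and "last ran") at t = 0
max_wait = 0
t = 0
rq = deque(range(N))
while rq:
    p = rq.popleft()
    max_wait = max(max_wait, t - last_stop[p])
    run = min(Q, burst[p])
    t += run
    burst[p] -= run
    last_stop[p] = t
    if burst[p] > 0:
        rq.append(p)

print(max_wait)
```

In the worst case a process waits while the other N-1 processes each consume a full quantum, so the measured maximum equals (N-1)*Q.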

Q2. Consider the following statements about process state transitions for a system using preemptive
scheduling.

S1: A running process can move to a ready state.
S2: A ready process can move to a running state.
S3: A blocked process can move to a running state.
S4: A blocked process can move to a ready state.
Which of the above statements are TRUE? (0.5) -CO2 UNDERSTAND

1. S1, S2 and S3 only
2. S1, S2 and S4 only
3. S1, S2, S3 and S4
4. S2 and S3 only

Q3. Select the correct sequence of steps taken by the processor when an interrupt occurs:
(i) Switch from user mode to kernel mode.
(ii) Set the program counter to the first instruction of the interrupt handling routine.
(iii) Save the current context. (0.5) -CO1 UNDERSTAND
1. (i), (ii), (iii)
2. (i), (iii), (ii)
3. (ii), (i), (iii)
4. (iii), (ii), (i)
Q4. Choose from below the advantages of kernel-level threads:
(i) Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
(ii) Kernel routines can be multithreaded.
(iii) If one thread of a process is blocked then kernel can schedule another thread from the same
process. (0.5) -CO2 UNDERSTAND
1. (i) only
2. (i), (ii) only
3. (i), (ii), (iii)
4. (i), (iii) only
Q5. Which of the following is an appropriate four-state model for a process? (0.5) -CO1
UNDERSTAND

1.–4. [The four options were process state-transition diagrams; the images are not reproduced in this copy.]

Q6. In which of the following multithreading models does the entire process get blocked if a thread makes a blocking system call? (0.5) -CO2 UNDERSTAND
1. One-to-one
2. Many-to-one
3. Many-to-Many
4. Two-level
Q7. Which of the following need not be saved on a context switch between processes? (0.5) -CO1
UNDERSTAND
1. General purpose registers
2. Program counter
3. Translation look-aside buffer
4. Process state information
Q8. Which of the following is/are reason(s) for blocking a running process? (0.5) -CO1 UNDERSTAND
(i) A call from the running program to a procedure that is a part of OS code.
(ii) A running process may initiate an I/O operation.
(iii) A user may block a running process.
1. (i), (ii) only
2. (ii), (iii) only
3. (i), (iii) only
4. (iii) only
Q9. When a process creates a new process using the fork() operation, which of the following is shared between the parent process and the child process? (0.5) -CO1 UNDERSTAND
1. Stack
2. Shared memory Segments
3. Heap
4. CPU registers
Q10. The maximum number of processes that can be in Ready state for a computer system with n
CPUs is: (0.5) -CO2 APPLICATION
1. n
2. n^2
3. 2^n
4. Independent of n

Type: DES

Q11. a. Consider the below table of five processes under multilevel queue scheduling. Q_no denotes the queue of the process; the priority of queue 1 is greater than that of queue 2. Queue 1 uses round robin with a time quantum of two units and queue 2 uses first-come first-served scheduling.
b. Consider that all five processes are in the same queue and it follows preemptive shortest job first scheduling.
c. Consider that all five processes are in the same queue and it follows preemptive priority scheduling.
For questions a, b, and c above, draw the Gantt chart and calculate the average turnaround time.

PID P1 P2 P3 P4 P5
AT 0 1 1.5 0 2
BT 6 4 2 5 8
Q_no 1 1 2 1 2
Priority 3 4 5 2 1

PID: Process ID, AT: Arrival time, BT: Burst time, Q_no: Queue in which the process resides, Priority: a lower number means higher priority. (4) -CO2 APPLICATION

Answer:
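The worked Gantt charts are not reproduced in this copy. As a partial check for part (b) only, a minimal preemptive-SJF (SRTF) simulator is sketched below; tie-breaking by process ID is an assumption of this sketch, not stated in the question. With these rules the average turnaround time comes out to 11.8 units.

```python
# Preemptive SJF (SRTF) over the question's table; times may be fractional.
# procs maps pid -> (arrival time, burst time).
procs = {"P1": (0, 6), "P2": (1, 4), "P3": (1.5, 2), "P4": (0, 5), "P5": (2, 8)}

remaining = {p: bt for p, (at, bt) in procs.items()}
finish = {}
t = 0.0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:
        t = min(procs[p][0] for p in remaining)   # idle until next arrival
        continue
    # pick the shortest remaining time; break ties by pid for determinism
    cur = min(ready, key=lambda p: (remaining[p], p))
    # run until completion or the next arrival, whichever comes first
    arrivals = [procs[p][0] for p in remaining if procs[p][0] > t]
    slice_end = min([t + remaining[cur]] + arrivals)
    remaining[cur] -= slice_end - t
    t = slice_end
    if remaining[cur] <= 1e-9:
        del remaining[cur]
        finish[cur] = t

tat = {p: finish[p] - procs[p][0] for p in procs}
print(sum(tat.values()) / len(tat))   # average turnaround time
```

The same loop can be adapted to parts (a) and (c) by changing only the `cur` selection rule (queue number + round robin, or the priority column).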
Q12. State with clear justifications whether the following statements are true or false (No marks will
be awarded if the justification is not correct).
a) Operation of a time sharing system is identical to operation of a traditional multiprogramming
system executing the same programs if the time slice exceeds the CPU burst of every program.
Answer:
FALSE. If the time slice exceeds the CPU burst, there is no pre-emption; every process voluntarily releases the CPU at the end of its CPU burst. However, the order of execution differs between traditional multiprogramming and time sharing. In time sharing, the programs run cyclically (P1, P2, P3, P4, P1, P2, etc.), whereas traditional multiprogramming gives priority to the program that arrived earlier, so when that program finishes its I/O the CPU is given back to it.
b) The instruction to change the processor from user mode to supervisor mode is a privileged
instruction.
Answer:
FALSE. There is a single instruction (viz. syscall or trap) that can change the processor mode from
user to supervisor. Quite obviously, a user program must be allowed to execute such an instruction,
and hence it cannot be privileged. If you are already in the privileged mode, you do not require any
separate privileged instruction to change the mode from user to privileged.
c) For multithreaded programming, we need to create shared memory segments through which the
threads can access common data.
Answer:
FALSE. Peer threads share global data. So anything they need to share can be placed in the global
data area. There is no need to create shared memory segments for this.
d) Pre-emptive CPU scheduling algorithms can result in shorter average waiting time as compared to
non pre-emptive scheduling algorithms. (4) -CO2,CO1 APPLICATION
Answer:
TRUE. A long-running process can increase the waiting times for all the other processes that are
ready to run. So if we can pre-empt the long process and give the other (shorter) processes to run,
the average waiting time will reduce.
Scheme: 1 mark each for a, b, c and d = 4 marks

Q13. What are the differences between user-level threads and kernel-supported threads? Under what circumstances is one type "better" than the other? Is switching among threads in the same process more efficient than switching among processes? Justify. (3) -CO2 UNDERSTAND
Answer:
User-level threads vs kernel-supported threads:
User-level threads are managed entirely by user-level libraries and the operating system kernel is unaware of their existence, whereas kernel-supported threads are managed directly by the operating system. Kernel threads need not be associated with a process, whereas every user thread belongs to a process. [1 mark]
Under what circumstances is one type "better" than the other?
User-level threads are much faster to switch between, as there is no context switch; further, a
problem-domain-dependent algorithm can be used to schedule among them. CPU-bound tasks with
interdependent computations, or a task that will switch among threads often, might best be handled
by user-level threads.
Kernel-level threads are scheduled by the OS, and each thread can be granted its own timeslices by
the scheduling algorithm. The kernel scheduler can thus make intelligent decisions among threads,
and avoid scheduling processes which consist of entirely idle threads (or I/O bound threads). A task
that has multiple threads that are I/O bound, or that has many threads (and thus will benefit from
the additional timeslices that kernel threads will receive) might best be handled by kernel threads.
[1 mark]
Switching among threads in the same process is more efficient than switching among processes: threads share code, data, information about open files, etc. To switch from one thread to another we need only save the registers (including the stack pointer) and restore those of the thread being resumed, which is fast. Between processes, however, the code, data, heap and stack segments can all differ, so saving and restoring state takes much longer. [1 mark]
Q14. Consider a system with 100 processes in the Ready Queue. If round robin scheduling algorithm
is used with a scheduling overhead of 5 microseconds, what must be the time quantum to ensure
that each process is guaranteed to get its turn at the CPU after 1 second? (3) -CO2 ANALYSIS
Answer:
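The worked answer is missing from this copy. A hedged sketch, assuming "guaranteed its turn after 1 second" means one full cycle of the ready queue — 100 quanta plus 100 context-switch overheads of 5 microseconds each — must fit within 1 second:

```python
# 100 * (q + 5 us) <= 1 s  =>  q <= 1_000_000/100 - 5 microseconds
n = 100               # processes in the ready queue
overhead_us = 5       # scheduling overhead per switch, in microseconds
window_us = 1_000_000 # the 1-second guarantee, in microseconds

q_us = window_us / n - overhead_us
print(q_us)           # maximum time quantum in microseconds
```

Under this reading the time quantum must be at most 9995 microseconds (about 10 ms).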

Q15. Describe the typical return values of the fork() system call and what each value signifies. How
many times does the following C program print "Hello"? Explain.
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 8; i++)
    {
        if (i % 2 == 0) break;
        fork();
    }
    printf("Hello\n");
    return 0;
}
(3) -CO1 APPLICATION

Answer:
The return values are as follows:

In the Parent Process: The return value in the parent process is the process ID (PID) of the newly
created child process. This PID is a positive integer, and it is a unique identifier for the child process.
The parent process can use this PID to track and manage the child process, such as waiting for its
termination using wait() or sending signals to it.

In the Child Process: The return value in the child process is 0 (zero). This value signifies that the
process is the child process.
In Case of Error: If fork() encounters an error and cannot create a new process, it returns -1. This
indicates that the fork operation failed, and an error code can be retrieved using errno to determine
the cause of the failure. It's important to handle the return values of fork() appropriately in your
code to ensure that both the parent and child processes perform the desired tasks and to handle any
potential errors that may occur during process creation.
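The three return-value cases can be demonstrated with a short sketch (Python's `os.fork()` wraps the same POSIX call; this assumes a POSIX system):

```python
# fork() returns 0 in the child, the child's PID in the parent,
# and raises an error (C: returns -1) if the process cannot be created.
import os

pid = os.fork()
if pid == 0:
    os._exit(7)                       # child: fork() returned 0
# parent: fork() returned the child's positive PID,
# which it can use to wait for the child's termination
child, status = os.waitpid(pid, 0)
print(pid > 0, child == pid, os.WEXITSTATUS(status))
```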

"Hello" gets printed only once: on the first iteration i = 0, so the if condition is true and break exits the loop before fork() is ever executed. Only the original process reaches the printf.

Scheme:
First part: each point 0.5 mark
Second part: 1.5 marks

Q16. With the help of a process state diagram, explain the roles of Short-term, Medium-term and
Long-term schedulers in an operating system. Also explain how the Long-term scheduler controls the
degree of multi-programming. (3) -CO1 UNDERSTAND

Answer:

 Short-term scheduler (or CPU scheduler) – selects which process should be executed next
and allocates CPU. Sometimes the only scheduler in a system. Short-term scheduler is
invoked frequently (must be fast).

 Long-term scheduler (or job scheduler) – selects which processes should be brought into
the ready queue. Long-term scheduler is invoked infrequently (may be slow). The long-
term scheduler controls the degree of multiprogramming.
 Medium-term scheduler – handles swapping: removes a process from memory and stores it on disk, then brings it back from disk to continue execution. A medium-term scheduler can be added if the degree of multiprogramming needs to decrease.
The long-term scheduler controls the degree of multiprogramming (the number of processes in
memory). If the degree of multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the system. Thus, the long-term
scheduler may need to be invoked only when a process leaves the system. Because of the longer
interval between executions, the long-term scheduler can afford to take more time to decide
which process should be selected for execution.
Scheme:
 Process state diagram with the schedulers marked [0.5 mark]

 Explanation of role of each scheduler [0.5*3 = 1.5 marks]


 Explanation of LTS controlling the degree of multiprogramming [1 mark]

Q17. Differentiate between the following in the context of Inter Process Communication:
a. Shared Memory and Message Passing
b. Direct and Indirect Communication
c. Synchronous and Asynchronous Message Passing (3) -CO2 UNDERSTAND

Answer:
a. Shared Memory and Message Passing

Shared Memory: In shared memory IPC, processes communicate by accessing a common memory
area. This method requires synchronization mechanisms (like semaphores or mutexes) to ensure
data consistency. It’s faster since data doesn't need to be copied between processes. It can be more
complex to use correctly, especially in large systems or among a large number of processes.

Message Passing: In message passing IPC, processes communicate by sending messages to each
other. It's more controlled and less error-prone compared to shared memory as there's no shared
state to manage. Message passing can be slower than shared memory since data needs to be copied
between process address spaces. It’s often easier to implement and manage, especially in
distributed systems.

b. Direct and Indirect Communication

Direct Communication:
Processes communicate with each other directly using a communication link.
Each process needs to explicitly name the other process it wants to communicate with.
It's simpler but less flexible, as the communicating processes need to be known in advance.

Indirect Communication:
Processes communicate through an intermediary (like a mailbox or a message queue).
Processes don’t need to know each other explicitly; they only need to know the intermediary.
It’s more flexible but can be slightly more complex to manage.
c. Synchronous and Asynchronous Message Passing
Synchronous Message Passing:
In synchronous message passing, sending and receiving processes are blocked until the message is
delivered and acknowledged. It ensures that the message has been received but can lead to delays if
either process is slow.
Asynchronous Message Passing:
In asynchronous message passing, sending and receiving processes are not blocked; the message is
placed in a queue. It's faster and more flexible but can be more complex to manage as processes
may need to handle message delivery failures or delays.
Scheme: 1 mark for each of a,b,c [1*3 = 3 marks]

Q18. What is the advantage of having different time quantum lengths at different levels of a multi-
level queue scheduling system? (2) -CO2 UNDERSTAND
Answer: A process can move between the various queues; aging can be implemented this way.
 Response time improvement
 Reduced context switching overhead
 Adaptive scheduling
 Resource efficiency
Scheme: each point 0.5 mark
