OSY Summer 23
b) Difference between Time sharing system and Real time system (any 2 points)
Ans.
Time-Sharing Operating System vs. Real-Time Operating System:
1. In a time-sharing operating system, a quick response to each request is emphasized. In a real-time operating system, completing computation tasks before their deadlines is emphasized.
2. In a time-sharing OS, a switching method/function is available. In a real-time OS, such switching is not available.
3. In a time-sharing OS, modification of the program is possible. In a real-time OS, modification does not take place.
4. In a time-sharing OS, computer resources are shared externally. In a real-time OS, computer resources are not shared externally.
5. A time-sharing OS deals with more than one process or application simultaneously. A real-time OS deals with only one process or application at a time.
6. In a time-sharing OS, the response is provided to the user within a second. In a real-time OS, the response is provided within a strict time constraint.
7. In a time-sharing system, high-priority tasks can be preempted by lower-priority tasks, making it impossible to guarantee a response time for critical applications. Real-time operating systems let users prioritize tasks so that the most critical task can always take control of the processor when needed.
c) State any four services of operating system.
Ans.
Process management: Manages the creation and termination of processes
Memory management: Ensures that programs have enough memory to run
File management: Allows users to create, delete, modify, and organize files and
directories
Security: Protects the system from unauthorized access and threats
Multitasking: Allows multiple applications to run simultaneously
Network management: Manages network connections and communications
g) Define Deadlock.
Ans. Deadlock is a situation in computing where two or more processes are unable to proceed
because each is waiting for another to release resources. The four necessary conditions are
mutual exclusion, hold and wait, no preemption, and circular wait.
Consider two trains approaching each other on a single track: once they are face to face,
neither train can move. This is a practical example of deadlock.
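The circular-wait condition above can be illustrated with a small wait-for graph, in which an edge from P to Q means "P is waiting for a resource held by Q"; a cycle in this graph indicates deadlock. The sketch below is illustrative only and uses hypothetical process names:

```python
# Sketch: detecting the circular-wait condition with a wait-for graph.
# Each edge P -> Q means "process P is waiting for a resource held by Q".
# A cycle in the graph means the processes on the cycle are deadlocked.

def has_cycle(wait_for):
    """Return True if the wait-for graph contains a cycle (deadlock)."""
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in in_stack:                 # back edge -> cycle found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        in_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# Like the two trains: each process holds what the other one needs.
deadlocked = {"P1": ["P2"], "P2": ["P1"]}
safe       = {"P1": ["P2"], "P2": []}
```

Calling `has_cycle(deadlocked)` reports the deadlock, while `has_cycle(safe)` does not, since P2 is not waiting on anyone.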
2. External Fragmentation
External fragmentation occurs when a storage medium, such as a hard disk or solid-state drive,
has many small blocks of free space scattered throughout it. This can happen when a system
creates and deletes files frequently, leaving many small blocks of free space on the medium.
When a system needs to store a new file, it may be unable to find a single contiguous block of
free space large enough to store the file and must instead store the file in multiple smaller blocks.
This can cause external fragmentation and performance problems when accessing the file.
Fragmentation can also occur at various levels within a system. File fragmentation, for example,
can occur at the file system level, in which a file is divided into multiple non-contiguous blocks
and stored on a storage medium. Memory fragmentation can occur at the memory management
level, where the system allocates and deallocates memory blocks dynamically. Network
fragmentation occurs when a packet of data is divided into smaller fragments for transmission
over a network.
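The key symptom described above — plenty of free space in total, but no single contiguous run large enough for a new file — can be shown with a toy free-block map (an illustrative sketch, not part of the original answer):

```python
# Sketch: external fragmentation. Total free space is large enough for the
# new file, but no single contiguous run of free blocks can hold it.

def largest_free_run(disk):
    """Length of the longest contiguous run of free (0) blocks."""
    best = run = 0
    for block in disk:
        run = run + 1 if block == 0 else 0
        best = max(best, run)
    return best

# 1 = allocated block, 0 = free block, scattered by repeated create/delete:
disk = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]

total_free = disk.count(0)          # 6 blocks are free in total...
largest    = largest_free_run(disk) # ...but at most 2 are contiguous
```

A 5-block file cannot be stored contiguously here even though 6 blocks are free, so it must be split across several smaller runs.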
2. Advantages of Multiprocessor OS
Multiprocessor operating systems offer a range of advantages, particularly in performance,
scalability, and reliability. Here are some of the key benefits:
a. Increased Performance and Throughput
Parallelism: In a multiprocessor system, tasks are executed in parallel, which
significantly increases the speed of computation and system throughput. Multiple
processes or threads can run at the same time on different processors.
Faster Execution of Multithreaded Applications: Applications that are designed to run
in parallel (multithreaded applications) benefit greatly from multiprocessor systems.
Threads from the same application can be distributed across processors, leading to faster
completion times.
Reduced Waiting Time: Since multiple processors can handle different tasks
simultaneously, the overall waiting time for executing processes is reduced. This is
especially useful for high-performance computing tasks like scientific simulations,
machine learning, and data analysis.
b. Improved Reliability and Fault Tolerance
Redundancy: If one processor fails in a multiprocessor system, the system can continue
to function by redistributing the workload to the remaining processors. This makes the
system more reliable and fault-tolerant compared to single-processor systems.
Graceful Degradation: In case of hardware failure or overload, the system doesn't crash
entirely. Instead, it slows down as the workload is distributed across fewer processors,
providing better fault tolerance.
Load Sharing: Tasks can be distributed among available processors, ensuring that no
single processor is overburdened, which enhances system stability.
c. Scalability
Scalable Performance: One of the biggest advantages of multiprocessor systems is their
ability to scale performance by adding more processors. If the system needs to handle
more workload, additional CPUs can be added to improve performance.
Support for Large Applications: Multiprocessor OSs can efficiently support large
applications that require significant computational power. For instance, applications in
fields like weather forecasting, physics simulations, or complex financial modeling
benefit from the additional computing resources.
d. Efficient Resource Utilization
Resource Sharing: In a multiprocessor system, CPUs share memory, I/O devices, and
other system resources, which reduces idle time for hardware components. This leads to
better resource utilization compared to single-processor systems, where resources may
sit idle while waiting for the CPU.
Cost Efficiency: A multiprocessor system can be more cost-effective in high-
performance environments because a single system with multiple processors may be less
expensive and more efficient than maintaining multiple independent systems.
e. Faster Response Time
Reduced Latency: Multiprocessor systems provide faster response times for interactive
applications, especially when there are multiple tasks or users. Each processor can handle
a different request, resulting in quicker responses.
f. Support for Multiuser Environments
Concurrent User Support: Multiprocessor OSs can better handle multiple users
simultaneously. Each processor can work on a different user’s process, ensuring that each
user experiences less delay or slowdown, which is critical for servers and shared
computing environments.
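The idea of distributing independent tasks across several workers, as a multiprocessor OS distributes processes across CPUs, can be sketched with Python's standard `concurrent.futures` pool. This is only a structural illustration: CPython threads show the task-distribution pattern, while true CPU parallelism would need separate processes or processors.

```python
# Sketch: distributing independent tasks across a pool of workers, the way
# a multiprocessor OS distributes processes across CPUs. (Threads illustrate
# the structure; real CPU parallelism would use separate processes/CPUs.)
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n * n          # stand-in for an independent unit of work

with ThreadPoolExecutor(max_workers=4) as pool:   # "four processors"
    results = list(pool.map(task, range(8)))      # tasks run concurrently
```

Each worker picks up the next pending task, so no single worker is overburdened, mirroring the load-sharing point above.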
Advantages
The main advantage is that different users can have files with the same name, which is
very helpful when there are multiple users.
Security is provided, since one user cannot access another user's files.
Searching for files is easy in this directory structure.
New: When a process enters the system, it is in the new state. In this
state the process is created and resides in the job pool.
Ready: When the process is loaded into main memory, it is ready for
execution. In this state the process waits for the processor to be
allocated.
Running: When the CPU is available, the system selects one process from
main memory and executes its instructions; a process in execution is in
the running state. On a single-processor system, only one process can be
in the running state at a time, while a multiprocessor system can have
several processes running simultaneously.
Waiting State: When a process is in execution, it may request for I/O
resources. If the resource is not available, process goes into the
waiting state. When the resource is available, the process goes back to
ready state.
Terminated State:
When the process completes its execution, it goes into the terminated
state. In this state the memory occupied by the process is released.
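The five states and the transitions described above form a small state machine, sketched below. The running-to-ready transition (preemption) is included as in the standard five-state model, though the text above does not describe it explicitly:

```python
# Sketch: the five process states and the legal transitions between them.
TRANSITIONS = {
    "new":        {"ready"},                 # loaded into main memory
    "ready":      {"running"},               # CPU allocated by the scheduler
    "running":    {"ready",                  # preempted (standard model)
                   "waiting",                # requested unavailable I/O
                   "terminated"},            # finished execution
    "waiting":    {"ready"},                 # requested resource now available
    "terminated": set(),                     # memory released, no way out
}

def can_move(src, dst):
    """True if a process may move directly from state src to state dst."""
    return dst in TRANSITIONS[src]
```

Note that a waiting process never goes straight back to running; it must re-enter the ready state and be scheduled again.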
Operations on the Process
1. Creation
Once the process is created, it enters the ready queue (in main memory) and waits to be
executed.
2. Scheduling
From the many processes present in the ready queue, the operating system chooses one
and starts executing it. Selecting the process to be executed next is known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process
may move to the blocked or waiting state during execution; in that case the processor
starts executing other processes.
4. Deletion/killing
Once the process has served its purpose, the OS kills it. The context of the process (its
PCB) is deleted and the process is terminated by the operating system.
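The creation, scheduling, execution, and deletion cycle above can be sketched with a simple first-come-first-served ready queue. The process names and the FCFS policy here are illustrative assumptions, not something the answer specifies:

```python
# Sketch: create -> schedule -> execute -> terminate, using a simple
# FCFS (first-come-first-served) ready queue. Names are illustrative.
from collections import deque

ready_queue = deque()

def create(process):
    """1. Creation: the new process enters the ready queue."""
    ready_queue.append(process)

def schedule():
    """2. Scheduling: pick the next process to execute (FCFS)."""
    return ready_queue.popleft() if ready_queue else None

create("P1"); create("P2"); create("P3")
first = schedule()      # 3. Execution would now begin on this process;
                        # 4. on completion, its PCB would be deleted.
```

With FCFS, the process that entered the ready queue first is always picked first.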
Virtual memory is a method of using hard disk space to provide extra memory; it simulates
additional main memory. In the Windows operating system, the amount of virtual memory
available equals the amount of free main memory plus the amount of disk space allocated to the
swap file. Fig. 4.8.1 shows a logical view of the virtual memory concept. A swap file is an area
of the hard disk set aside for virtual memory; swap files can be either temporary or permanent.
Virtual memory is stored on a secondary storage device. It helps extend memory capacity and
works with primary memory to load applications, reducing the cost of expanding physical
memory.
The implementation of virtual memory differs from operating system to operating system.
Each process address space is partitioned into parts that can be loaded into primary memory
when they are needed and written back to secondary storage otherwise. Address space partitions
are used for the code, data, and stack identified by the compiler and relocation hardware.
The portion of the process that is actually in main memory at any time is called the resident
set of the process. The logically addressable space is referred to as virtual memory; the
virtual address space is much larger than the physical primary memory of a computer system.
Virtual memory works with the help of a secondary storage device, whose speed is low
compared to physical memory.
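The core mechanism behind virtual memory — loading a page from secondary storage only when it is first referenced, and evicting an old page when no frame is free — can be sketched as a small demand-paging simulation. The FIFO replacement policy here is an assumption chosen for simplicity:

```python
# Sketch: demand paging. Pages are fetched from secondary storage only on
# first reference (a page fault); with a bounded number of frames, an old
# page must be evicted (FIFO replacement is assumed here for simplicity).
from collections import deque

def count_page_faults(references, num_frames):
    frames, faults = deque(), 0
    for page in references:
        if page not in frames:          # page fault: fetch from disk
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # evict the oldest page (FIFO)
            frames.append(page)
    return faults

faults = count_page_faults([1, 2, 3, 1, 4, 1, 2], num_frames=3)
```

Every fault involves the slow secondary storage device, which is why a process's resident set should hold its actively used pages.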
5. Attempt any TWO of the following: 12
a) Explain the working of interprocess communication considering
i) Shared memory
ii) Message passing
Ans.
i) Shared memory: The operating system establishes a region of memory that cooperating
processes attach to their own address spaces. The processes then communicate by reading
and writing data in this shared region; the OS is involved only in setting up the region,
and the processes themselves are responsible for synchronizing access to it.
ii) Message passing: Processes communicate by exchanging messages through the kernel,
using send(message) and receive(message) operations, without sharing an address space.
This is useful for smaller amounts of data and for processes on different computers.

1. Contiguous Allocation
Advantages:
Both sequential and direct access are supported. For direct access, the address of the
kth block of a file that starts at block b can easily be obtained as (b + k).
This is extremely fast, since the number of seeks is minimal thanks to the contiguous
allocation of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation. This makes it
inefficient in terms of memory utilization.
Increasing file size is difficult because it depends on the availability of contiguous
memory at a particular instance.
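The (b + k) direct-access rule above amounts to a single addition, which is why contiguous allocation is so fast. The block numbers below are illustrative:

```python
# Sketch: direct access under contiguous allocation. If a file starts at
# block b, its k-th block is simply b + k: one addition, no traversal.
def contiguous_block_address(b, k):
    """Disk address of the k-th block of a file starting at block b."""
    return b + k

addr = contiguous_block_address(14, 3)   # 3rd block of a file at block 14
```

Compare this with linked allocation, where reaching block k requires following k pointers.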
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.
The file ‘jeep’ in the following figure shows how the blocks can be randomly distributed.
The last block (25) contains -1, indicating a null pointer; it does not point to any other
block.
Advantages:
This is very flexible in terms of file size. File size can be increased easily since the
system does not have to look for a contiguous chunk of memory.
This method does not suffer from external fragmentation. This makes it relatively better
in terms of memory utilization.
Disadvantages:
Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
It does not support random or direct access. We cannot directly access the blocks of a
file: block k of a file can only be reached by traversing k blocks sequentially from the
starting block of the file via block pointers.
Pointers required in the linked allocation incur some extra overhead.
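The sequential-access limitation can be sketched directly: reaching block k means following k pointers, one seek at a time. The block chain below mirrors the ‘jeep’ example, where the last block (25) holds -1:

```python
# Sketch: linked allocation. next_block maps each disk block to the next
# block of the file (-1 marks the end, as in the 'jeep' example above).
def kth_block(start, k, next_block):
    """Reach block k by following k pointers from the starting block."""
    block = start
    for _ in range(k):
        block = next_block[block]       # one disk seek per pointer followed
        if block == -1:
            raise IndexError("file has fewer than k+1 blocks")
    return block

# Illustrative chain: 9 -> 16 -> 1 -> 10 -> 25 (end)
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}
third = kth_block(9, 3, next_block)
```

Each step is a separate disk access, which is exactly why linked allocation needs many seeks and cannot offer O(1) direct access.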
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks
occupied by a file. Each file has its own index block. The ith entry in the index block contains
the disk address of the ith file block. The directory entry contains the address of the index block
as shown in the image:
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
The pointer overhead for indexed allocation is greater than linked allocation.
For very small files, say files that span only 2-3 blocks, indexed allocation keeps one
entire block (the index block) for pointers, which is inefficient in terms of memory
utilization. In linked allocation, by contrast, we lose the space of only one pointer per
block.
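The direct-access property of indexed allocation is a single table lookup: the i-th entry of the index block gives the disk address of the i-th file block. The addresses below are illustrative:

```python
# Sketch: indexed allocation. The index block holds one pointer per file
# block, so the i-th file block is found with a single O(1) lookup.
index_block = [19, 28, 33, 40]   # illustrative disk addresses of the file

def ith_block(index_block, i):
    """Disk address of the i-th block of the file: one lookup, no traversal."""
    return index_block[i]

addr = ith_block(index_block, 2)
```

This is why indexed allocation combines direct access (like contiguous allocation) with freedom from external fragmentation (like linked allocation), at the cost of one index block per file.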