OS
Q.(1) Define a real-time operating system. What factors need to be considered to
determine the degree of multiprogramming in a system?
A real-time operating system (RTOS) is an operating system that guarantees a
response to inputs within fixed time constraints (deadlines), so the correctness
of a computation depends on when the result is produced as well as on the result
itself. Hard real-time systems must never miss a deadline, while soft real-time
systems can tolerate occasional misses. The degree of multiprogramming, i.e. the
number of processes kept in memory at once, depends on factors including:
Context Switching Overhead: Frequent context switching can degrade performance. The
overhead associated with saving and loading process states must be balanced with
the need for concurrent execution.
User Requirements: The expectations for response time and throughput from the end-
users can influence how many processes should be running simultaneously without
compromising system performance.
a. Layered System:
A Layered Operating System architecture organizes the OS into distinct layers, each
with specific functionalities and responsibilities. The top layer provides user
interfaces and application support, while lower layers handle hardware interactions
and system-level operations. This modular structure promotes simplicity and
maintainability, as each layer can be developed and updated independently. For
example, in a typical layered OS, the user interface might interact with a
middleware layer that communicates with hardware drivers. This separation allows
for easier debugging and enhances security since lower layers can be protected from
direct user access, limiting potential vulnerabilities.
b. Virtual Machine:
A Virtual Machine (VM) is an abstraction that enables the execution of multiple
operating systems or instances of an OS on a single physical hardware system. The
hypervisor or virtual machine monitor (VMM) manages these VMs, allocating resources
and isolating their operations. Each VM operates as if it were a separate physical
machine, allowing for efficient resource utilization and flexibility. This
structure is particularly useful for server consolidation, testing different OS
versions, and creating isolated environments for software development. VMs enhance
portability and scalability, enabling applications to run in diverse environments
without modification.
c. Client-Server System:
A Client-Server System architecture separates service providers (servers) from
service requesters (clients). In this model, clients initiate requests for
resources or services, which servers fulfill. This structure is highly scalable,
allowing multiple clients to connect to a single server or a group of servers,
facilitating load balancing and resource management. The client-server model is
widely used in network applications, where clients may be personal computers or
mobile devices, and servers can host databases, applications, or web services. This
separation enhances security, as servers can implement centralized controls, and
promotes modular development, enabling easier updates and maintenance.
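As an illustration of this model, here is a minimal POSIX TCP server sketch in C;
the port number 9090 is an arbitrary choice for the example, and error handling is
omitted for brevity.

    #include <arpa/inet.h>      /* htons, htonl */
    #include <netinet/in.h>     /* struct sockaddr_in */
    #include <string.h>         /* strlen */
    #include <sys/socket.h>     /* socket, bind, listen, accept */
    #include <unistd.h>         /* write, close */

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);  /* listening socket */
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* any local interface */
        addr.sin_port = htons(9090);                /* arbitrary example port */

        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 5);                             /* queue up to 5 clients */

        for (;;) {
            int client = accept(srv, NULL, NULL);   /* wait for a request */
            const char *msg = "hello from the server\n";
            write(client, msg, strlen(msg));        /* fulfil the request */
            close(client);                          /* done with this client */
        }
    }

A client simply connects to the server's address and port and reads the reply;
many independent clients can be served by one such server.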
System calls are the programming interface through which user programs interact
with the operating system's core services. They allow
applications to request services from the OS, such as file operations, process
management, and communication. They act as a bridge between user applications and
the operating system, facilitating actions like opening files, creating processes,
or sending network requests.
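As a concrete illustration, here is a minimal C program that invokes the POSIX
system calls open, read, and close directly; the filename example.txt is just a
placeholder.

    #include <fcntl.h>      /* open */
    #include <stdio.h>      /* printf, perror */
    #include <unistd.h>     /* read, close */

    int main(void) {
        char buf[128];
        /* open() traps into the kernel and returns a file descriptor */
        int fd = open("example.txt", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* read() asks the OS to copy up to sizeof(buf)-1 bytes into buf */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("Read %zd bytes: %s\n", n, buf);
        }
        close(fd);          /* release the descriptor back to the OS */
        return 0;
    }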
Assignment II
Q.(1) How would you apply a scheduling policy to each of the following cases?
Explain your reasons for choosing them:
a. The processes arrive at large time intervals
b. The system’s efficiency is measured by the percentage of jobs completed.
c. All the processes take almost equal amounts of time to complete
a. The processes arrive at large time intervals:
Scheduling Policy: First-Come, First-Served (FCFS)
Reasoning: Since processes arrive at large intervals, the system can afford a
simple, straightforward algorithm like FCFS. Processes are executed in the order
they arrive, and because arrivals are widely spaced, a process rarely has to wait
behind another, so context-switching overhead stays low. FCFS also keeps the
scheduler itself simple and efficient for such sporadic arrivals (see the sketch
after case c).
b. The system's efficiency is measured by the percentage of jobs completed:
Scheduling Policy: Shortest Job First (SJF)
Reasoning: When throughput (the number of jobs completed per unit time) is the
measure of success, completing the shortest jobs first maximizes how many jobs
finish in any given interval. SJF also minimizes average waiting time, since
short jobs are not held up behind long ones.
c. All the processes take almost equal amounts of time to complete:
Scheduling Policy: First-Come, First-Served (FCFS)
Reasoning: When burst times are nearly identical, preemptive policies such as
Round Robin add context-switching overhead without improving fairness, and SJF
degenerates to arrival order anyway. FCFS produces essentially the same schedule
with the least overhead.
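As a concrete check on cases (a) and (c), the following small C sketch computes
FCFS waiting times for three hypothetical processes that all arrive at time 0;
the burst times are made-up illustrative values.

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};       /* hypothetical CPU burst times */
        int n = sizeof(burst) / sizeof(burst[0]);
        int wait = 0, total_wait = 0;

        /* Under FCFS, each process waits for the sum of all earlier bursts */
        for (int i = 0; i < n; i++) {
            printf("P%d waits %d units\n", i + 1, wait);
            total_wait += wait;
            wait += burst[i];
        }
        printf("Average waiting time: %.2f units\n", (double)total_wait / n);
        return 0;
    }

With these numbers the average waiting time is (0 + 24 + 27) / 3 = 17 units; note
how one long job at the front inflates the average, which is why SJF is preferred
when throughput is what is measured (case b).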
WAIT Operation:
The WAIT operation (also known as P or down operation) is employed when a process
or thread wants to access a resource that may not be immediately available. When a
process invokes WAIT on a semaphore (a synchronization variable), it checks the
semaphore's value. If the value is greater than zero, the process decrements the
semaphore (indicating that a resource is being used) and continues execution. If
the value is zero, the process is blocked and placed in a waiting queue,
effectively yielding control until the resource becomes available. This operation
ensures that processes only access resources when they are free, preventing
conflicts.
SIGNAL Operation:
The SIGNAL operation (also known as V or up operation) is used to indicate that a
resource has been released or is now available. When a process completes its use of
a resource, it invokes SIGNAL on the corresponding semaphore. This operation
increments the semaphore's value, and if any processes are waiting in the queue,
one of them is unblocked and allowed to proceed. SIGNAL thus serves to notify
waiting processes that they can now access the resource.
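The following C-style sketch summarizes these semantics. It is illustrative
pseudocode rather than a real kernel API: queue_t, block_on, and wakeup_one stand
in for the kernel's actual queue and blocking primitives, and in real code the
body of each operation must execute atomically.

    typedef struct {
        int value;               /* number of available resource units */
        queue_t waiting;         /* processes blocked on this semaphore */
    } semaphore;

    void wait(semaphore *s) {            /* the P / down operation */
        while (s->value == 0)
            block_on(&s->waiting);       /* sleep until a SIGNAL arrives */
        s->value--;                      /* claim one unit of the resource */
    }

    void signal(semaphore *s) {          /* the V / up operation */
        s->value++;                      /* release one unit */
        wakeup_one(&s->waiting);         /* unblock one waiter, if any */
    }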
Q.(3) What is a critical section? Give solution to the critical section problem
using semaphores.
A critical section is a segment of code in a multi-threaded or multi-process
environment where shared resources (such as variables or data structures) are
accessed and manipulated. The critical section problem arises when multiple
processes or threads attempt to enter their critical sections simultaneously,
potentially leading to data inconsistency and unpredictable behavior.
Entering the Critical Section: Before entering its critical section, each process
executes a WAIT on a shared semaphore (commonly named mutex and initialized to 1).
If the semaphore value is greater than 0, the process decrements it and enters the
critical section; if the value is 0, the process is blocked and waits.
Exiting the Critical Section: Once a process finishes executing its critical
section, it executes a SIGNAL on the semaphore. This operation releases the
semaphore, allowing one of the waiting processes to enter its critical section.
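A minimal runnable sketch of this protocol using POSIX semaphores and two threads
follows (compile with -pthread; the loop count of 100000 is arbitrary). Without
the wait/signal pair, the two threads would race on the shared counter.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;            /* binary semaphore guarding the shared counter */
    long counter = 0;       /* shared resource */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);   /* WAIT: enter the critical section */
            counter++;          /* critical section: update shared data */
            sem_post(&mutex);   /* SIGNAL: leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);     /* initial value 1 => mutual exclusion */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        sem_destroy(&mutex);
        return 0;
    }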
Assignment III
Q.(1) What is fragmentation? What are its types? How are they minimized in
different memory management schemes?
Fragmentation refers to the inefficient use of memory space that occurs when free
memory is divided into small, non-contiguous blocks. It can lead to reduced
available memory for new processes, ultimately impacting system performance.
Fragmentation is typically categorized into two types:
Types of Fragmentation:
Internal Fragmentation: This occurs when allocated memory blocks are larger than
the requested memory. For example, if a process requests 20 KB but is allocated 32
KB, the remaining 12 KB is wasted, leading to internal fragmentation.
External Fragmentation: This happens when free memory is split into small,
scattered blocks that are not usable for larger processes. Even if there is enough
total free memory, the non-contiguous nature prevents allocation for processes that
require larger contiguous blocks.
Minimizing Fragmentation:
Segmentation: This memory management scheme divides memory into segments based on
logical divisions (e.g., functions, arrays). By using segments, internal
fragmentation is reduced as memory allocation can be more closely aligned with the
actual size of data structures.
Paging: This method breaks physical memory into fixed-size frames and divides
logical memory into pages of the same size. Paging eliminates external
fragmentation, since any free frame can be used regardless of its location,
though it can still cause a small amount of internal fragmentation in a process's
final, partially filled page.
Buddy System: This approach allocates memory in blocks whose sizes are powers of
two. Rounding requests up to the next power of two introduces some internal
fragmentation, but it allows free blocks to be split and merged (coalesced) with
their "buddies" very efficiently, which combats external fragmentation.
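The buddy system's rounding behaviour can be made concrete with a small C sketch;
the request sizes below are arbitrary, and the 20 KB case reproduces the
internal-fragmentation example given above.

    #include <stdio.h>

    /* Round a request up to the next power of two, as a buddy allocator does */
    static unsigned next_pow2(unsigned n) {
        unsigned p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    int main(void) {
        unsigned requests[] = {20, 33, 64, 100};    /* request sizes in KB */
        int n = sizeof(requests) / sizeof(requests[0]);
        for (int i = 0; i < n; i++) {
            unsigned alloc = next_pow2(requests[i]);
            printf("request %3u KB -> block %3u KB, internal fragmentation %u KB\n",
                   requests[i], alloc, alloc - requests[i]);
        }
        return 0;
    }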
Demand Paging and Demand Segmentation are memory management techniques used in
operating systems to optimize the use of physical memory while managing large
processes efficiently.
Demand Paging:
Demand Paging is a memory management scheme where pages of a process are loaded
into memory only when they are needed (i.e., when a page fault occurs). This allows
the operating system to keep physical memory usage low by loading only those pages
that are actively used, rather than the entire process. When a process tries to
access a page that is not currently in memory, a page fault is triggered, and the
required page is loaded from secondary storage (usually a disk) into physical
memory. This approach helps in utilizing memory more efficiently and reduces
loading time, as it only loads necessary pages, leading to improved performance for
large applications.
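A schematic C sketch of this lookup path follows. The page table, frame counter,
and load_page_from_disk stub are simplified stand-ins for real kernel machinery:
there is no page-replacement policy, and free frames are assumed to be plentiful.

    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define NUM_PAGES 16

    typedef struct {
        int valid;      /* 1 if the page is resident in physical memory */
        int frame;      /* frame number, meaningful only when valid */
    } pte_t;

    static pte_t page_table[NUM_PAGES];
    static int next_free_frame = 0;

    /* Stand-in for fetching a page from secondary storage */
    static int load_page_from_disk(unsigned page) {
        printf("page fault on page %u: loading from disk\n", page);
        return next_free_frame++;           /* hand out the next free frame */
    }

    /* Translate a virtual address, loading its page only on first use */
    static unsigned translate(unsigned vaddr) {
        unsigned page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;
        if (!page_table[page].valid) {      /* not resident: demand-load it */
            page_table[page].frame = load_page_from_disk(page);
            page_table[page].valid = 1;
        }
        return page_table[page].frame * PAGE_SIZE + offset;
    }

    int main(void) {
        unsigned addrs[] = {0x0123, 0x1123, 0x0456};  /* 0x0456 re-hits page 0 */
        for (int i = 0; i < 3; i++)
            printf("vaddr 0x%04x -> paddr 0x%04x\n", addrs[i], translate(addrs[i]));
        return 0;
    }

Only the first access to each page triggers a fault; the third access above finds
page 0 already resident and is translated without touching the disk.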
Demand Segmentation:
Demand Segmentation operates similarly but is based on logical segments rather than
fixed-size pages. In this method, segments of a process (e.g., functions, arrays,
or data structures) are loaded into memory only when required. Each segment can be
of varying sizes, and when a segment fault occurs (when a segment is not in
memory), the operating system loads the needed segment from disk to memory. Demand
segmentation provides a more logical view of memory allocation and can improve the
organization of processes, as it allows for loading only the required segments.
Q.(3) Give the difference between physical and logical addresses. Also show how
logical addresses are translated into physical addresses.
Physical Address:
Definition: A physical address refers to an actual location in the computer’s
physical memory (RAM). It is the address that the memory unit recognizes and uses
to access data.
Context: Physical addresses are used by the memory management unit (MMU) to access
the memory hardware directly.
Example: If the physical memory has a size of 4 GB, addresses range from 0x00000000
to 0xFFFFFFFF.
Logical Address:
Definition: A logical address, also known as a virtual address, is generated by the
CPU during program execution. It represents an address within a process's logical
address space, which may not correspond directly to a physical memory address.
Context: Logical addresses allow processes to use memory without knowing the actual
physical address. This abstraction enables easier memory allocation and process
isolation.
Example: In a system with virtual memory, a process might use a logical address of
0x00001234, which the MMU translates into a physical address.
Translation from Logical to Physical Address:
The translation from logical to physical addresses is managed by the MMU using a
page table in the case of paging systems. When a program accesses a logical
address, the MMU checks the page table to find the corresponding physical address.
For example, if the page containing a logical address is resident in some frame
of physical memory, the MMU combines that frame's base address with the page
offset to form the physical address, as the worked example below shows.
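Worked example (illustrative numbers): assume a page size of 4 KB (0x1000 bytes)
and the logical address 0x00001234 mentioned above. Then:
    page number = 0x1234 / 0x1000 = 1
    offset      = 0x1234 mod 0x1000 = 0x234
If the page table maps page 1 to frame 5, the physical address is
    5 * 0x1000 + 0x234 = 0x5234.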
Assignment IV
Q.(1) How would you compare different types of disk scheduling algorithms?
Disk scheduling algorithms manage the order in which read and write requests are
serviced on a disk, with the goal of minimizing head movement (seek time) and
request waiting time. The common algorithms compare as follows:
FCFS (First-Come, First-Served): Services requests strictly in arrival order.
Simple and starvation-free, but makes no attempt to minimize head movement, so
total seek time is often poor.
SSTF (Shortest Seek Time First): Always services the pending request closest to
the current head position. Substantially reduces total head movement compared
with FCFS, but requests far from the head can be starved.
SCAN (Elevator): The head sweeps from one end of the disk to the other, servicing
requests along the way, then reverses direction. Gives more uniform waiting times
than SSTF while keeping seek time low.
C-SCAN (Circular SCAN): Like SCAN, but requests are serviced in one direction
only; at the end, the head returns to the beginning without servicing requests,
giving still more uniform waiting times.
LOOK / C-LOOK: Variants of SCAN and C-SCAN in which the head reverses (or jumps
back) at the last pending request rather than at the physical end of the disk,
avoiding unnecessary travel.
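To make the comparison concrete, this small C sketch computes the total head
movement of FCFS and SSTF over one hypothetical request queue; the cylinder
numbers and the starting head position of 53 are illustrative values only.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 8

    /* Total head movement when servicing requests in arrival order (FCFS) */
    static int fcfs_movement(const int *req, int n, int head) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += abs(req[i] - head);
            head = req[i];
        }
        return total;
    }

    /* Total head movement when always picking the closest request (SSTF) */
    static int sstf_movement(const int *req, int n, int head) {
        int total = 0, done[N] = {0};
        for (int served = 0; served < n; served++) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (!done[i] &&
                    (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                    best = i;
            done[best] = 1;
            total += abs(req[best] - head);
            head = req[best];
        }
        return total;
    }

    int main(void) {
        int req[N] = {98, 183, 37, 122, 14, 124, 65, 67};
        printf("FCFS head movement: %d cylinders\n", fcfs_movement(req, N, 53));
        printf("SSTF head movement: %d cylinders\n", sstf_movement(req, N, 53));
        return 0;
    }

For this queue, FCFS moves the head 640 cylinders while SSTF moves it only 236,
illustrating why seek-aware algorithms dominate FCFS in practice.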
Separately, the main file concepts that a file system manages are:
1. File Naming:
Description: Each file in a file system has a unique name that can include various
characters, depending on the file system's rules.
Significance: Allows users to easily identify and access files.
2. File Structure:
Description: Refers to how data is organized within a file, which can vary (e.g.,
plain text, binary, or structured formats like JSON).
Significance: Impacts how data is read, written, and processed by applications.
3. File Permissions:
Description: Defines who can read, write, or execute a file. Permissions can be set
for different user roles (owner, group, others).
Significance: Ensures security and access control over files.
4. File Types:
Description: Files can be categorized by type (e.g., text, image, executable) and
are often identified by their extensions (e.g., .txt, .jpg).
Significance: Helps the operating system determine how to handle and execute files.
5. File Metadata:
Description: Additional information about files, including size, creation date,
modification date, and access time.
Significance: Useful for managing files and optimizing performance, as well as for
backup and recovery purposes.
6. Directory Structure:
Description: Organizes files into directories (or folders), allowing for
hierarchical storage and easy navigation.
Significance: Enhances user experience by providing a logical way to access files.
Assignment V
Clock synchronization in a distributed system is important for several reasons:
Consistency: Ensures that all nodes have a consistent view of shared data,
preventing conflicts and anomalies.
Coordination: Allows processes to communicate effectively, especially when they
rely on the same resources or data.
Order of Operations: Guarantees that operations occur in a defined order, which is
vital for processes that depend on the outcomes of others.
Clock Synchronization Algorithm: Berkeley Algorithm
One widely used algorithm for clock synchronization in distributed systems is the
Berkeley Algorithm. One machine is designated as the coordinator (master), and
the algorithm proceeds as follows:
Time Request: The coordinator periodically polls all other processes to request
their local clock times.
Time Collection: Each process responds with its current time. The coordinator
collects these timestamps.
Calculate Average Time: The coordinator computes the average time based on the
received timestamps. It may discard extreme values to reduce the impact of
outliers.
Time Adjustment: The coordinator sends messages back to each process with the
difference between the average time and the process's local time. Each process then
adjusts its clock accordingly.
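A minimal C sketch of the coordinator's arithmetic follows; the polled clock
readings are made-up values, and a real implementation would also compensate for
message transmission delays when collecting them.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical clock readings collected by the coordinator, in
           seconds; index 0 is the coordinator's own clock. */
        double times[] = {3000.0, 3010.0, 2990.0, 3025.0};
        int n = sizeof(times) / sizeof(times[0]);

        /* Average the readings (real implementations may discard outliers) */
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += times[i];
        double avg = sum / n;

        /* Send each process the signed adjustment for its local clock */
        for (int i = 0; i < n; i++)
            printf("process %d: adjust clock by %+.2f s\n", i, avg - times[i]);
        return 0;
    }

Here the average is 3006.25 s, so process 1 slows its clock by 3.75 s while
process 2 advances by 16.25 s; sending differences rather than absolute times
keeps the adjustments meaningful despite network delay.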
Q.(2) How would you compare Network, Distributed & Multiprocessor Operating
Systems?