Assignment#1
Bootstrap Loader: This function locates the operating system and initiates the boot process. It
loads the OS into the computer's main memory (RAM) from the hard drive or another storage
device.
BIOS Setup Utility: This provides a configuration interface where users can set hardware
configurations, system time, boot order, and other settings. It allows for system customization
and optimization.
System Management: BIOS contains low-level software that controls various hardware functions,
such as the display screen, keyboard, and disk drives. It facilitates communication between the
operating system and the hardware components.
Q2: Explain the working of a batch operating system. Also write its advantages
and disadvantages.
Ans: A batch operating system executes a series of jobs (or batches) without user interaction
during their execution. The main steps in the working of a batch operating system are:
Job Submission: Users submit jobs to a central location, typically on punched cards or magnetic tape.
Job Scheduling: The system's scheduler arranges jobs in a queue based on criteria like priority or
arrival time.
Job Execution: Jobs are executed sequentially without user interaction. The system loads the first
job, executes it, and then proceeds to the next job.
Output Handling: After execution, the system collects the output (results) and sends them to a
specified output device or location, such as a printer or disk storage.
Advantages of Batch OS
High Throughput: By processing jobs in batches, the system can handle a large volume of jobs
quickly, increasing overall throughput.
Reduced Setup Time: Grouping jobs reduces the overhead of setting up and tearing down each
job individually.
Disadvantages of Batch OS
Debugging Difficulties: Identifying and fixing errors can be challenging because jobs are
processed without user intervention, and errors may only be detected after a job completes.
Time Delays: Jobs may experience delays if they have to wait for their turn in the queue, leading
to longer turnaround times for some tasks.
Rigid Scheduling: The predefined scheduling can lead to inefficiencies if job priorities change or if
certain jobs require immediate attention.
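To make the steps above concrete, here is a small simplified C sketch of a batch queue processed
first-come, first-served; the job names and run times are made up purely for illustration:

#include <stdio.h>

/* a toy batch job: burst = run time in time units */
struct job { const char *name; int burst; };

int main(void)
{
    /* jobs submitted ahead of time; the "scheduler" here is simple arrival order */
    struct job queue[] = { {"payroll", 4}, {"report", 2}, {"backup", 6} };
    int n = sizeof queue / sizeof queue[0];

    int clock = 0;
    for (int i = 0; i < n; i++) {          /* execute sequentially, with no user interaction */
        clock += queue[i].burst;
        printf("job %-8s finished at t=%d\n", queue[i].name, clock);
    }
    return 0;                              /* output would then be sent to a printer or disk */
}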
Q3: Explain a real-time OS and also write its advantages and
disadvantages.
Ans: A Real-Time Operating System (RTOS) is designed to handle tasks with strict timing
constraints. It ensures that critical operations are executed within a specified time frame, making
it ideal for applications requiring high reliability and predictability, such as embedded systems,
industrial robots, and aerospace systems.
Advantages of RTOS
Deterministic Timing: RTOS provides predictable response times, ensuring that critical tasks are
completed within their deadlines.
High Reliability: It ensures high reliability and stability, making it suitable for mission-critical
applications where failure is not an option.
Efficient Resource Utilization: RTOS maximizes the use of system resources by keeping all devices
and systems active.
Minimal Overhead: With a small footprint and lightweight design, RTOS is efficient and fast,
allowing for superior performance in constrained environments.
Disadvantages of RTOS
Complex Development: Developing applications for RTOS can be more complex and require
specialized knowledge.
Limited Task Handling: RTOS is efficient at managing a small number of scheduled tasks but less
efficient at handling a large number of tasks or extensive multi-tasking.
Resource Intensive: Ensuring deterministic performance may require more processing power and
memory, which can increase costs.
Less Flexibility: RTOS may offer less flexibility compared to general-purpose operating systems,
limiting its use to specific types of applications.
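As an illustration of the deterministic-timing idea above, here is a small C sketch of a fixed-period
task loop using the POSIX clock_nanosleep call; the period and iteration count are made-up values,
and a real RTOS scheduler provides far stronger guarantees than this ordinary user-space loop:

#include <stdio.h>
#include <time.h>

#define PERIOD_NS 100000000L                       /* 100 ms period (illustrative value) */

static void do_work(void)
{
    /* placeholder for the time-critical work of the task */
}

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);         /* first release time = now */

    for (int i = 0; i < 10; i++) {
        printf("release %d\n", i);
        do_work();

        /* advance the absolute deadline by one period and sleep until it,
           so timing error does not accumulate across iterations */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}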
Virtual Environment: A VM operates like a standalone computer with its own operating system
(OS) and applications, running independently from the host system and other VMs.
Isolation: Each VM is isolated from the host system and other VMs, providing a sandbox
environment where applications can run without affecting the host or other VMs.
Hypervisor Management: A hypervisor, or virtual machine monitor (VMM), manages the creation
and execution of VMs. It allocates resources from the host system to each VM, ensuring they
operate efficiently.
Portability: VMs are portable and can be moved or copied between different physical machines,
making them useful for tasks like testing, development, and disaster recovery.
Q5: Discuss the working of a system call with the help of a diagram.
Ans: A system call is a way for programs to interact with the operating system (OS). It provides
the interface between a process and the OS kernel, enabling user-level processes to request
services from the OS.
Request Initiation: A user application initiates a system call by executing a specific instruction or
function call that switches the execution context from user mode to kernel mode.
Switch to Kernel Mode: The system call instruction causes a software interrupt or a trap, switching
the CPU from user mode to kernel mode. This is necessary because certain operations, such as
accessing hardware devices or memory management, require elevated privileges.
Execute System Call: The OS kernel identifies the requested service from the system call number
(or ID) and executes the corresponding system call handler. The handler performs the required
operations, such as reading from a file, creating a process, or allocating memory.
Return to User Mode: After completing the requested operation, the kernel returns control to the
user application. The results of the system call (such as return values or error codes) are passed
back to the user process.
Continuation of Execution: The user application continues its execution with the results provided
by the system call.
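For example, the following C sketch (assuming a POSIX/Linux system) issues write, open, read, and
close system calls; each call traps into the kernel, the kernel-mode handler does the work, and the
result is returned to the user process:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* write() traps into the kernel, which performs the I/O in kernel mode
       and returns the byte count (or -1 and an error code) to user mode */
    const char msg[] = "hello from user mode\n";
    if (write(STDOUT_FILENO, msg, sizeof msg - 1) < 0) {
        perror("write");
        return 1;
    }

    /* open()/read()/close() follow the same request -> trap -> handler -> return path;
       /etc/hostname is just an example file and may not exist on every system */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd >= 0) {
        char buf[64];
        ssize_t r = read(fd, buf, sizeof buf);
        if (r > 0)
            write(STDOUT_FILENO, buf, (size_t)r);
        close(fd);
    }
    return 0;
}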
Uniprogramming
Uniprogramming refers to a system where only one program is loaded into the main memory
and executed by the CPU at any given time. The system waits for the currently running program
to finish before starting the next one. This approach was common in early computing systems.
Disadvantages: Inefficient CPU utilization since the CPU remains idle during I/O operations.
Multiprogramming
Multiprogramming allows multiple programs to reside in the main memory simultaneously and
share the CPU. The OS manages the scheduling of these programs, switching the CPU among
them to improve utilization. While one program waits for I/O operations, the CPU can execute
another program.
Disadvantages: More complex OS design, potential issues with synchronization and resource
management.
Parallel Programming
Parallel programming involves dividing a program into multiple parts that can be executed
simultaneously on multiple processors or cores. This approach can significantly speed up
processing times for tasks that can be parallelized.
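For example, the following C sketch uses POSIX threads to sum an array in parallel; each thread
processes one slice of the data, and the thread count and array size are arbitrary (compile with
-pthread):

#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static int data[N];

struct slice { int start, end; long long sum; };

static void *partial_sum(void *arg)
{
    struct slice *s = arg;                     /* each thread sums its own slice */
    for (int i = s->start; i < s->end; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1;

    pthread_t tid[NTHREADS];
    struct slice part[NTHREADS];
    int chunk = N / NTHREADS;

    /* start the threads; they run concurrently on the available cores */
    for (int t = 0; t < NTHREADS; t++) {
        part[t] = (struct slice){ t * chunk,
                                  (t == NTHREADS - 1) ? N : (t + 1) * chunk, 0 };
        pthread_create(&tid[t], NULL, partial_sum, &part[t]);
    }

    long long total = 0;
    for (int t = 0; t < NTHREADS; t++) {       /* wait for every thread, then combine results */
        pthread_join(tid[t], NULL);
        total += part[t].sum;
    }
    printf("total = %lld\n", total);
    return 0;
}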
Batch Operating System: Jobs are processed in batches and output is obtained after the completion
of the batch.
Multiprogramming Operating System: Multiple programs reside in memory simultaneously, sharing the
CPU to increase utilization and throughput.
Multiprocessing Operating System: Utilizes multiple CPUs to execute multiple processes concurrently,
improving performance and reliability.
Real-Time Operating System: Provides immediate processing and response to input. It is used in
systems requiring stringent timing constraints, such as embedded systems.
Distributed Operating System: Manages a group of independent computers and makes them appear to be
a single coherent system.
Monolithic Structure:
A single large kernel that handles all OS functionalities, providing rich and comprehensive
services directly.
Example: UNIX.
Layered Structure:
Divides the OS into layers, each built on top of lower layers, simplifying debugging and system
verification.
Micro-Kernel Structure:
Minimizes the kernel by running most services in user space, improving modularity and
reliability.
Modules:
Uses loadable kernel modules to extend functionality without rebooting, offering flexibility and
efficiency (a minimal module sketch follows this list).
Example: Linux.
Hybrid Structure:
Combines monolithic and micro-kernel approaches, keeping performance-critical services inside the
kernel while running other services in user space.
Example: Windows.
Exo-Kernel:
Keeps the kernel minimal by exposing hardware resources directly to applications, which manage
them through library operating systems.
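As mentioned in the Modules item above, here is a minimal sketch of a loadable Linux kernel module;
it only logs a message when loaded and unloaded, and it cannot be built as an ordinary program (it
needs the kernel headers and a kbuild Makefile, which are not shown):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello_module: loaded\n");    /* runs when the module is inserted */
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello_module: unloaded\n");  /* runs when the module is removed */
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");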
Resource Management:
Manages system resources such as CPU, memory, and I/O devices, ensuring efficient and fair
distribution among all running processes.
Process Management:
Handles the creation, scheduling, and termination of processes, deciding which process should
be executed by the CPU at any given time (see the fork()/waitpid() sketch after this list).
Memory Management:
Allocates and deallocates memory space as needed by different processes, ensuring optimal
memory usage and system stability.
Device Management:
Acts as an intermediary between hardware devices and application software, managing I/O
operations and ensuring smooth communication with peripherals.
System Calls:
Provides an interface for user applications to request services from the operating system, such
as file operations, network communication, and inter-process communication.
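As mentioned in the Process Management item above, here is a small C sketch of process creation on
a POSIX system using fork() and waitpid():

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* ask the kernel to create a child process */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* child: runs its own code (or calls exec), then terminates */
        printf("child %d running\n", (int)getpid());
        _exit(0);
    }
    /* parent: the kernel schedules both processes; waitpid() reaps the child when it terminates */
    int status;
    waitpid(pid, &status, 0);
    printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
    return 0;
}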
System Initialization:
During system startup, the kernel is loaded into memory and begins initializing the hardware
components and system resources.
Scheduling:
The kernel employs various scheduling algorithms to determine the order in which processes
access the CPU, balancing performance and fairness.
Interrupt Handling:
Responds to hardware and software interrupts by pausing the current task, running the appropriate
interrupt service routine, and then resuming normal execution.
Context Switching:
Saves the state of a currently running process and loads the state of the next process to be
executed, enabling multitasking and efficient CPU usage.
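The scheduling and context-switching ideas above can be illustrated with a toy round-robin
simulation in C; the process names, burst times, and time quantum are made up for illustration:

#include <stdio.h>

/* a tiny stand-in for a process control block: name and remaining CPU time */
struct pcb { const char *name; int remaining; };

int main(void)
{
    struct pcb procs[] = { {"P1", 5}, {"P2", 3}, {"P3", 4} };
    int n = sizeof procs / sizeof procs[0];
    int quantum = 2, left = n, clock = 0;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (procs[i].remaining <= 0)
                continue;
            /* "context switch": the scheduler picks the next ready process and runs it
               for at most one time quantum before moving on */
            int run = procs[i].remaining < quantum ? procs[i].remaining : quantum;
            procs[i].remaining -= run;
            clock += run;
            printf("t=%2d  ran %s for %d unit(s), %d left\n",
                   clock, procs[i].name, run, procs[i].remaining);
            if (procs[i].remaining == 0)
                left--;
        }
    }
    return 0;
}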