Operating System Lab Reports


Table of Contents

❖ Process and Threads


➢ Process Handling
➢ Threads Handling

❖ Process Scheduling:
➢ First-Come-First-Serve (FCFS)
➢ Priority Scheduling
➢ Round Robin (RR)

❖ INTER PROCESS COMMUNICATION:

➢ The Dining Philosopher
➢ Producer-Consumer Problem
➢ Sleeping Barber Problem

❖ Deadlocks Avoidance
➢ Banker’s Algorithm

❖ Page Replacement Algorithms
➢ FIFO Page Replacement
➢ Optimal Page Replacement
➢ LRU Page Replacement
Lab 1
Process And Threads

Objective
The objective of this lab is to gain a comprehensive understanding of
processes and threads in the context of computer programming.
Through hands-on experimentation, we aim to explore the concepts of
processes and threads, their characteristics, and their practical
implications in concurrent programming.

Task 1: Processes
Introduction:
A process is an independent, self-contained unit of
execution with its own memory space, resources, and execution
environment. Processes are managed by the operating system, providing
isolation and security.
CreateProcess() is the Windows API function used to create processes; on POSIX systems, fork() serves the same purpose.
Program

Output
Threads

Introduction
Threads, on the other hand, are lightweight units of execution within a
process. Unlike processes, threads share the same memory space,
allowing them to communicate more efficiently. Threads are useful for
parallelizing tasks and enhancing program performance.
pthread_create() is the function used to create threads.

Program
Output

Conclusion:
In conclusion, this lab provided a hands-on exploration of processes and
threads, essential components of concurrent programming. Through
practical implementation, we observed the creation and execution of
processes and threads, understanding their roles in improving program
performance and responsiveness. The ability to utilize concurrent
programming paradigms is crucial for developing efficient and scalable
software systems.
Lab 2

PROCESS SCHEDULING ALGORITHMS

Objective
The objective of this lab experiment is to explore and understand various
process scheduling algorithms used in operating systems. The focus will
be on gaining practical insights into how these algorithms affect system
performance and responsiveness. The experiment aims to implement
and analyze three commonly used scheduling algorithms: First-Come-First-Serve
(FCFS), Priority Scheduling, and Round Robin (RR).
Task 1
First-Come-First-Serve (FCFS)

Introduction
FCFS is a simple scheduling algorithm that selects processes in the order
they arrive in the ready queue. It is non-preemptive, meaning once a
process starts execution, it continues until completion. FCFS suffers from
the "convoy effect," where shorter processes may be delayed by longer
ones.
Program

Output
Task 2
Priority Scheduling

Introduction
Priority Scheduling is a non-preemptive scheduling algorithm where
each process is assigned a priority, and the process with the highest
priority is selected for execution first. This algorithm allows processes
with higher priority to be executed before those with lower priority,
potentially improving system responsiveness and efficiency.

Program
Output
Task 3
Round Robin (RR)

Introduction
Round Robin is a preemptive scheduling algorithm where each process
is assigned a fixed time slot or quantum. Processes are executed in a
circular manner, with each getting a turn to run for the allotted time. If
a process's burst time exceeds the quantum, it is moved to the back of
the queue.

Program
Output

Conclusion
The lab experiment provided valuable insights into the performance
characteristics of different process scheduling algorithms. FCFS,
although simple, may suffer from the convoy effect. Priority Scheduling
improves responsiveness for important processes, but low-priority
processes can starve unless a remedy such as aging is applied. RR, with
its fixed time slices, ensures fairness but may lead to higher turnaround
times. The choice of scheduling algorithm depends on the specific
requirements and characteristics of the system.
Lab 3
INTER PROCESS COMMUNICATION

Objective
The objective of this lab experiment is to study and implement solutions
to three classic concurrency problems: the Dining Philosopher problem,
the Producer-Consumer problem, and the Sleeping Barber problem.
These problems provide insight into synchronization and communication
challenges in concurrent programming, offering practical experience in
designing solutions to prevent issues like deadlock, starvation, and
resource contention.

Task 1
The Dining Philosopher

Introduction
The Dining Philosopher problem is a classic synchronization problem
where a group of philosophers sits around a dining table with a bowl of
spaghetti in front of each. To eat, a philosopher needs two adjacent
forks. The challenge is to design a solution that prevents deadlocks and
starvation, ensuring each philosopher can eat without conflicts.
Program

Output
Task 2
Producer-Consumer Problem

Introduction
The Producer-Consumer problem involves two types of processes:
producers that produce items and place them in a shared buffer, and
consumers that take items from the buffer and consume them. The
challenge is to synchronize access to the shared buffer, preventing
issues like buffer overflow or underflow and ensuring that producers and
consumers don’t interfere with each other.

Program
Output
Task 3
Sleeping Barber Problem
Introduction
The Sleeping Barber problem simulates a barbershop with one barber
and multiple customers. The barber alternates between sleeping and
cutting hair. Customers arrive, check for an available seat, and either
wait or leave if there are no seats. The challenge is to manage the access
to available seats and ensure proper synchronization between the
barber and customers.

Program
Output

Conclusion
In conclusion, this lab experiment provided valuable insights into solving
classic concurrency problems. The implementation and analysis of
solutions for the Dining Philosopher, Producer-Consumer, and Sleeping
Barber problems helped understand the challenges of synchronization,
resource sharing, and communication in concurrent programming.
Lab 4
Deadlocks Avoidance
Objective
The objective of Deadlocks Avoidance in operating systems is to prevent
situations where multiple processes are unable to proceed because each
is waiting for a resource held by another, resulting in a deadlock. By
implementing strategies such as resource allocation algorithms,
deadlock detection, and avoidance techniques, the goal is to ensure
system-wide progress and resource utilization without the risk of
deadlock formation.
Task 1:
Banker’s Algorithm
Introduction
The Banker's Algorithm is a resource allocation and deadlock avoidance
technique used in operating systems. It aims to allocate resources to
processes in a way that avoids deadlock while ensuring the safety and
progress of the system. The Banker's Algorithm ensures that resources
are allocated in a manner that prevents deadlock by considering the
maximum resource needs of processes and only granting resource
requests that maintain system safety. It's akin to a banker carefully
managing loans to customers in a way that avoids insolvency and ensures
the stability of the financial system.
Program
Output
Lab 5
Page Replacement Algorithms

Task-1
FIFO Page Replacement Algorithm
Introduction
The FIFO page replacement algorithm works on the principle of evicting
the oldest page in memory. When a page needs to be replaced due to a
page fault, the page that has been in memory the longest (i.e., the first
page brought into memory) is chosen for eviction.

Code
Output

Conclusion
The FIFO page replacement algorithm is a simple and straightforward
approach to managing memory in computer systems. While it performs
adequately under certain conditions, its performance may degrade in the face
of unpredictable or random memory access patterns.
Task-2
Optimal Page Replacement Algorithm
Introduction
In virtual memory management, when a program references a page that is not currently in physical
memory, a page fault occurs. The page replacement algorithm is responsible for selecting which
page to evict from memory to make room for the incoming page. The Optimal algorithm
theoretically evicts the page that will not be referenced for the longest time in the future. Although
optimal in minimizing page faults, this algorithm is impractical due to its requirement for future
knowledge of memory accesses.

Code
Output

Conclusion
The Optimal Page Replacement Algorithm, while providing the best possible
performance in terms of minimizing page faults, is not practical for real-world
implementations due to its requirement for future knowledge of memory
accesses. However, it serves as a theoretical benchmark for evaluating the
performance of other page replacement algorithms.
Task-3
LRU Page Replacement Algorithm

Introduction
In virtual memory management, when a program references a page that is not currently in physical
memory, a page fault occurs. The page replacement algorithm is responsible for selecting which
page to evict from memory to make room for the incoming page. The LRU algorithm selects the
page that has not been accessed for the longest time, making it a practical and efficient approach to
minimize page faults.

CODE
OUTPUT

Conclusion
In conclusion, the LRU page replacement algorithm approximates the Optimal
algorithm by using the recent past as a predictor of the near future. Because
it evicts the page that has gone unused the longest, it performs well for
programs that exhibit locality of reference, making it a practical and
efficient choice for real systems, at the cost of the bookkeeping required to
track the recency of each page.
