OS Lab Report
TRIBHUVAN UNIVERSITY
Institute of Management Studies
LAB REPORT
-----------------------------------
Supervisor:

Implementation of FIFO (First-In-First-Out) Page Replacement in Memory Management
Objective:
To implement the FIFO (First-In-First-Out) page replacement algorithm in C and analyze its
effectiveness in managing memory in an operating system.
Related Theory:
The FIFO page replacement algorithm replaces the oldest page in memory first. Pages are
loaded into memory in the order they are requested. When a page fault occurs, the page
that has been in memory the longest (i.e., the first one to enter) is removed.
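For example, with 3 frames and the (hypothetical) reference string 1, 2, 3, 1, 4, the first three references fault and fill the frames, the fourth reference (1) is a hit, and the fifth (4) faults and replaces page 1 because it entered memory first, for a total of 4 page faults.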
Advantages:
FIFO is straightforward and easy to implement, requiring only a queue to keep track
of the pages.
The behavior of the algorithm is predictable since it follows a strict order.
Disadvantages:
FIFO can suffer from Belady's anomaly, in which increasing the number of page frames leads
to more page faults, contrary to expectations.
FIFO does not consider the frequency of use or the importance of pages, which can
lead to inefficient memory management.
Code in C:
#include<stdio.h>
int main() {
int i, j, n, a[50], frame[10], no, k, avail, count = 0;
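/* The remainder of this listing is a minimal sketch that completes the
declarations above; the prompts and the output format are assumptions. */
printf("Enter the number of pages: ");
scanf("%d", &n);
printf("Enter the reference string: ");
for(i = 0; i < n; i++)
scanf("%d", &a[i]);
printf("Enter the number of frames: ");
scanf("%d", &no);
for(i = 0; i < no; i++)
frame[i] = -1; // all frames start empty
k = 0; // index of the oldest frame (the FIFO pointer)
for(i = 0; i < n; i++) {
avail = 0;
for(j = 0; j < no; j++) // is the page already in memory?
if(frame[j] == a[i])
avail = 1;
if(avail == 0) { // page fault: replace the oldest page
frame[k] = a[i];
k = (k + 1) % no;
count++;
for(j = 0; j < no; j++) // print the current frame contents
printf("%d\t", frame[j]);
printf("\n");
}
}
printf("Page Faults = %d\n", count);
return 0;
}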
Output:
Discussion:
The FIFO page replacement algorithm was successfully implemented, and its performance
was evaluated using a sample reference string. The algorithm replaces the oldest page in
memory when a page fault occurs, which is simple but can lead to inefficiencies.
The total number of page faults in the above output is 5. This is influenced by the number of
frames available and the order of page references.
Although not observed in this example, it is important to note that FIFO can sometimes
exhibit Belady's anomaly, where adding more page frames increases the number of page faults.
Conclusion:
The FIFO page replacement algorithm is one of the simplest methods for managing memory
in an operating system. It operates on the principle of replacing the oldest page in memory
when a page fault occurs. While easy to implement, FIFO can be less suitable for
environments where memory efficiency is critical. Nonetheless, FIFO serves as a
fundamental concept in understanding more advanced page replacement algorithms.
Implementation of LRU (Least Recently
Used) Page Replacement in Memory
Management
Objective:
To implement the Least Recently Used (LRU) page replacement algorithm in C and analyze its
efficiency in managing memory, particularly in reducing page faults in an operating system.
Related Theory:
The LRU algorithm replaces the page that has not been used for the longest period. It
assumes that pages used recently will likely be used again soon, while those not used for a
while are less likely to be needed.
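For example, if the frames hold pages 1, 2 and 3 and the references so far have been 1, 2, 3, 1, then on the next page fault LRU evicts page 2 (the least recently referenced page), whereas FIFO would evict page 1 (the page that was loaded first).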
Advantages:
1. LRU generally results in fewer page faults compared to FIFO (First-In-First-Out).
2. LRU is closer to optimal page replacement in practical scenarios.
Disadvantages:
1. Tracking the least recently used pages requires additional data structures, which can
increase overhead.
2. Efficient implementation of LRU may require hardware support, like counters or
stacks, to keep track of usage history.
Code in C:
#include <stdio.h>
int main() {
int no_of_frames, no_of_pages, frames[10], pages[30];
int counter = 0, time[10], flag1, flag2, i, j, pos, faults = 0;
// Read the number of frames, the number of pages and the reference string (input section is a sketch)
printf("Enter the number of frames: ");
scanf("%d", &no_of_frames);
printf("Enter the number of pages: ");
scanf("%d", &no_of_pages);
printf("Enter the reference string: ");
for(i = 0; i < no_of_pages; ++i) {
scanf("%d", &pages[i]);
}
// Mark all frames as empty
for(i = 0; i < no_of_frames; ++i) {
frames[i] = -1;
}
// LRU Page Replacement Algorithm
for(i = 0; i < no_of_pages; ++i) {
flag1 = flag2 = 0;
// Check if page is already in a frame
for(j = 0; j < no_of_frames; ++j) {
if(frames[j] == pages[i]) {
counter++;
time[j] = counter;
flag1 = flag2 = 1;
break;
}
}
// If page is not in a frame, look for an empty frame
if(flag1 == 0) {
for(j = 0; j < no_of_frames; ++j) {
if(frames[j] == -1) {
counter++;
faults++;
frames[j] = pages[i];
time[j] = counter;
flag2 = 1;
break;
}
}
}
// If no empty frame, replace the least recently used page
if(flag2 == 0) {
pos = 0;
for(j = 1; j < no_of_frames; ++j) {
if(time[j] < time[pos]) {
pos = j; // Find the LRU page
}
}
counter++;
faults++;
frames[pos] = pages[i]; // Replace the LRU page
time[pos] = counter;
}
printf("\n");
for(j = 0; j < no_of_frames; ++j) {
if(frames[j] == -1) {
printf("-\t");
} else {
printf("%d\t", frames[j]);
}
}
}
printf("\n\nTotal Page Faults = %d\n", faults);
return 0;
}
Output:
Discussion:
The LRU page replacement algorithm effectively reduces the number of page faults by
replacing the least recently used pages in memory. This strategy is based on the assumption
that pages used recently are more likely to be used again soon.
The total number of page faults in this example is 9, which is generally lower than what FIFO
would produce for the same reference string.
LRU usually performs better than FIFO because it takes the recency of page use into account.
Its efficiency still depends on the workload and the number of available frames, and although
LRU is more efficient than FIFO, it is not always optimal, particularly under irregular access
patterns.
Conclusion:
The LRU page replacement algorithm is a practical and efficient approach to memory
management, particularly in reducing page faults in systems with predictable access
patterns. While LRU requires more complex implementation compared to FIFO, it offers
significant performance improvements by considering the recency of page usage.
Implementation of Priority Scheduling in C
Objective:
To implement the Priority Scheduling algorithm in C and analyze how processes are executed
based on their priority levels. The aim is to understand how Priority Scheduling works and
evaluate its performance in terms of waiting time and turnaround time.
Related Theory:
Priority scheduling is a CPU scheduling algorithm where processes are assigned priorities,
and the process with the highest priority is selected for execution. In priority scheduling, the
scheduler chooses the highest-priority process from the ready queue and
allocates the CPU to that process. The priority of a process is usually determined by its
importance, the amount of CPU time it needs, or its deadline.
Priority scheduling can be divided into two types: preemptive and non-preemptive. In
preemptive priority scheduling, the currently running process may be interrupted by a
higher priority process, while in non-preemptive priority scheduling, once a process is
assigned the CPU, it will continue to execute until it finishes or is blocked.
Advantages:
Allows important tasks to be completed first.
Flexibility in scheduling based on process importance.
Disadvantages:
Can lead to starvation if lower-priority processes are continually delayed by higher-
priority processes.
Requires careful assignment of priorities.
Code in C:
#include <stdio.h>
#define MAX 5
int main() {
int i, j, n, temp;
int p[MAX], bt[MAX], pr[MAX], wt[MAX], tat[MAX];
int total_wt = 0, total_tat = 0;
float avg_wt = 0, avg_tat = 0;
// Input number of processes
printf("Enter the number of processes: ");
scanf("%d", &n);
for (i = 0; i < n; i++) {
printf("Enter process ID, burst time, and priority for process %d: ", i + 1);
scanf("%d %d %d", &p[i], &bt[i], &pr[i]);
}
// Sort processes based on priority using bubble sort
for (i = 0; i < n - 1; i++) {
for (j = 0; j < n - i - 1; j++) {
if (pr[j] > pr[j + 1]) {
temp = pr[j];
pr[j] = pr[j + 1];
pr[j + 1] = temp;
temp = bt[j];
bt[j] = bt[j + 1];
bt[j + 1] = temp;
temp = p[j];
p[j] = p[j + 1];
p[j + 1] = temp;
}
}
}
// Calculate waiting time and turnaround time for each process
printf("Process ID\tBurst Time\tPriority\tWaiting Time\tTurnaround Time\n");
for (i = 0; i < n; i++) {
wt[i] = 0;
tat[i] = 0;
for (j = 0; j < i; j++) {
wt[i] += bt[j];
}
tat[i] = wt[i] + bt[i];
total_wt += wt[i];
total_tat += tat[i];
printf("%d\t\t%d\t\t%d\t\t%d\t\t%d\n", p[i], bt[i], pr[i], wt[i], tat[i]);
}
// Compute and print the average waiting and turnaround times (output format is a sketch)
avg_wt = (float)total_wt / n;
avg_tat = (float)total_tat / n;
printf("\nAverage Waiting Time = %.2f", avg_wt);
printf("\nAverage Turnaround Time = %.2f\n", avg_tat);
return 0;
}
Output:
Discussion:
In this implementation, the processes are scheduled based on their priority, with the lowest
priority value indicating the highest priority. For example, Process 1, with the highest
priority (priority 1), is executed first, followed by Process 3 (priority 2).
The average waiting time is calculated to be 5.00 units, while the average turnaround time is
8.67 units.
Processes with higher priorities are executed first, reducing their waiting and turnaround
times. However, lower-priority processes may experience longer waiting times, particularly
if many high-priority processes arrive.
This highlights a potential issue with Priority Scheduling: starvation. If many high-priority
processes arrive continuously, lower-priority processes may never get a chance to execute.
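A common remedy for this starvation problem, mentioned again in the conclusion, is aging: the longer a process waits, the better its effective priority becomes, so it can never be postponed indefinitely. The sketch below only illustrates the idea; the aging interval, the base priorities used in the demonstration, and the function name effective_priority are illustrative assumptions (as in the program above, a smaller value means a higher priority).

#include <stdio.h>

#define AGING_INTERVAL 5 /* every 5 time units of waiting, priority improves by 1 (assumed value) */

/* Effective priority of a process after it has waited 'waited' time units;
   a smaller value means a higher priority. */
int effective_priority(int base_priority, int waited) {
    int boost = waited / AGING_INTERVAL;
    int p = base_priority - boost;
    return (p < 1) ? 1 : p; /* never better than the best possible priority */
}

int main(void) {
    /* A low-priority process (priority 7) gradually catches up with a
       high-priority one (priority 2) the longer it waits. */
    int waited;
    for (waited = 0; waited <= 30; waited += 5)
        printf("waited %2d units -> effective priority %d\n",
               waited, effective_priority(7, waited));
    return 0;
}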
Conclusion:
Priority Scheduling is an effective algorithm for scenarios where certain tasks must be
prioritized. It offers flexibility in handling processes with different levels of importance.
However, the risk of starvation for lower-priority processes must be managed, potentially
through techniques like aging. Overall, Priority Scheduling provides a balance between
efficiency and fairness when properly implemented.
Implementation of Round Robin Scheduling
in C
Objective:
To implement the Round Robin Scheduling algorithm in C and analyze how processes are
executed based on the quantum time. The aim is to understand how Round Robin
Scheduling works and evaluate its performance in terms of waiting time and turnaround
time.
Related Theory:
Round Robin Scheduling is a preemptive CPU scheduling algorithm that assigns a fixed time
unit called a time quantum to each process in a cyclic order. If a process does not finish
within its allocated time quantum, it is moved to the end of the queue, and the next process
starts executing.
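For example (with hypothetical burst times), suppose the time quantum is 2 units and three processes P1, P2 and P3 arrive at time 0 with burst times of 4, 3 and 2 units. The CPU is allocated as P1 (0-2), P2 (2-4), P3 (4-6), P1 (6-8), P2 (8-9), so P3 completes at time 6, P1 at 8 and P2 at 9, giving turnaround times of 8, 9 and 6 units and waiting times of 4, 6 and 4 units respectively.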
Advantages:
Ensures all processes are treated equally.
Suitable for interactive systems where response time is critical.
Disadvantages:
Context switching between processes can be excessive.
If the time quantum is too large, Round Robin behaves like FCFS, and if it is too
small, there is too much context switching.
Code in C:
#include <stdio.h>
int main() {
int n,i,qt,count=0,temp,sq=0,bt[10],wt[10],tat[10],rem_bt[10];
float awt=0,atat=0;
printf("Enter the no of processes: ");
scanf("%d",&n);
printf("\nEnter the burst time of process:\n");
for(i=0;i<n;i++){
scanf("%d",&bt[i]);
rem_bt[i]=bt[i];
}
printf("Enter the quantam time: ");
scanf("%d",&qt);
while(1){
for(i=0,count=0;i<n;i++){
temp=qt;
if(rem_bt[i]==0){
count++;
continue;
}
if(rem_bt[i]>qt){
rem_bt[i]=rem_bt[i]-qt;
}
else if(rem_bt[i]>=0){
temp=rem_bt[i];
rem_bt[i]=0;
}
sq=sq+temp;
tat[i]=sq;
}
if(n==count){
break;
}
}
// Waiting time = turnaround time - burst time; accumulate totals for the averages
for(i=0;i<n;i++){
wt[i]=tat[i]-bt[i];
awt=awt+wt[i];
atat=atat+tat[i];
}
awt=awt/n;
atat=atat/n;
printf("\n\nAverage waiting time: %.2f\n",awt);
printf("Average turnaround time: %.2f",atat);
return 0;
}
Output:
Discussion:
In this implementation of Round Robin scheduling, each process gets a time quantum of 2
units. The CPU cycles through the processes, allowing each one to execute for 2 units of
time unless it completes earlier. If a process does not finish within its time quantum, it is
moved to the end of the queue, and the next process is scheduled.
The algorithm fairly allocates CPU time among all processes, which is ideal for time-sharing
environments.
The average waiting time in the above output is: 5.00 units
The average turnaround time in the above output is: 8.00 units
Conclusion:
Round Robin Scheduling is a widely-used algorithm in time-sharing systems due to its
fairness and simplicity. This implementation highlights its key advantages, such as equal
process handling and responsiveness. However, the time quantum selection plays a
significant role in determining the algorithm's performance. Properly tuning the time
quantum can balance between fairness and efficiency, making Round Robin an effective
scheduling strategy for interactive systems.
Choosing an optimal time quantum is crucial. A time quantum that is too small will lead to
excessive context switching, while a time quantum that is too large can make the Round
Robin algorithm behave like the First-Come, First-Served (FCFS) algorithm.
Implementation of C-LOOK (Circular LOOK)
Disk Arm Scheduling
Objective:
To implement and analyze the performance of the C-LOOK (Circular LOOK) Disk Arm
Scheduling algorithm in C.
Related Theory:
C-LOOK is a variant of the LOOK algorithm. Unlike the standard LOOK algorithm, which
reverses direction upon reaching the last request in one direction, C-LOOK only moves in
one direction and then jumps back to the first request without servicing any in between. It
treats the disk as a circular list of requests but avoids the unnecessary traversal to the very
end of the disk, focusing only on the actual requests.
Advantages:
1. Like LOOK, C-LOOK avoids unnecessary movement to the edges of the disk, further
optimizing seek time.
2. By servicing requests in a circular fashion, C-LOOK ensures that no requests are left
behind, reducing the risk of starvation.
Disadvantages:
1. The algorithm can be slightly more complex to implement compared to simpler
algorithms like FCFS.
Code in C:
#include <stdio.h>
#include <stdlib.h>
#define LOW 0
#define HIGH 199
int main(){
int queue[20], head, q_size, i,j, seek=0, diff, max, min, range, temp, queue1[20], queue2[20],
temp1=0, temp2=0;
float avg;
temp1++;
} else {
queue2[temp2] = temp;
temp2++;
}
}
for(i=0; i<temp1-1; i++){ //sort queue1 - increasing order
for(j=i+1; j<temp1; j++){
if(queue1[i] > queue1[j]){
temp = queue1[i];
queue1[i] = queue1[j];
queue1[j] = temp;
}
}
}
for(i=0; i<temp2-1; i++){ //sort queue2
for(j=i+1; j<temp2; j++){
if(queue2[i] > queue2[j]){
temp = queue2[i];
queue2[i] = queue2[j];
queue2[j] = temp;
}
}
}
if(abs(head-LOW) <= abs(head-HIGH)){
}
range = max - min;
printf("\n\nRange is %d\n", range);
printf("Total seek time is %d\n", seek); //seek = seek - (max - min);
avg = seek/(float)q_size;
printf("Average seek time is %f\n", avg);
return 0;
}
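Since the listing above is reproduced only in part, a complete minimal sketch of the same algorithm is given below. It is not the original program: the prompts, the helper arrays up and down, and the decision to count the jump back from the highest to the lowest request as head movement are assumptions.

#include <stdio.h>
#include <stdlib.h>
int main() {
    int queue[20], up[20], down[20];
    int q_size, head, i, j, temp, nu = 0, nd = 0, seek = 0;
    float avg;
    printf("Enter the queue size: ");
    scanf("%d", &q_size);
    printf("Enter the request queue: ");
    for (i = 0; i < q_size; i++)
        scanf("%d", &queue[i]);
    printf("Enter the initial head position: ");
    scanf("%d", &head);
    /* Split the requests into those at or above the head and those below it */
    for (i = 0; i < q_size; i++) {
        if (queue[i] >= head)
            up[nu++] = queue[i];
        else
            down[nd++] = queue[i];
    }
    /* Sort both groups in increasing cylinder order */
    for (i = 0; i < nu; i++)
        for (j = i + 1; j < nu; j++)
            if (up[i] > up[j]) { temp = up[i]; up[i] = up[j]; up[j] = temp; }
    for (i = 0; i < nd; i++)
        for (j = i + 1; j < nd; j++)
            if (down[i] > down[j]) { temp = down[i]; down[i] = down[j]; down[j] = temp; }
    /* Service the upward requests first, then jump back to the lowest pending
       request and continue upward again; the jump is counted as head movement. */
    printf("Service order: %d", head);
    for (i = 0; i < nu; i++) {
        seek += abs(up[i] - head);
        head = up[i];
        printf(" -> %d", head);
    }
    for (i = 0; i < nd; i++) {
        seek += abs(down[i] - head);
        head = down[i];
        printf(" -> %d", head);
    }
    avg = seek / (float)q_size;
    printf("\nTotal seek time: %d\n", seek);
    printf("Average seek time: %.2f\n", avg);
    return 0;
}

Keeping two sorted groups makes the circular behaviour explicit: the arm sweeps upward through up[], jumps back, and then sweeps upward again through down[].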
Output:
Discussion:
The C-LOOK Disk Scheduling algorithm optimizes the LOOK algorithm by minimizing
unnecessary disk movement. By treating the disk requests as a circular list, C-LOOK ensures
fairness and reduces seek time.
C-LOOK is efficient in terms of reducing seek time by avoiding unnecessary traversal to the
disk's edges.
By treating the disk requests circularly, C-LOOK ensures that all requests are eventually
serviced, reducing the risk of starvation.
Conclusion:
The C-LOOK Disk Scheduling algorithm is a powerful optimization of the LOOK algorithm,
reducing unnecessary disk movement and minimizing seek time. It ensures fairness by
servicing requests in a circular fashion, making it a suitable choice for systems that require
balanced and efficient disk scheduling. Compared to other algorithms like FCFS and SSTF, C-
LOOK provides better performance by optimizing the disk arm's movement and ensuring
that all requests are serviced in a balanced manner.
Implementation of FCFS (First Come First
Served) in Disk arm scheduling
Objective:
To implement and analyze the performance of the First-Come, First-Served (FCFS) Disk Arm
Scheduling algorithm in C.
Related Theory:
In FCFS (First-Come, First-Served) disk scheduling, requests are processed in the order they
arrive. The disk arm moves according to the sequence of requests without considering the
current position or proximity to other requests.
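For example (hypothetical values), if the head starts at cylinder 50 and the requests arrive in the order 82, 170, 43, the head moves |82 - 50| + |170 - 82| + |43 - 170| = 32 + 88 + 127 = 247 cylinders in total.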
Advantages:
1. FCFS is easy to understand and implement.
2. Every request gets a chance to be processed without indefinite delay.
Disadvantages:
1. FCFS can lead to high total seek time, especially if the requests are scattered across
different parts of the disk.
2. It does not optimize the movement of the disk arm, leading to inefficient disk usage.
Code in C:
#include <stdio.h>
#include <stdlib.h>
int main() {
int n,i,head,total_movement = 0;
printf("Enter the number of disk requests: "); // Input the number of disk requests
scanf("%d", &n);
int requests[n];
printf("Enter the disk requests: "); // Input the disk requests
for(i = 0; i < n; i++) {
scanf("%d", &requests[i]);
}
printf("Enter the initial position of the disk head: "); // Input the initial position of the disk head
scanf("%d", &head);
printf("Disk head movement:\n"); // FCFS Disk Scheduling
for(i = 0; i < n; i++) {
printf("%d->%d\n",head,requests[i]);
total_movement += abs(requests[i] - head);
head = requests[i];
}
printf("\n\nTotal head movement = %d", total_movement); // Output the total head
movement
return 0;
}
Output:
Discussion:
The FCFS Disk Scheduling algorithm is simple to implement and ensures that requests are
processed in the order they arrive. However, it makes no attempt to optimize head movement,
so the disk arm may move back and forth across the disk unnecessarily.
FCFS can perform poorly if the request sequence is not ordered favourably, leading to
increased seek time.
While it guarantees fairness and no starvation, it may not be suitable for high-performance
systems where minimizing seek time is critical.
Conclusion:
The FCFS Disk Scheduling algorithm is straightforward and easy to implement, making it a
good starting point for understanding disk scheduling. However, its performance in terms of
seek time is often poor compared to more advanced algorithms like SSTF or SCAN.
In real-world applications where disk performance is critical, FCFS may not be the best
choice due to its high total head movement, especially with scattered requests.
Implementation of LOOK Disk Arm
Scheduling
Objective:
To implement and analyze the performance of the LOOK Disk Arm Scheduling algorithm in C.
Related Theory:
The LOOK algorithm is a variant of the SCAN algorithm. The disk arm "looks" ahead to check
if there are any more requests in the current direction before reversing. Unlike SCAN, it
does not go to the end of the disk; instead, it reverses when there are no more requests in
the current direction.
Advantages:
1. LOOK avoids unnecessary movement to the end of the disk, reducing total seek time.
2. Like SCAN, LOOK ensures that all requests are eventually serviced, reducing the risk
of starvation.
Disadvantages:
1. The back-and-forth movement can be less efficient in some scenarios.
Code in C:
#include <stdio.h>
#include <stdlib.h>
#define LOW 0
#define HIGH 199
int main(){
int queue[20], head, q_size, i,j, seek=0, diff, max, temp, queue1[20], queue2[20], temp1=0,
temp2=0;
float avg;
}
printf("Total seek time is %d\n", seek);
avg = seek/(float)q_size;
printf("Average seek time is %f\n", avg);
return 0;
}
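Because the listing above is fragmentary, a complete minimal sketch of LOOK is given below. It is not the original program: the prompts and the assumption that the head sweeps toward higher cylinders first are illustrative choices.

#include <stdio.h>
#include <stdlib.h>
int main() {
    int queue[20], q_size, head, start, i, j, temp, seek = 0;
    float avg;
    printf("Enter the queue size: ");
    scanf("%d", &q_size);
    printf("Enter the request queue: ");
    for (i = 0; i < q_size; i++)
        scanf("%d", &queue[i]);
    printf("Enter the initial head position: ");
    scanf("%d", &head);
    /* Sort the requests in increasing cylinder order */
    for (i = 0; i < q_size; i++)
        for (j = i + 1; j < q_size; j++)
            if (queue[i] > queue[j]) { temp = queue[i]; queue[i] = queue[j]; queue[j] = temp; }
    /* Find the first request at or above the head position */
    start = 0;
    while (start < q_size && queue[start] < head)
        start++;
    /* Sweep upward over the higher requests, then reverse and sweep downward
       over the remaining ones; the arm never travels past the last request. */
    printf("Service order: %d", head);
    for (i = start; i < q_size; i++) {
        seek += abs(queue[i] - head);
        head = queue[i];
        printf(" -> %d", head);
    }
    for (i = start - 1; i >= 0; i--) {
        seek += abs(queue[i] - head);
        head = queue[i];
        printf(" -> %d", head);
    }
    avg = seek / (float)q_size;
    printf("\nTotal seek time: %d\n", seek);
    printf("Average seek time: %.2f\n", avg);
    return 0;
}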
Output:
Discussion:
The LOOK Disk Scheduling algorithm optimizes the SCAN algorithm by reducing unnecessary
movement to the ends of the disk. It "looks" ahead and reverses direction when no further
requests are in the current direction. This reduces total seek time and improves efficiency.
LOOK reduces the overall seek time by avoiding unnecessary movement to the disk's ends.
Like SCAN, LOOK ensures that all requests are eventually serviced, reducing the risk of
starvation.
Conclusion:
The LOOK Disk Scheduling algorithm is an effective optimization of the SCAN algorithm,
reducing unnecessary movement and minimizing total seek time. It ensures fairness and
prevents starvation, making it a suitable choice for disk scheduling in operating systems.
Compared to other algorithms like FCFS and SSTF, LOOK provides better performance by
optimizing the disk arm's movement and ensuring that all requests are serviced in a
balanced manner.
Implementation of SCAN (Elevator) Disk
Arm Scheduling
Objective:
To implement and analyze the performance of the SCAN (Elevator) Disk Arm Scheduling
algorithm in C.
Related Theory:
The SCAN algorithm moves the disk arm in one direction, servicing all requests in its path,
until it reaches the end of the disk. Then, it reverses direction and services requests on the
way back, like an elevator going up and down.
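For example (hypothetical values), if the head is at cylinder 50 and moves toward higher cylinders first, with pending requests 60, 80 and 20 on a 0-199 disk, SCAN services 60 and 80, continues to cylinder 199, then reverses and services 20, moving (199 - 50) + (199 - 20) = 149 + 179 = 328 cylinders in total.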
Advantages:
1. Unlike SSTF, SCAN ensures that all requests will eventually be serviced.
2. SCAN provides a more balanced performance, especially in systems with a large
number of requests.
Disadvantages:
1. Requests at the ends of the disk can experience longer wait times.
2. The back-and-forth movement can be inefficient in certain scenarios.
Code in C:
#include <stdio.h>
#include <stdlib.h>
#define LOW 0
#define HIGH 199
int main(){
int queue[20];
int head, max, q_size, temp, sum, i, j;
int dloc; // index of the head position within the sorted request queue
// Read the request queue and the initial head position; the head is added to the
// queue so that its index can be located after sorting (input section is a sketch)
printf("Enter the queue size: ");
scanf("%d", &q_size);
printf("Enter the request queue: ");
for(i=0; i<q_size; i++){
scanf("%d", &queue[i]);
}
printf("Enter the initial head position: ");
scanf("%d", &head);
queue[q_size] = head;
q_size++;
for(i=0; i<q_size; i++){ // sort the queue in increasing order
for(j=i+1; j<q_size; j++){
if(queue[i]>queue[j]){
temp = queue[i];
queue[i] = queue[j];
queue[j] = temp;
}
}
}
max = queue[q_size-1];
for(i=0; i<q_size; i++){ //locate head in the queue
if(head == queue[i]){
dloc = i;
break;
}
}
if(abs(head-LOW) <= abs(head-HIGH)){
for(j=dloc; j>=0; j--){
printf("%d --> ",queue[j]);
}
for(j=dloc+1; j<q_size; j++){
printf("%d --> ",queue[j]);
}
}else {
for(j=dloc+1; j<q_size; j++){
printf("%d --> ",queue[j]);
}
for(j=dloc; j>=0; j--){
printf("%d --> ",queue[j]);
}
}
sum = head + max; // total movement assumes a full downward sweep to cylinder 0 followed by an upward sweep to the farthest request
printf("\nTotal cylinder movement: %d", sum);
return 0;
}
Output:
Discussion:
The SCAN Disk Scheduling algorithm provides a balanced approach to servicing disk requests
by scanning in one direction and then reversing. This approach helps reduce the starvation
problem that can occur in SSTF and ensures that all requests are serviced in a timely
manner.
SCAN provides a more balanced performance, reducing the variance in seek time across
requests.
By servicing requests in both directions, SCAN prevents starvation and ensures fairness.
Conclusion:
The SCAN Disk Scheduling algorithm effectively reduces seek time by moving the disk arm in
one direction, servicing requests in its path. It balances performance and fairness, making it
a good choice for systems with a wide range of disk requests. SCAN is more efficient than
FCFS and prevents starvation better than SSTF, making it a preferred choice for many disk
scheduling scenarios. However, in cases where requests are heavily skewed towards one side
of the disk, C-SCAN may offer better performance.
Implementation of SSTF (Shortest Seek
Time First) Disk Arm Scheduling
Objective:
To implement and analyze the performance of the Shortest Seek Time First (SSTF) Disk Arm
Scheduling algorithm in C.
Related Theory:
SSTF Disk Scheduling selects the disk I/O request that is closest to the current position of the
disk arm, minimizing the seek time for each request.
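For example (hypothetical values), with the head at cylinder 50 and pending requests 95, 180, 34 and 119, SSTF services 34 first (16 cylinders away), then 95, 119 and 180, for a total movement of 16 + 61 + 24 + 61 = 162 cylinders, compared with 45 + 85 + 146 + 85 = 361 cylinders if the same requests were served in FCFS order.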
Advantages:
1. SSTF reduces the overall seek time compared to FCFS by prioritizing requests that are
closest to the current head position.
2. By minimizing unnecessary movement, SSTF optimizes disk usage and improves
performance.
Disadvantages:
1. SSTF may lead to starvation of requests that are far from the current disk head
position if closer requests keep arriving.
2. The algorithm is more complex than FCFS, as it requires calculating the shortest seek
time for each request.
Code in C:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main() {
int queue[100], queue2[100], q_size, head, seek=0, temp,i,j;
float avg;
// Read the request queue and the initial head position (input section is a sketch)
printf("Enter the queue size: ");
scanf("%d", &q_size);
printf("Enter the request queue: ");
for(i=0; i<q_size; i++){
scanf("%d", &queue[i]);
}
printf("Enter the initial head position: ");
scanf("%d", &head);
// Compute each request's distance from the current head position
for(i=0; i<q_size; i++){
queue2[i] = abs(head-queue[i]);
}
// Sort the requests (and their distances) in increasing order of distance from the head
for(i=0; i<q_size; i++){
for(j=i+1; j<q_size;j++){
if(queue2[i]>queue2[j]){
temp = queue2[i];
queue2[i]=queue2[j];
queue2[j]=temp;
temp=queue[i];
queue[i]=queue[j];
queue[j]=temp;
}
}
}
for(i=0; i<q_size; i++){ // service every request, starting with the one closest to the head
seek = seek+abs(head-queue[i]);
head = queue[i];
}
printf("\nTotal seek time is %d\t",seek);
avg = seek/(float)q_size;
printf("\nAverage seek time is %f\t", avg);
return 0;
}
Output:
Discussion:
The SSTF Disk Scheduling algorithm optimizes the disk arm movement by selecting the
closest pending request. This reduces the total seek time compared to FCFS, making SSTF
more efficient for disk usage. However, it has some limitations, such as the possibility of
starvation for requests that are far from the current head position.
SSTF significantly reduces the total head movement compared to FCFS, resulting in faster
access times.
If new requests close to the head keep arriving, requests farther away might get delayed
indefinitely.
Conclusion:
The SSTF Disk Scheduling algorithm is an effective method for minimizing the seek time in
disk scheduling by prioritizing the closest pending requests. It is more efficient than FCFS,
particularly in systems with high disk activity. However, care must be taken to handle the
potential issue of starvation. SSTF serves as a good balance between simplicity and
efficiency, making it a commonly used algorithm in disk scheduling scenarios.
Linux Basic Commands Implementation and
Examples
Objective:
The objective of this lab report is to get familiarized with basic Linux commands and
understand their functionalities.
Related Theory:
Linux is a powerful operating system that provides a command-line interface (CLI) for users
to interact with the system. The CLI allows users to execute commands to perform specific
tasks. Linux commands are case-sensitive and can be combined with various options to
enhance their functionalities.
The importance of learning Linux commands lies in the efficiency, flexibility, and control
they provide over the system. Basic Linux commands include operations such as navigating
the file system, managing files and directories, viewing system information, and handling
processes.
4. Command: mkdir (Make Directory)
Description: Creates a new directory.
Terminal Input:
Output:
Discussion:
The pwd, ls, and cd commands are essential for navigation within the Linux file system. File
management commands like mkdir, touch, cat, rm, cp, and mv are crucial for creating,
viewing, deleting, copying, and moving files and directories. The grep command is used for
searching within files, a powerful tool for text processing.
Understanding and mastering these commands is critical for efficient file and system
management in Linux. These basic commands lay the foundation for more advanced
operations, enabling users to automate tasks and manage systems more effectively.
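As an illustration, a short terminal session using the commands named above might look like the following (the directory and file names are hypothetical):
Terminal Input and Output:
$ pwd
/home/student
$ mkdir oslab
$ cd oslab
$ touch notes.txt
$ cp notes.txt backup.txt
$ mv backup.txt report.txt
$ ls
notes.txt  report.txt
$ cat notes.txt
$ grep "fifo" notes.txt
$ rm report.txt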
Conclusion:
The lab on basic Linux commands provided hands-on experience with essential command-
line tools that are fundamental to working within a Linux environment. Through this lab, I
learned how to navigate the file system, manage files and directories, and perform basic text
searches.