OS Lab Manual
1. Introduction
Hardware and Software Requirements (UNIX):
Memory and Storage Requirements: UNIX distributions may have specific memory
and storage requirements depending on the installation options and the intended use
case.
Graphics and Display Requirements: UNIX systems typically support a wide range of
graphics hardware, including integrated and discrete graphics cards.
Network and Connectivity Requirements: UNIX systems support various network
interfaces and protocols, ensuring compatibility with a diverse range of networking
hardware.
Hardware Requirements (LINUX):
Processor Architecture Support: LINUX is renowned for its wide support for different
processor architectures, making it versatile across various hardware platforms.
Memory and Disk Space Requirements: LINUX distributions vary in their memory and
disk space requirements, with modern distributions typically requiring at least 1GB of
RAM and several gigabytes of disk space for installation.
Graphics and Display Support: LINUX provides robust support for graphics hardware,
including both open-source and proprietary graphics drivers for different graphics cards.
Networking Support: LINUX supports a vast array of network hardware and protocols,
making it suitable for a wide range of networking applications.
Hardware and Software Requirements (Windows XP):
Processor Architecture Support: Windows XP primarily supports x86 (32-bit) processor
architectures, with limited support for other architectures in specific editions.
Network and Connectivity Support: Windows XP includes drivers for many common
network adapters and supports various networking protocols for wired and wireless
connectivity.
Hardware Requirements (Windows 7/8):
Processor Architecture Support: Windows 7/8 support both 32-bit and 64-bit
processor architectures, providing compatibility with a wide range of hardware.
Memory and Storage Requirements: Windows 7/8 require more RAM and disk space
compared to Windows XP, with recommended specifications often exceeding 2GB of
RAM and 20-30GB of available disk space.
Graphics and Display Support: Windows 7/8 offer advanced graphics capabilities,
including support for high-definition displays, DirectX graphics technology, and
advanced display drivers.
Experiment No. – 2
OBJECTIVE-Write programs using system calls of the UNIX operating system for:
I. Process management
II. File management
1. Process Management – System Calls -
i. Example program demonstrating fork() -
#include <stdio.h>
#include <unistd.h>
int main()
{
int i, pid;
pid = fork();      /* create a child process: fork() returns 0 in the child and the child's PID in the parent */
if (pid == 0)      /* child process */
{
for (i = 0; i < 20; i++)
{
sleep(2);
printf(" from Child process %d\n", i);
}
}
else
{
for (i = 0; i < 20; i = i + 2)
{
sleep(2);
printf(" from Parent process %d\n", i);
}
}
return 0;
}
Output:-
ii. Example program demonstrating exec() -
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main()
{
int i;
char *p[] = {"./hello", NULL};
int pid;
pid = fork();
if (pid == 0)
{
for (i = 0; i < 10; i++)
{
sleep(2);
printf(" from Child process %d\n", i);
}
}
else
{
for (i = 0; i < 10; i = i + 2)
{
sleep(2);
printf(" from parent process %d\n", i);
}
execv(p[0], p);   /* after its loop, the parent replaces its image with the ./hello program */
}
return 0;
}
Output:-
2. File Management – System Calls -
There are four main system calls for file management:
a. open(): returns a file descriptor for a (possibly newly created) file. Since read() and
write() take a file descriptor as their first parameter, open() must be called first to
obtain it.
b. read(): reads content from a file into a buffer. Passing 0 as the file descriptor reads
input from the keyboard (standard input).
c. write(): writes the contents of a buffer to a file. Passing 1 as the file descriptor
writes to the screen (standard output).
d. close(): closes an opened file descriptor, telling the operating system that the
program is finished with the file.
Program:-
#include <fcntl.h> // for open function and O_CREAT and O_RDWR flags
#include <stdio.h>
#include <unistd.h> // for read and write functions for input from keyboard
int main()
{
    int n, fd;
    char buff[50];
    fd = open("sample.txt", O_CREAT | O_RDWR, 0644);  /* create/open a file and obtain its descriptor ("sample.txt" is an example name) */
    printf("Enter text to store in the file:\n");
    n = read(0, buff, sizeof(buff));                  /* read input from the keyboard (fd 0) */
    write(fd, buff, n);                               /* write the same bytes into the file */
    close(fd);
    return 0;
}
After Executing Program
Program:-
#include <unistd.h> // for read and write functions for input from keyboard
#include <stdio.h>
int main()
{
    char buff[64];
    int n;
    printf("Type some text (press Ctrl+D to stop):\n");
    /* Echo everything typed on the keyboard back to the screen using only read() and write() */
    while ((n = read(0, buff, sizeof(buff))) > 0)   /* fd 0 = keyboard (standard input) */
    {
        write(1, buff, n);                          /* fd 1 = screen (standard output) */
    }
    return 0;
}
Output:-
Experiment No. – 3
OBJECTIVE-Implement CPU Scheduling Policies:
I. SJF
II. Priority
III. FCFS
1. SJF
For the SJF (Shortest Job First) scheduling algorithm, read the number of processes/jobs
in the system and their CPU burst times. Arrange the jobs in ascending order of burst
time. If two jobs have the same burst time, the FCFS approach is used to break the tie.
Each process is executed according to the length of its burst time. Then calculate the
waiting time and turnaround time of each process accordingly.
Program:-
#include <stdio.h>
int main()
{
    int p[20], bt[20], wt[20], tat[20], i, k, n, temp;
    float wtavg, tatavg;
    printf("Enter the number of processes: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++) { p[i] = i; printf("Burst time of P%d: ", i); scanf("%d", &bt[i]); }
    /* Sort the processes in ascending order of burst time (SJF order) */
    for (i = 0; i < n; i++)
        for (k = i + 1; k < n; k++)
            if (bt[i] > bt[k])
            {
                temp = bt[i]; bt[i] = bt[k]; bt[k] = temp;
                temp = p[i];  p[i] = p[k];  p[k] = temp;
            }
    wtavg = wt[0] = 0;
    tatavg = tat[0] = bt[0];
    for (i = 1; i < n; i++)
    {
        wt[i] = wt[i - 1] + bt[i - 1];  tat[i] = tat[i - 1] + bt[i];   /* waiting time = sum of previous bursts */
        wtavg += wt[i];  tatavg += tat[i];
    }
    for (i = 0; i < n; i++)
        printf("P%d\tBT=%d\tWT=%d\tTAT=%d\n", p[i], bt[i], wt[i], tat[i]);
    printf("Average Waiting Time = %.2f\nAverage Turnaround Time = %.2f\n", wtavg / n, tatavg / n);
    return 0;
}
INPUT
OUTPUT
2. Priority
For the priority scheduling algorithm, read the number of processes/jobs in the system,
their CPU burst times, and their priorities. Arrange the jobs in order of priority. If two
jobs have the same priority, the FCFS approach is used to break the tie. Each process is
executed according to its priority. Calculate the waiting time and turnaround time of
each process accordingly.
Program:-
#include <stdio.h>
int main()
{
    int p[20], bt[20], pri[20], wt[20], tat[20], i, k, n, temp;
    float wtavg, tatavg;
    printf("Enter the number of processes: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        p[i] = i;
        printf("Enter burst time and priority of process %d: ", i);
        scanf("%d%d", &bt[i], &pri[i]);
    }
    /* Sort by priority (smaller value = higher priority); keep bt[] and p[] aligned */
    for (i = 0; i < n; i++)
        for (k = i + 1; k < n; k++)
            if (pri[i] > pri[k])
            {
                temp = bt[i];  bt[i] = bt[k];   bt[k] = temp;
                temp = pri[i]; pri[i] = pri[k]; pri[k] = temp;
                temp = p[i];   p[i] = p[k];     p[k] = temp;
            }
    wtavg = wt[0] = 0;
    tatavg = tat[0] = bt[0];
    for (i = 1; i < n; i++)
    {
        wt[i] = wt[i - 1] + bt[i - 1];  tat[i] = tat[i - 1] + bt[i];
        wtavg += wt[i];  tatavg += tat[i];
    }
    for (i = 0; i < n; i++)
        printf("P%d\tPriority=%d\tBT=%d\tWT=%d\tTAT=%d\n", p[i], pri[i], bt[i], wt[i], tat[i]);
    printf("Average Waiting Time = %.2f, Average Turnaround Time = %.2f\n", wtavg / n, tatavg / n);
    return 0;
}
INPUT-
OUTPUT-
3. FCFS
For FCFS scheduling algorithm, read the number of processes/jobs in the system,
their CPU burst times. The scheduling is performed on the basis of arrival time of
the processes irrespective of their other parameters. Each process will be executed
according to its arrival time. Calculate the waiting time and turnaround time of each
of the processes accordingly.
Program:-
#include <stdio.h>
int main()
{
    int bt[20], wt[20], tat[20], i, n;
    float wtavg, tatavg;
    printf("Enter the number of processes: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++) { printf("Burst time of P%d: ", i); scanf("%d", &bt[i]); }
    wtavg = wt[0] = 0;                      /* the first process does not wait */
    tatavg = tat[0] = bt[0];
    for (i = 1; i < n; i++) { wt[i] = wt[i - 1] + bt[i - 1]; tat[i] = tat[i - 1] + bt[i]; wtavg += wt[i]; tatavg += tat[i]; }
    for (i = 0; i < n; i++) printf("P%d\tBT=%d\tWT=%d\tTAT=%d\n", i, bt[i], wt[i], tat[i]);
    printf("Average Waiting Time = %.2f, Average Turnaround Time = %.2f\n", wtavg / n, tatavg / n);
    return 0;
}
INPUT-
OUTPUT-
4. Multi-level Queue
Multi-level queue scheduling algorithm is used in scenarios where the processes can
be classified into groups based on property like process type, CPU time, IO access,
memory size, etc. In a multi-level queue scheduling algorithm, there will be 'n'
number of queues, where 'n' is the number of groups the processes are classified
into. Each queue will be assigned a priority and will have its own scheduling
algorithm like round-robin scheduling or FCFS. For the process in a queue to
execute, all the queues of priority higher than it should be empty, meaning the
process in those high priority queues should have completed its execution. In this
scheduling algorithm, once assigned to a queue, the process will not move to any
other queues.
Program:-
#include <stdio.h>
int main()
{
    int p[20], bt[20], su[20], wt[20], tat[20], i, k, n, temp;
    float wtavg, tatavg;
    printf("Enter the number of processes: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        p[i] = i;
        printf("Enter burst time of process %d: ", i);
        scanf("%d", &bt[i]);
        printf("Queue of process %d (1 = system, 0 = user): ", i);
        scanf("%d", &su[i]);
    }
    /* Order the processes so the system queue (su = 1) runs before the user queue (su = 0) */
    for (i = 0; i < n; i++)
        for (k = i + 1; k < n; k++)
            if (su[i] < su[k])
            {
                temp = bt[i]; bt[i] = bt[k]; bt[k] = temp;
                temp = su[i]; su[i] = su[k]; su[k] = temp;
                temp = p[i];  p[i] = p[k];  p[k] = temp;
            }
    wtavg = wt[0] = 0;
    tatavg = tat[0] = bt[0];
    for (i = 1; i < n; i++)
    {
        wt[i] = wt[i - 1] + bt[i - 1];  tat[i] = tat[i - 1] + bt[i];
        wtavg += wt[i];  tatavg += tat[i];
    }
    for (i = 0; i < n; i++)
        printf("P%d\tQueue=%d\tBT=%d\tWT=%d\tTAT=%d\n", p[i], su[i], bt[i], wt[i], tat[i]);
    printf("Average Waiting Time = %.2f, Average Turnaround Time = %.2f\n", wtavg / n, tatavg / n);
    return 0;
}
INPUT-
OUTPUT-
Experiment No. – 4
OBJECTIVE-Implement file storage allocation techniques:
I. Contiguous (using array)
II. Linked-list (using linked list)
III. Indirect allocation (indexing)
1. Contiguous (using array)
Contiguous file allocation is a method of file organization where the file's data
blocks are allocated contiguously in the storage medium. In the case of contiguous
allocation using an array, the file is represented as a one-dimensional array, where
each element of the array corresponds to a block or record of the file. Each element
holds the data of a single block or record.
Program:-
#include <stdio.h>
#define MAX_FILES 30
struct FileTable
{
char name[20];
int startBlock;
int numBlocks;
};
int main()
{
    struct FileTable ft[MAX_FILES];
    int n, i;
    printf("Enter the number of files: ");
    scanf("%d", &n);
    if (n > MAX_FILES)
    {
        printf("Exceeded maximum number of files.\n");
        return 1;
    }
    for (i = 0; i < n; i++)
    {
        printf("Enter name, starting block and number of blocks for file %d: ", i + 1);
        scanf("%s %d %d", ft[i].name, &ft[i].startBlock, &ft[i].numBlocks);
    }
    printf("\nFile name\tStart block\tEnd block\tLength\n");
    for (i = 0; i < n; i++)
        printf("%s\t\t%d\t\t%d\t\t%d\n", ft[i].name, ft[i].startBlock,
               ft[i].startBlock + ft[i].numBlocks - 1, ft[i].numBlocks);
    return 0;
}
INPUT-
OUTPUT-
2. Linked-list (using linked list)
To implement file allocation using a linked list with indirect allocation (indexing), we
can create a linked list structure where each node represents a block. Additionally,
each file entry will contain a pointer to the head of its linked list, which represents
the index block.
Program:-
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
int numFiles, numBlocks;
char fileName[20];
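    /* Sketch (assumed details): each file's allocated block numbers are kept in a
       singly linked list of nodes; the block numbers are read from the keyboard. */
    struct BlockNode { int blockNumber; struct BlockNode *next; };
    int i, j, blk;
    printf("Enter the number of files: ");
    scanf("%d", &numFiles);
    for (i = 0; i < numFiles; i++)
    {
        struct BlockNode *head = NULL, *tail = NULL;
        printf("Enter the name of file %d and its number of blocks: ", i + 1);
        scanf("%s %d", fileName, &numBlocks);
        for (j = 0; j < numBlocks; j++)          /* build the chain of blocks */
        {
            printf("Enter block number %d of %s: ", j + 1, fileName);
            scanf("%d", &blk);
            struct BlockNode *node = malloc(sizeof(struct BlockNode));
            node->blockNumber = blk;
            node->next = NULL;
            if (head == NULL)
                head = tail = node;
            else
            {
                tail->next = node;
                tail = node;
            }
        }
        printf("Blocks of %s: ", fileName);      /* traverse the chain */
        for (struct BlockNode *p = head; p != NULL; p = p->next)
            printf("%d -> ", p->blockNumber);
        printf("NULL\n");
        while (head != NULL)                     /* release the chain */
        {
            struct BlockNode *tmp = head;
            head = head->next;
            free(tmp);
        }
    }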
return 0;
}
INPUT-
OUTPUT-
3. Indirect allocation (indexing)
Indirect allocation, also known as indexing, is a file allocation method where the
file's blocks are stored indirectly through an index block. Each file has an index block
that contains pointers to data blocks storing the actual file data. This method allows
for efficient management of large files by reducing the overhead of storing block
pointers directly within the file control block.
Program:-
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct IndexBlock
{
int *blockPointers;
};
struct FileEntry
{
char name[20];
int nob;
struct IndexBlock indexBlock;
};
int main()
{
int i, j, n;
char s[20];
printf("Enter no of files: ");
scanf("%d", &n);
OUTPUT-
Experiment No. – 5
OBJECTIVE-Implementation of contiguous allocation techniques:
I. Worst-Fit
II. Best-Fit
III. First-Fit
1. Worst-Fit Allocation:
In the Worst-Fit allocation technique, the largest available free block of memory that
can accommodate the requesting process is allocated to it. The idea is that the leftover
portion of a very large block is itself still large, and therefore more likely to remain
usable by other processes later.
Algorithm:
1. Read the sizes of the memory blocks and the sizes of the files/processes to be placed.
2. For each file, search the free blocks and select the largest block that can accommodate it.
3. Allocate the selected block to the file, mark the block as used, and record the fragmentation (block size minus file size).
4. Display the allocation and fragmentation for each file.
PROGRAM:-
#include <stdio.h>
#include <stdlib.h>
#define MAX_BLOCKS 25
#define MAX_FILES 25
int main()
{
int frag[MAX_FILES], b[MAX_BLOCKS], f[MAX_FILES], bf[MAX_BLOCKS],
ff[MAX_FILES];
int i, j, nb, nf, temp;
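    int worstFitIdx;
    /* Sketch (assumed input format): read the sizes of the free blocks and of the files */
    printf("Enter the number of blocks: ");
    scanf("%d", &nb);
    for (i = 0; i < nb; i++)
    {
        printf("Enter the size of block %d: ", i + 1);
        scanf("%d", &b[i]);
        bf[i] = 0;                              /* 0 = block is still free */
    }
    printf("Enter the number of files: ");
    scanf("%d", &nf);
    for (i = 0; i < nf; i++)
    {
        printf("Enter the size of file %d: ", i + 1);
        scanf("%d", &f[i]);
    }
    for (i = 0; i < nf; i++)
    {
        /* Worst fit: among the free blocks that can hold file i, pick the largest one */
        worstFitIdx = -1;
        for (j = 0; j < nb; j++)
            if (bf[j] == 0 && b[j] >= f[i] &&
                (worstFitIdx == -1 || b[j] > b[worstFitIdx]))
                worstFitIdx = j;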
if (worstFitIdx != -1)
{
ff[i] = worstFitIdx;
bf[worstFitIdx] = 1;
frag[i] = b[worstFitIdx] - f[i];
}
else
{
frag[i] = -1; // No suitable block found
}
}
printf("\nFile_no\tFile_size\tBlock_no\tBlock_size\tFragmentation\n");
for (i = 0; i < nf; i++)
{
printf("%d\t\t%d\t\t", i + 1, f[i]);
if (frag[i] != -1)
{
printf("%d\t\t%d\t\t%d\n", ff[i] + 1, b[ff[i]], frag[i]);
}
else
{
printf("Not allocated\tNot allocated\tNot allocated\n");
}
}
}
INPUT-
OUTPUT-
2. Best-Fit Allocation:
In the Best-Fit allocation technique, the smallest available free block of memory that
is large enough to accommodate the process is selected. This method aims to
minimize wastage by selecting the smallest suitable block.
Algorithm:
1. Read the sizes of the memory blocks and the sizes of the files/processes to be placed.
2. For each file, search the free blocks and select the smallest block that is still large enough to accommodate it.
3. Allocate the selected block to the file, mark the block as used, and record the fragmentation (block size minus file size).
4. Display the allocation and fragmentation for each file.
PROGRAM:-
#include <stdio.h>
#define MAX_BLOCKS 25
#define MAX_FILES 25
int main()
{
int frag[MAX_FILES], b[MAX_BLOCKS], f[MAX_FILES], bf[MAX_BLOCKS],
ff[MAX_FILES];
int i, j, nb, nf, temp, bestFitIdx;
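    /* Sketch (assumed input format): read block and file sizes using 1-based indices,
       to match the report loop below */
    printf("Enter the number of blocks: ");
    scanf("%d", &nb);
    for (i = 1; i <= nb; i++)
    {
        printf("Enter the size of block %d: ", i);
        scanf("%d", &b[i]);
        bf[i] = 0;                              /* 0 = block is still free */
    }
    printf("Enter the number of files: ");
    scanf("%d", &nf);
    for (i = 1; i <= nf; i++)
    {
        printf("Enter the size of file %d: ", i);
        scanf("%d", &f[i]);
    }
    for (i = 1; i <= nf; i++)
    {
        /* Best fit: among the free blocks that can hold file i, pick the one leaving
           the smallest leftover (kept in temp) */
        bestFitIdx = -1;
        for (j = 1; j <= nb; j++)
            if (bf[j] == 0 && b[j] >= f[i] &&
                (bestFitIdx == -1 || b[j] - f[i] < temp))
            {
                bestFitIdx = j;
                temp = b[j] - f[i];
            }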
if (bestFitIdx != -1)
{
ff[i] = bestFitIdx;
bf[bestFitIdx] = 1; // Mark block as allocated
frag[i] = temp;
}
else
{
frag[i] = -1; // No suitable block found
}
}
printf("\nFile_no\tFile_size\tBlock_no\tBlock_size\tFragmentation\n");
for (i = 1; i <= nf; i++)
{
printf("%d\t\t%d\t\t", i, f[i]);
if (frag[i] != -1)
{
printf("%d\t\t%d\t\t%d\n", ff[i], b[ff[i]], frag[i]);
}
else
{
printf("Not allocated\tNot allocated\tNot allocated\n");
}
}
return 0;
}
INPUT-
OUTPUT-
3. First-Fit Allocation:
In the First-Fit allocation technique, the first available free block of memory that is
large enough to accommodate the process is selected. This method aims for
simplicity and efficiency by quickly finding a suitable block.
Algorithm:
1. Read the sizes of the memory blocks and the sizes of the files/processes to be placed.
2. For each file, scan the blocks in order and select the first free block that can accommodate the process.
3. Allocate the selected block, mark it as used, and record the fragmentation (block size minus file size).
4. Display the allocation and fragmentation for each file.
PROGRAM:-
#include <stdio.h>
#define MAX_BLOCKS 25
#define MAX_FILES 25
int main()
{
int frag[MAX_FILES], b[MAX_BLOCKS], f[MAX_FILES], bf[MAX_BLOCKS],
ff[MAX_FILES];
int i, j, nb, nf, temp;
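    /* Sketch (assumed input format): read block and file sizes using 1-based indices,
       to match the report loop below */
    printf("Enter the number of blocks: ");
    scanf("%d", &nb);
    for (i = 1; i <= nb; i++)
    {
        printf("Enter the size of block %d: ", i);
        scanf("%d", &b[i]);
        bf[i] = 0;                              /* 0 = block is still free */
    }
    printf("Enter the number of files: ");
    scanf("%d", &nf);
    for (i = 1; i <= nf; i++)
    {
        printf("Enter the size of file %d: ", i);
        scanf("%d", &f[i]);
    }
    for (i = 1; i <= nf; i++)
    {
        /* First fit: scan the blocks in order and take the first free one that is large enough */
        ff[i] = -1;
        for (j = 1; j <= nb; j++)
            if (bf[j] == 0 && b[j] >= f[i])
            {
                ff[i] = j;
                bf[j] = 1;                      /* mark block j as allocated */
                frag[i] = b[j] - f[i];          /* leftover space in the block */
                break;
            }
    }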
printf("\nFile_no\tFile_size\tBlock_no\tBlock_size\tFragmentation\n");
for (i = 1; i <= nf; i++)
{
printf("%d\t\t%d\t\t", i, f[i]);
if (ff[i] != -1)
{
printf("%d\t\t%d\t\t%d\n", ff[i], b[ff[i]], frag[i]);
}
else
{
printf("Not allocated\tNot allocated\tNot allocated\n");
}
}
return 0;
}
INPUT-
OUTPUT-
Experiment No. – 6
There are two types of fragmentation in an operating system: internal fragmentation and
external fragmentation.
1. Internal Fragmentation:
Internal fragmentation happens when memory is divided into fixed-sized blocks. Whenever a
process requests memory, one of the fixed-sized blocks is allotted to it. When the memory
allotted to the process is somewhat larger than the memory it requested, the difference
between the allotted and the requested memory is called internal fragmentation. The issue
arises because the block sizes are fixed; it can be avoided by using dynamic partitioning
to allot space to processes.
Figure: Internal Fragmentation
The diagram above illustrates internal fragmentation: the difference between the memory
allocated to a process and the memory it actually requires.
2. External Fragmentation:
External fragmentation happens when there is a sufficient total amount of free memory to
satisfy a process's request, but the request cannot be fulfilled because the free memory
is not contiguous. Both the first-fit and best-fit memory allocation strategies suffer
from external fragmentation.
Figure: External Fragmentation
In the diagram above, there is enough total free space (55 KB) to run process 07 (which
requires 50 KB), but the free memory is not contiguous. Compaction, paging, or
segmentation can be used to make this free space usable for the process.
Difference between Internal fragmentation and External fragmentation
Internal fragmentation: the difference between the memory allocated to a process and the
space it actually requires is called internal fragmentation.
External fragmentation: the unused spaces formed between non-contiguous memory fragments
are too small to serve a new process; this is called external fragmentation.
1. Free space list of blocks from the system
The free space list is crucial for efficient memory allocation because it allows the
operating system to quickly identify and allocate memory blocks to new processes or
files as needed. By maintaining this list, the system can avoid allocating memory that is
already in use, thereby minimizing fragmentation and maximizing the utilization of
available memory.
The free space list typically consists of information about each free memory block, such
as its starting address and size. This information enables the operating system to
determine whether a particular memory request can be satisfied and, if so, which
memory block should be allocated to fulfill the request.
In summary, the free space list of blocks from the system serves as a dynamic inventory
of available memory blocks, allowing the operating system to efficiently manage and
allocate memory resources to processes and files as required.
Program-
#include <stdio.h>
#define MAX_BLOCKS 25
#define MAX_FILES 25
int main() {
int frag[MAX_FILES], b[MAX_BLOCKS], f[MAX_FILES], bf[MAX_BLOCKS],
ff[MAX_FILES];
int freeBlocks[MAX_BLOCKS]; // List to store free block numbers
int numFreeBlocks = 0; // Number of free blocks
int i, j, nb, nf, temp;
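    /* Sketch (assumed input format): read block and file sizes with 1-based indices,
       then allocate each file to the first free block that is large enough */
    printf("Enter the number of blocks: ");
    scanf("%d", &nb);
    for (i = 1; i <= nb; i++) {
        printf("Enter the size of block %d: ", i);
        scanf("%d", &b[i]);
        bf[i] = 0;                               /* 0 = block is still free */
    }
    printf("Enter the number of files: ");
    scanf("%d", &nf);
    for (i = 1; i <= nf; i++) {
        printf("Enter the size of file %d: ", i);
        scanf("%d", &f[i]);
    }
    for (i = 1; i <= nf; i++) {
        ff[i] = -1;
        for (j = 1; j <= nb; j++) {
            if (bf[j] == 0 && b[j] >= f[i]) {    /* first free block that fits */
                ff[i] = j;
                bf[j] = 1;
                frag[i] = b[j] - f[i];
                break;
            }
        }
    }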
printf("\nFile_no\tFile_size\tBlock_no\tBlock_size\tFragmentation\n");
for (i = 1; i <= nf; i++) {
printf("%d\t\t%d\t\t", i, f[i]);
if (ff[i] != -1) {
printf("%d\t\t%d\t\t%d\n", ff[i], b[ff[i]], frag[i]);
} else {
printf("Not allocated\tNot allocated\tNot allocated\n");
}
}
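    /* Every block that is still unallocated belongs to the free space list */
    for (j = 1; j <= nb; j++) {
        if (bf[j] == 0)
            freeBlocks[numFreeBlocks++] = j;
    }
    printf("\nFree space list (block_no : block_size):\n");
    for (j = 0; j < numFreeBlocks; j++)
        printf("%d : %d\n", freeBlocks[j], b[freeBlocks[j]]);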
return 0;
}
OUTPUT-
2. List process file from the system
In memory management, the operating system needs to keep track of the processes or
files currently residing in the memory. This involves maintaining a list of processes or
files along with their respective memory locations and sizes. This list is known as the
"List process file from the system."
The purpose of this list is to provide the operating system with information about the
processes or files currently occupying memory so that it can efficiently manage memory
resources. By maintaining this list, the operating system can:
i. Determine the memory requirements of each process or file: The list contains
information about the size of each process or file, allowing the operating system
to allocate appropriate memory resources based on the requirements of each
process or file.
ii. Track the location of processes or files in memory: The list includes information
about the memory locations where each process or file is stored. This enables
the operating system to quickly access and manipulate the contents of memory
when necessary.
iii. Monitor memory usage and availability: By keeping track of the processes or
files in memory, the operating system can monitor memory usage and identify
opportunities to optimize memory allocation or reclaim memory from processes
that are no longer active.
Overall, the "List process file from the system" serves as a vital data structure for the
operating system, providing essential information about the processes or files currently
residing in memory and facilitating efficient memory management.
Program-
#include <stdio.h>
#define MAX_FILES 25
int main()
{
char fileNames[MAX_FILES][20]; // Array to store file names
int numFiles;
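    int i;
    /* Sketch (assumed details): read the file/process names from the keyboard, then list them */
    printf("Enter the number of files in memory: ");
    scanf("%d", &numFiles);
    for (i = 0; i < numFiles && i < MAX_FILES; i++)
    {
        printf("Enter the name of file %d: ", i + 1);
        scanf("%s", fileNames[i]);
    }
    printf("\nFiles (processes) currently in memory:\n");
    for (i = 0; i < numFiles && i < MAX_FILES; i++)
        printf("%d. %s\n", i + 1, fileNames[i]);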
return 0;
}
OUTPUT-
Experiment No. – 7
Objective: -Implementation of compaction for the continually changing memory layout
and calculate total movement of data
Theory:
1. Identify Free Memory: Determine the location and size of free memory blocks in
the memory space.
2. Defragmentation: Move allocated memory blocks towards one end of the memory
space, compacting free memory into a contiguous block.
3. Update Memory Allocation Table: Update the memory allocation table to reflect
the new locations of memory blocks.
4. Calculate Total Movement: Calculate the total movement of data during
compaction, which represents the total amount of data that needs to be moved in
memory.
Program-
#include <stdio.h>
#include <stdlib.h>
#define MAX_BLOCKS 25
struct MemoryBlock {
int startAddress;
int endAddress;
int processID;
};
int main() {
struct MemoryBlock blocks[MAX_BLOCKS];
int totalMovement = 0;
    int numBlocks;
    printf("Enter the number of memory blocks: ");
    scanf("%d", &numBlocks);
    // Read memory layout (start address, end address, process ID) for each block
    printf("Enter the memory layout (start address, end address, process ID) for each block:\n");
for (int i = 0; i < numBlocks; i++) {
printf("Block %d: ", i + 1);
scanf("%d %d %d", &blocks[i].startAddress, &blocks[i].endAddress,
&blocks[i].processID);
}
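    /* Sketch (assumed): compact by sliding every block toward address 0 in the order given,
       keeping the allocated blocks contiguous; the total movement is the total amount of
       data (block sizes) that has to be relocated, as described in the theory above.
       Blocks are assumed to be entered in increasing order of start address. */
    int nextFree = 0;                                 /* next free start address after compaction */
    for (int i = 0; i < numBlocks; i++) {
        int size = blocks[i].endAddress - blocks[i].startAddress + 1;
        if (blocks[i].startAddress != nextFree) {
            totalMovement += size;                    /* this block's data has to be moved */
            blocks[i].startAddress = nextFree;
            blocks[i].endAddress = nextFree + size - 1;
        }
        nextFree += size;
    }
    printf("\nMemory layout after compaction (start, end, process ID):\n");
    for (int i = 0; i < numBlocks; i++)
        printf("%d %d P%d\n", blocks[i].startAddress, blocks[i].endAddress, blocks[i].processID);
    printf("Total movement of data: %d\n", totalMovement);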
return 0;
}
OUTPUT-
Experiment No. – 8
Theory -
Resource Allocation Graph (RAG)
As Banker’s algorithm uses tables like allocation, request, and available to understand the
state of the system, a Resource Allocation Graph (RAG) can represent the same information
graphically. RAGs are helpful to visualize the state of the system in terms of processes and
resources, showing how many resources are available, allocated, and requested by each
process.
Components of RAG:
1. Vertices:
Process vertex: Represented by a circle, each process in the system is shown as
a process vertex.
Resource vertex: Represented by a rectangle, each resource type is shown as a
resource vertex.
2. Edges:
Request edge (P -> R): A directed edge from a process to a resource indicates
the process has requested that resource.
Assignment edge (R -> P): A directed edge from a resource to a process
indicates that the resource has been allocated to the process.
Advantages of RAG:
RAG allows for the visual detection of deadlocks.
It is more intuitive and easier to interpret for systems with fewer processes and
resources.
Deadlock Detection:
A cycle in the RAG indicates a deadlock. If there is a cycle, it means that a set of
processes are waiting for each other in a circular manner.
The diagram below shows a Resource Allocation Graph with two processes (P1, P2) and two
resources (R1, R2). It demonstrates a deadlock situation where:
P1 is holding R1 and waiting for R2.
P2 is holding R2 and waiting for R1.
Figure: Resource Allocation Graph
Program-
#include<stdio.h>
int main() {
int np, nr, temp, temp1;
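    /* Sketch (assumed representation, matching the explanation below): the RAG is stored as an
       adjacency matrix of size (np + nr) x (np + nr); rows/columns 0..np-1 are processes and
       np..np+nr-1 are resources. rag[i][j] = 1 means there is an edge from vertex i to vertex j. */
    int i, j, total;
    printf("Enter the number of processes: ");
    scanf("%d", &np);
    printf("Enter the number of resources: ");
    scanf("%d", &nr);
    total = np + nr;
    int rag[total][total];
    printf("Enter the adjacency matrix of the RAG (%d x %d):\n", total, total);
    for (i = 0; i < total; i++)
        for (j = 0; j < total; j++)
            scanf("%d", &rag[i][j]);
    /* Print the edges: P -> R is a request edge, R -> P is an assignment edge */
    for (i = 0; i < total; i++)
        for (j = 0; j < total; j++)
            if (rag[i][j] == 1) {
                if (i < np && j >= np)
                    printf("P%d -> R%d (request edge)\n", i, j - np);
                else if (i >= np && j < np)
                    printf("R%d -> P%d (assignment edge)\n", i - np, j);
            }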
return 0;
}
INPUT-
OUTPUT-
Explanation:
Processes P0, P1, and P2 are represented by rows/columns 0, 1, and 2, respectively.
Resources R0, R1, and R2 are represented by rows/columns 3, 4, and 5, respectively. The
matrix entries indicate which resources are held by and requested by each process.
Matrix Interpretation:
P0 -> R2 (request edge)
R0 -> P0 (assignment edge)
P1 -> R0 (request edge)
R1 -> P1 (assignment edge)
P2 -> R1 (request edge)
R2 -> P2 (assignment edge)
Experiment No. – 9
Theory –
Banker’s Algorithm
The Banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests
for safety by simulating the allocation for predetermined maximum possible amounts of all
resources, then makes an “s-state” check to test for possible activities, before deciding
whether allocation should be allowed to continue.
Following Data structures are used to implement the Banker’s Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource types.
Available:
It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
Available[ j ] = k means there are ‘k’ instances of resource type Rj available.
Max:
It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in
a system.
Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type
Rj.
Allocation:
It is a 2-d array of size ‘n*m’ that defines the number of resources of each type
currently allocated to each process.
Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource
type Rj
Need :
It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
Need [ i, j ] = k means process Pi currently needs ‘k’ instances of resource type Rj
for its execution.
Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi and Needi specifies the
additional resources that process Pi may still request to complete its task.
Banker’s algorithm consists of Safety algorithm and Resource request algorithm.
Safety Algorithm:
1. Initialization:
Define two vectors, ‘Work’ and ‘Finish’, of lengths ‘m’ and ‘n’ respectively.
Initialize ‘Work’ to be equal to ‘Available’.
Set ‘Finish[i]’ to ‘false’ for all processes ‘i’.
2. Safety Check:
Iterate through the processes ‘i’.
Find a process ‘i’ such that:
‘Finish[i]’ is ‘false.’
‘Need[i]’ <= ‘Work’.
3. Resource Allocation:
Update the Work vector by adding the Allocation vector of process i.
Set Finish[i] to true.
Repeat step 2.
4. Safe State Check:
If no such process can be found, the system is in a safe state if and only if ‘Finish[i]’ is
true for all processes.
Resource-Request Algorithm:
1. Request Validation:
If Request[i] <= Need[i], proceed to step 2.
Otherwise, raise an error since the process has exceeded its maximum claim.
2. Availability Check:
If Request[i] <= Available, proceed to step 3. Otherwise, the process must wait, since
the resources are not yet available.
3. Pretended Allocation:
Pretend to allocate the requested resources:
Available = Available – Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] – Request[i]
Run the Safety Algorithm on the resulting state. If the state is safe, the request is
granted; otherwise, the old state is restored and process Pi must wait.
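As an illustration, the request check of these three steps can be sketched as a small helper
(a minimal sketch only; it assumes the data structures above with m = 3 resource types, and
the function and parameter names are illustrative rather than part of the listing below):
/* Sketch of the Resource-Request check for process i (m = 3 resource types assumed) */
int request_resources(int i, int m, int req[], int need[][3], int alloc[][3], int avail[])
{
    int j;
    for (j = 0; j < m; j++)
        if (req[j] > need[i][j])
            return -1;                /* error: the process exceeded its maximum claim */
    for (j = 0; j < m; j++)
        if (req[j] > avail[j])
            return 0;                 /* not enough resources free: the process must wait */
    for (j = 0; j < m; j++)           /* pretend to allocate the request */
    {
        avail[j] -= req[j];
        alloc[i][j] += req[j];
        need[i][j] -= req[j];
    }
    return 1;                         /* the caller must now run the safety algorithm */
}
If the pretended allocation passes the safety check, the request is granted; otherwise the
changes are rolled back and the process waits. The program below implements the safety
algorithm itself.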
#include <stdio.h>
int main()
{
int n, m, i, j, k;
n = 5;
m = 3;
int alloc[5][3] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}, {2, 1, 1}, {0, 0, 2}};
int max[5][3] = {{7, 5, 3}, {3, 2, 2}, {9, 0, 2}, {2, 2, 2}, {4, 3, 3}};
int avail[3] = {3, 3, 2};
int f[n], ans[n], ind = 0;
for (k = 0; k < n; k++)
f[k] = 0;
int need[n][m];
for (i = 0; i < n; i++)
{
for (j = 0; j < m; j++)
need[i][j] = max[i][j] - alloc[i][j];
}
int y = 0;
for (k = 0; k < 5; k++)
{
for (i = 0; i < n; i++)
{
if (f[i] == 0)
{
int flag = 0;
for (j = 0; j < m; j++)
{
if (need[i][j] > avail[j])
{
flag = 1;
break;
}
}
if (flag == 0)
{
ans[ind++] = i;
for (y = 0; y < m; y++)
avail[y] += alloc[i][y];
f[i] = 1;
}
}
}
}
printf("Following is the SAFE Sequence\n");
for (i = 0; i < n - 1; i++)
printf(" P%d ->", ans[i]);
printf(" P%d\n", ans[n - 1]);
return 0;
}
INPUT-
OUTPUT-
Explanation:
The program defines the allocation, maximum, and available resource matrices directly in the code.
It then executes the Banker’s Algorithm to determine the safe sequence of
processes.
Finally, it outputs the safe sequence of processes.
Experiment No. – 10
Objective: - Conversion of a resource allocation graph (RAG) to a wait-for graph (WFG) for
each type of method used for storing the graph.
Theory -
Resource Allocation Graph (RAG):
A Resource Allocation Graph (RAG) is a directed graph used in operating systems to
represent the allocation of resources to processes and the requests for additional resources.
The graph consists of vertices representing processes and resources, and edges
representing the allocation and request relationships.
Program-
#include <stdio.h>
#define MAX 20
int main() {
int np, nr, temp, temp1;
printf("Enter number of resources: ");
scanf("%d", &nr);
printf("Enter number of processes: ");
scanf("%d", &np);
int rag[MAX][MAX];
int i, j;
int wfg[MAX][MAX];
// Initialize the WFG matrix to zero
for (i = 0; i < np; i++) {
for (j = 0; j < np; j++) {
wfg[i][j] = 0;
}
}
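    /* Sketch (assumed representation, as in the RAG experiment): the RAG is an adjacency matrix
       of size (np + nr) x (np + nr); vertices 0..np-1 are processes, np..np+nr-1 are resources. */
    int total = np + nr;
    printf("Enter the adjacency matrix of the RAG (%d x %d):\n", total, total);
    for (i = 0; i < total; i++)
        for (j = 0; j < total; j++)
            scanf("%d", &rag[i][j]);
    /* Conversion: process Pi waits for process Pj whenever Pi requests a resource Rk
       (edge Pi -> Rk) that is currently assigned to Pj (edge Rk -> Pj). */
    for (i = 0; i < np; i++)
        for (int k = np; k < total; k++)
            if (rag[i][k] == 1)                       /* request edge Pi -> Rk */
                for (j = 0; j < np; j++)
                    if (rag[k][j] == 1)               /* assignment edge Rk -> Pj */
                        wfg[i][j] = 1;                /* Pi waits for Pj */
    printf("\nWait-For Graph (WFG) adjacency matrix:\n");
    for (i = 0; i < np; i++) {
        for (j = 0; j < np; j++)
            printf("%d ", wfg[i][j]);
        printf("\n");
    }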
return 0;
}
Input:
Output:
Experiment No. – 11
Objective: - Implement the solution for the Bounded Buffer (producer-consumer) problem
using an inter-process communication technique - Semaphores.
Theory
Semaphores:
Semaphores are synchronization tools used to solve critical section problems and to achieve
process synchronization in the multi-processing environment.
Problem Statement:
We have a fixed-size buffer shared between a producer and a consumer.
The producer generates data and puts it into the buffer if there is space.
The consumer removes data from the buffer if there is any data available.
Use semaphores to manage access to the buffer and ensure proper synchronization
between the producer and consumer.
Semaphores Used:
1. Mutex: Ensures mutual exclusion for accessing the buffer.
2. Empty: Counts the number of empty slots in the buffer.
3. Full: Counts the number of full slots in the buffer.
Program-
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#define BUFFER_SIZE 5
int buffer[BUFFER_SIZE];
int in = 0, out = 0;
sem_t empty;
sem_t full;
pthread_mutex_t mutex;
void *producer(void *arg)
{
    int i, item;
    for (i = 0; i < 10; i++)               /* produce a fixed number of items (assumed: 10) */
    {
        item = i + 1;                      /* produce an item */
        sem_wait(&empty);                  /* wait until at least one slot is empty */
        pthread_mutex_lock(&mutex);        /* enter the critical section */
        buffer[in] = item;
        printf("Producer produced %d\n", item);
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                   /* signal that one more slot is full */
    }
    return NULL;
}
void *consumer(void *arg)
{
    int i, item;
    for (i = 0; i < 10; i++)
    {
        sem_wait(&full);                   /* wait until at least one slot is full */
        pthread_mutex_lock(&mutex);        /* enter the critical section */
        item = buffer[out];
        printf("Consumer consumed %d\n", item);
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                  /* signal that one more slot is empty */
    }
    return NULL;
}
int main()
{
pthread_t prod, cons;
sem_init(&empty, 0, BUFFER_SIZE);
sem_init(&full, 0, 0);
pthread_mutex_init(&mutex, NULL);
pthread_create(&prod, NULL, producer, NULL);   /* start the producer and consumer threads */
pthread_create(&cons, NULL, consumer, NULL);
pthread_join(prod, NULL);
pthread_join(cons, NULL);
sem_destroy(&empty);
sem_destroy(&full);
pthread_mutex_destroy(&mutex);
return 0;
}
Explanation
1. Buffer: The shared buffer is an array of fixed size (BUFFER_SIZE). Two indices, in and out,
are used to keep track of the next position to produce and consume items, respectively.
2. Semaphores:
empty: Initialized to the size of the buffer to represent the number of empty slots.
full: Initialized to 0 to represent the number of full slots.
mutex: A mutex semaphore to ensure mutual exclusion when accessing the buffer.
3. Producer Thread:
Generates an item.
Waits on empty semaphore to ensure there is space in the buffer.
Locks the mutex to enter the critical section and add the item to the buffer.
Updates the in index.
Unlocks the mutex.
Signals the full semaphore to indicate the buffer is not empty.
4. Consumer Thread:
Waits on full semaphore to ensure there is an item in the buffer.
Locks the mutex to enter the critical section and remove the item from the buffer.
Updates the out index.
Unlocks the mutex.
Signals the empty semaphore to indicate the buffer is not full.
5. Main Function:
Initializes semaphores and mutex.
Creates producer and consumer threads.
Waits for the threads to complete.
Destroys semaphores and mutex.
Output:
In this example output, the producer generates items and places them in the buffer, and
the consumer removes items from the buffer. The semaphores ensure proper synchronization,
preventing buffer overflow and underflow.
Experiment No. – 12
Objective: - Implement the solution for the Readers-Writers problem using an inter-process
communication technique - Semaphores.
Theory
Readers-Writers Problem:
The Readers-Writers problem is a classic synchronization problem that involves managing
access to a shared resource (like a database) between multiple readers and writers. The main
goal is to ensure that:
Multiple readers can read the shared resource simultaneously without interference.
Writers must have exclusive access to the shared resource, ensuring that no readers
or other writers are accessing it concurrently.
Semaphores:
Semaphores are used to manage synchronization in the Readers-Writers problem. The
following semaphores and variables are typically used:
mutex: Ensures mutual exclusion when updating the shared resource or shared
variables.
writeblock: Ensures mutual exclusion for writers to access the shared resource.
readcount: Keeps track of the number of readers currently accessing the shared
resource.
Problem Statement:
Multiple readers can read the shared resource simultaneously.
Only one writer can write to the shared resource at a time.
If a writer is writing to the shared resource, no reader should be able to read it.
Proper synchronization must be achieved using semaphores.
Semaphores Used:
1. mutex: Ensures mutual exclusion when readers update the read count.
2. writeblock: Ensures mutual exclusion for writers to access the shared resource.
3. readcount: A shared variable to count the number of active readers.
Program-
#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
sem_t mutex, writeblock;        /* mutex protects readcount; writeblock gives writers exclusive access */
int data = 0, readcount = 0;
void *reader(void *arg)
{
    int f = (int) (intptr_t) arg;          /* reader number */
    sem_wait(&mutex);
    readcount++;
    if (readcount == 1)                    /* the first reader locks out the writers */
        sem_wait(&writeblock);
    sem_post(&mutex);
    printf("Data read by reader %d is %d\n", f, data);
    sem_wait(&mutex);
    readcount--;
    if (readcount == 0)                    /* the last reader lets the writers in again */
        sem_post(&writeblock);
    sem_post(&mutex);
    return NULL;
}
void *writer(void *arg)
{
    int f = (int) (intptr_t) arg;          /* writer number */
    sem_wait(&writeblock);                 /* exclusive access to the shared data */
    data++;
    printf("Data written by writer %d is %d\n", f, data);
    sem_post(&writeblock);
    return NULL;
}
int main()
{
int i;
pthread_t rtid[5], wtid[5];
sem_init(&mutex, 0, 1);
sem_init(&writeblock, 0, 1);
for (i = 0; i < 5; i++)
{
pthread_create(&wtid[i], NULL, writer, (void *) (intptr_t) i);
pthread_create(&rtid[i], NULL, reader, (void *) (intptr_t) i);
}
for (i = 0; i < 5; i++)
{
    pthread_join(rtid[i], NULL);           /* wait for all threads before destroying the semaphores */
    pthread_join(wtid[i], NULL);
}
sem_destroy(&mutex);
sem_destroy(&writeblock);
return 0;
}
Explanation
1. Initialization of Semaphores:
mutex: A binary semaphore (mutex) to ensure mutual exclusion when readers update
the readcount.
writeblock: A binary semaphore to ensure mutual exclusion for writers.
2. Reader Thread:
Increments the readcount and locks the writeblock semaphore if it is the first reader.
Reads the data and prints it.
Decrements the readcount and releases the writeblock semaphore if it is the last reader.
3. Writer Function:
Locks the writeblock semaphore to get exclusive access to the data.
Writes the data and prints it.
Releases the writeblock semaphore.