Operating Systems Notes
[R17A0513]
LECTURE NOTES
DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
Objectives:
To understand the main components of an OS and how they work
To study the operations performed by the OS as a resource manager
To understand the different scheduling policies of an OS
To understand the different memory management techniques
To understand process concurrency and synchronization
To understand the concepts of input/output, storage and file management
To study different operating systems and compare their features.
UNIT - II: Process and CPU Scheduling - Process concepts - The Process, Process State,
Process Control Block, Threads, Process Scheduling - Scheduling Queues, Schedulers, Context
Switch, Preemptive Scheduling, Dispatcher, Scheduling Criteria, Scheduling algorithms,
Multiple-Processor Scheduling, Real-Time Scheduling, Thread scheduling, Case studies: Linux,
Windows. Process Coordination - Process Synchronization, The Critical section Problem,
Peterson's solution, Synchronization Hardware, Semaphores, and Classic Problems of
Synchronization, Monitors, Case Studies: Linux, Windows.
UNIT - III: Memory Management and Virtual Memory - Logical & physical Address Space,
Swapping, Contiguous Allocation, Paging, Structure of Page Table. Segmentation, Segmentation
with Paging, Virtual Memory, Demand Paging, Performance of Demand Paging, Page
Replacement - Page Replacement Algorithms, Allocation of Frames, Thrashing.
UNIT - IV: File System Interface - The Concept of a File, Access methods, Directory Structure,
File System Mounting, File Sharing, Protection, File System Implementation - File System
Structure, File System Implementation, Allocation methods, Free-space Management, Directory
Implementation, Efficiency and Performance. Mass Storage Structure - Overview of Mass
Storage Structure, Disk Structure, Disk Attachment, Disk Scheduling, Disk Management, Swap
space Management.
REFERENCE BOOKS:
1. Modern Operating Systems, Andrew S. Tanenbaum, 3rd Edition, PHI.
2. Operating Systems: A Concept-Based Approach, D. M. Dhamdhere, 2nd Edition, TMH.
3. Principles of Operating Systems, B. L. Stuart, Cengage Learning, India Edition.
4. Operating Systems, A. S. Godbole, 2nd Edition, TMH.
5. An Introduction to Operating Systems, P. C. P. Bhatt, PHI.
6. Operating Systems, S. Haldar and A. A. Arvind, Pearson Education.
7. Operating Systems, R. Elmasri, A. G. Carrick and D. Levine, McGraw Hill.
8. Operating Systems in Depth, T. W. Doeppner, Wiley.
Outcomes:
Apply optimization techniques for the improvement of system performance.
Ability to understand the synchronous and asynchronous communication mechanisms in their respective OS.
Learn about minimization of turnaround time, waiting time and response time, and maximization of throughput while keeping the CPU as busy as possible.
Ability to compare different operating systems.
UNIT - I: Operating System Introduction - Operating Systems Objectives and functions, Computer System
Architecture, OS Structure, OS Operations, Evolution of Operating Systems - Simple Batch, Multi
programmed, time shared, Personal Computer, Parallel, Distributed Systems, Real-Time Systems, Special -
Purpose Systems, Operating System services, user OS Interface, System Calls, Types of System Calls,
System Programs, Operating System Design and Implementation, OS Structure, Virtual machines
OPERATING SYSTEM NOTES III YEAR/I SEM MRCET
1. Booting
Booting is the process of starting the computer. The operating system checks the hardware and makes the computer ready to work.
2. Memory Management
Memory management is another important function of the operating system; memory cannot be managed without it. Different programs and data reside in memory at one time. If there were no operating system, the programs could overwrite each other and the system would not work properly.
3. Loading and Execution
A program must be loaded into memory before it can be executed. The operating system provides the facility to load programs into memory easily and then execute them.
4. Data security
Data is an important part of computer system. The operating system protects the data stored on
the computer from illegal use, modification or deletion.
5. Disk Management
Operating system manages the disk space. It manages the stored files and folders in a proper
way.
6. Process Management
The CPU can perform one task at a time. If there are many tasks, the operating system decides which task should get the CPU.
7. Device Controlling
The operating system also controls all devices attached to the computer. The hardware devices are controlled with the help of small programs called device drivers.
8. Providing interface
The operating system provides an interface so that the user can interact with the computer. The user interface controls how you input data and instructions and how information is displayed on the screen. The operating system offers two types of interface to the user:
1. Graphical user interface (GUI): It provides a visual environment to communicate with the computer, using windows, icons, menus and other graphical objects to issue commands.
2. Command-line interface (CLI): It provides an interface to communicate with the computer by typing commands.
1. Single-processor system
2. Multiprocessor system
3. Clustered Systems:
1. Single-Processor Systems:
Some computers use only one processor, such as microcomputers (or personal computers, PCs). On a single-processor system, there is only one CPU that performs all the activities in the computer system. However, most of these systems have other special-purpose processors, such as I/O processors that move data quickly among the different components of the computer. These processors execute only a limited set of system programs and do not run user programs. Sometimes they are managed by the operating system. For example, PCs contain a special-purpose microprocessor in the keyboard, which converts keystrokes into computer codes to be sent to the CPU. The use of special-purpose microprocessors is common in microcomputers, but it does not make the system a multiprocessor. A system that has only one general-purpose CPU is considered a single-processor system.
2. Multiprocessor Systems:
In a multiprocessor system, two or more processors work together. In this system, multiple programs (more than one program) are executed on different processors at the same time. This type of processing is known as multiprocessing. Some operating systems have features of multiprocessing; UNIX is an example of a multiprocessing operating system, and some versions of Microsoft Windows also support multiprocessing.
The multiprocessing system, in which each processor is assigned a specific task, is known as
Asymmetric Multiprocessing System. For example, one processor is dedicated to handling users' requests, one to running application programs, one to image processing, and so on. In this system, one processor works as the master processor, while the other processors work as slave processors. The master processor controls the operations of the system; it also schedules and distributes tasks among the slave processors, and the slave processors perform the predefined tasks.
The multiprocessing system, in which multiple processors work together on the same task, is
known as Symmetric Multiprocessing System. In this system, each processor can perform all
types of tasks. All processors are treated equally and no master-slave relationship exists
between the processors.
For example, different processors in the system can communicate with each other. Similarly, I/O can be processed on any processor. However, I/O must be controlled to ensure that the data reaches the appropriate processor. Because all the processors share the same memory, the input data given to the processors and their results must be separately controlled. Today all modern operating systems, including Windows and Linux, provide support for SMP.
It must be noted that in the same computer system, the asymmetric multiprocessing and
symmetric multiprocessing technique can be used through different operating systems.
A Dual-Core Design
3. Clustered Systems:
A clustered system is another form of multiprocessor system. This system also contains multiple processors, but it differs from a multiprocessor system in that it consists of two or more individual systems that are coupled together. In a clustered system, the individual systems (or clustered computers) share the same storage and are linked together via a Local Area Network (LAN).
A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of
the other nodes over the LAN. If the monitored machine fails due to some technical fault (or
due to other reason), the monitoring machine can take ownership of its storage. The
monitoring machine can also restart the applications that were running on the failed machine.
The users of the applications see only an interruption of service.
2) Multitasking
Operating-system Operations
1) Dual-Mode Operation
In order to ensure the proper execution of the operating system, we must be able to distinguish
between the execution of operating-system code and user defined code. The approach taken by
most computer systems is to provide hardware support that allows us to differentiate among
various modes of execution.
At the very least, we need two separate modes of operation: user mode and kernel mode.
A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user. When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system (via a system call), it must transition from user to kernel mode to fulfill the request.
At system boot time, the hardware starts in kernel mode. The operating system is then loaded
and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware
switches from user mode to kernel mode (that is, changes the state of the mode bit to 0). Thus,
whenever the operating system gains control of the computer, it is in kernel mode. The system
always switches to user mode (by setting the mode bit to 1) before passing control to a user
program.
The dual mode of operation provides us with the means for protecting the operating system from errant users, and errant users from one another. We accomplish this protection by designating some of the machine instructions that may cause harm as privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is made to execute a privileged instruction in user mode, the hardware does not execute the instruction but rather treats it as illegal and traps it to the operating system. The instruction to switch to kernel mode is an example of a privileged instruction. Some other examples include I/O control, timer management and interrupt management.
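The mode-bit mechanism described above can be sketched as a toy simulation. This is purely illustrative: real mode switching is enforced by hardware, not by software like this, and the instruction names used here are made up.

```python
# Toy simulation of dual-mode operation. Mode bit: kernel = 0, user = 1.
# The instruction names in PRIVILEGED are hypothetical examples.

KERNEL, USER = 0, 1
PRIVILEGED = {"set_timer", "io_control", "disable_interrupts"}

class CPU:
    def __init__(self):
        self.mode = KERNEL          # hardware starts in kernel mode at boot

    def execute(self, instruction):
        if instruction in PRIVILEGED and self.mode == USER:
            # Hardware refuses the instruction and traps to the OS,
            # which switches the mode bit back to kernel (0).
            self.mode = KERNEL
            return "trap: illegal instruction"
        return f"executed {instruction}"

cpu = CPU()
cpu.mode = USER                     # mode bit set to 1 before running a user program
print(cpu.execute("add"))           # ordinary instruction: allowed in user mode
print(cpu.execute("set_timer"))    # privileged instruction in user mode: trapped
```

After the trap, the simulated CPU is back in kernel mode, mirroring how control passes to the operating system on an illegal-instruction trap.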
Personal-Computer Systems(PCs)
A personal computer (PC) is a small, relatively inexpensive computer designed for an
individual user. In price, personal computers range anywhere from a few hundred dollars to
thousands of dollars. All are based on the microprocessor technology that enables
manufacturers to put an entire CPU on one chip.
At home, the most popular use for personal computers is for playing games. Businesses
use personal computers for word processing, accounting, desktop publishing, and for
running spreadsheet and database management applications.
a) Real-Time Embedded Systems
These devices are found everywhere, from car engines and manufacturing robots to DVDs
and microwave ovens. They tend to have very specific tasks.
They have little or no user interface, preferring to spend their time monitoring and
managing hardware devices, such as automobile engines and robotic arms.
b) Multimedia Systems
Most operating systems are designed to handle conventional data such as text files, programs,
word-processing documents, and spreadsheets. However, a recent trend in technology is the
incorporation of multimedia data into computer systems. Multimedia data consist of audio
and video files as well as conventional files. These data differ from conventional data in that
multimedia data, such as frames of video, must be delivered (streamed) according to certain
time restrictions (for example, 30 frames per second). Multimedia describes a wide range of
applications in popular use today. These include audio files such as MP3, DVD movies,
video conferencing, and short video clips of movie previews or news stories downloaded
over the Internet. Multimedia applications may also include live webcasts (broadcasting over
the World Wide Web).
Operating System Services
One set of operating-system services provides functions that are helpful to the user:
Communications – Processes may exchange information, on the same computer or between computers over a network. Communications may be via shared memory or through message passing (packets moved by the OS).
Error detection – The OS needs to be constantly aware of possible errors, which may occur in the CPU and memory hardware, in I/O devices, or in the user program. For each type of error, the OS should take the appropriate action to ensure correct and consistent computing. Debugging facilities can greatly enhance the user's and programmer's abilities to use the system efficiently.
Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing:
Resource allocation – When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them. There are many types of resources: some (such as CPU cycles, main memory, and file storage) may have special allocation code, while others (such as I/O devices) may have general request and release code.
Accounting – To keep track of which users use how much and what kinds of computer resources.
Protection and security – The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with each other.
Protection involves ensuring that all access to system resources is controlled
Security of the system from outsiders requires user authentication, extends to defending external I/O
devices from invalid access attempts
If a system is to be protected and secure, precautions must be instituted throughout it. A chain is only as
strong as its weakest link.
User Operating System Interface - CLI
Command Line Interface (CLI) or command interpreter allows direct command entry
Sometimes implemented in kernel, sometimes by systems program
Sometimes multiple flavors implemented – shells
Primarily fetches a command from the user and executes it
Information maintenance
These system calls exist purely for transferring information between the user program and the OS. They can return information about the system, such as the number of current users, the version number of the operating system, the amount of free memory or disk space, and so on.
o get time or date, set time or date
o get system data, set system data
o get and set process, file, or device attributes
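A few of these information-maintenance calls can be seen through Python's standard library, which wraps the underlying system calls:

```python
# Information-maintenance system calls as exposed through Python's standard
# library; each call below wraps an underlying OS call (e.g. getpid, time).
import os
import platform
import time

print("time:", time.time())         # get time (seconds since the epoch)
print("pid:", os.getpid())          # get process attribute: process id
print("os:", platform.system())     # get system data: OS name
print("cpus:", os.cpu_count())      # get system data: number of CPUs
```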
Communications
Two common models of communication
Message-passing model: information is exchanged through an interprocess-communication facility provided by the OS.
Shared-memory model: processes use map-memory system calls to gain access to regions of memory owned by other processes.
o create, delete communication connection
o send, receive messages
o transfer status information
o attach and detach remote devices
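The message-passing model can be sketched with Python's multiprocessing module, which asks the OS to create a pipe between two processes. This is a minimal illustration of create-connection / send / receive, not a definitive IPC design.

```python
# Message-passing sketch: two processes exchange data through an IPC
# facility (a pipe) provided by the OS via Python's multiprocessing module.
from multiprocessing import Pipe, Process

def child(conn):
    msg = conn.recv()               # receive message from the parent
    conn.send(msg.upper())          # send a reply back
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()   # create communication connection
    p = Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send("hello")          # send message
    print(parent_conn.recv())          # prints "HELLO"
    p.join()                           # wait for the child, then clean up
```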
MS-DOS execution
System Programs
System programs provide a convenient environment for program development and execution. They can be divided into:
File manipulation
Status information
File modification
Programming language support
Program loading and execution
Communications
Application programs
Most users' view of the operating system is defined by system programs, not the actual system calls, because system programs provide a convenient environment for program development and execution. Some of them are simply user interfaces to system calls; others are considerably more complex.
File management - Create, delete, copy, rename, print, dump, list, and generally manipulate files and
directories
Status information
Some ask the system for info - date, time, amount of available memory, disk space, number of users
Others provide detailed performance, logging, and debugging information
Typically, these programs format and print the output to the terminal or other output devices
Some systems implement a registry - used to store and retrieve configuration information
File modification
Text editors to create and modify files
Special commands to search contents of files or perform transformations of the text
Programming-language support - Compilers, assemblers, debuggers and interpreters sometimes
provided
Program loading and execution- Absolute loaders, relocatable loaders, linkage editors, and overlay-
loaders, debugging systems for higher-level and machine language
Communications - Provide the mechanism for creating virtual connections among processes, users, and
computer systems
Allow users to send messages to one another’s screens, browse web pages, send electronic-mail
messages, log in remotely, transfer files from one machine to another
The operating system is divided into a number of layers (levels), each built on top of lower layers. The
bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of
only lower-level layers
Traditional UNIX System Structure
UNIX
UNIX – limited by hardware functionality, the original UNIX operating system had limited structuring.
The UNIX OS consists of two separable parts
Systems programs
The kernel
Consists of everything below the system-call interface and above the physical hardware
Provides the file system, CPU scheduling, memory management, and other operating-system
functions; a large number of functions for one level
Microkernel System Structure
Moves as much as possible from the kernel into user space
Communication takes place between user modules using message passing
Benefits:
Easier to extend a microkernel
Easier to port the operating system to new architectures
More reliable (less code is running in kernel mode)
More secure
Detriments:
Performance overhead of user space to kernel space communication
MacOS X Structure
Modules
Virtual Machines
A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating system kernel as though they were all hardware.
A virtual machine provides an interface identical to the underlying bare hardware.
The host operating system creates the illusion that a process has its own processor and (virtual) memory.
Each guest is provided with a (virtual) copy of the underlying computer.
Virtual Machines History and Benefits
First appeared commercially in IBM mainframes in 1972
Fundamentally, multiple execution environments (different operating systems) can share the same hardware
Protection from each other
Some sharing of files can be permitted and controlled
Communicate with each other and with other physical systems via networking
Useful for development and testing
Consolidation of many low-resource-use systems onto fewer, busier systems
"Open Virtual Machine Format", a standard format for virtual machines, allows a VM to run within many different virtual machine (host) platforms
Para-virtualization
Presents the guest with a system similar but not identical to the hardware
The guest must be modified to run on the paravirtualized hardware
The guest can be an OS, or, in the case of Solaris 10, applications running in containers
Solaris 10 with Two Containers
VMware Architecture
Operating-System Debugging
UNIT-2
Process and CPU Scheduling : Process concepts- the process, process states, process control block,
Threads, process scheduling- Scheduling queues, schedulers, context switch, preemptive scheduling,
dispatcher, scheduling criteria, scheduling algorithms, multiprocessor scheduling, real time scheduling,
Thread scheduling, case studies Linux, Windows.
Process Coordination- Process synchronization, the critical- section problem, Peterson’s Solution,
synchronization Hardware, semaphores, classic problems of synchronization, monitors, Case studies
Linux, Windows.
Process
Process                              Program
Process is a dynamic object.         Program is a static object.
Process States
When a process executes, it changes state. Generally, the state of a process is determined by the current activity of the process. Each process may be in one of the following states:
b) Ready -> Running: The OS selects one of the jobs from the ready queue and moves it from Ready to Running.
d) Running -> Ready: When the time slice of the processor expires, or the processor receives an interrupt signal, the OS shifts the process from the Running to the Ready state.
e) Running -> Waiting: A process is put into the waiting state if the process needs to wait for an event to occur or requires an I/O device.
Process Control Block (PCB): It is also called Task Control Block. It contains many pieces of information associated with a specific process.
Process State
Program Counter
CPU Registers
Accounting Information
Threads:
A process is divided into a number of lightweight processes; each lightweight process is said to be a thread. A thread has a program counter (keeps track of which instruction to execute next), registers (hold its current working variables), and a stack (execution history).
Thread States:
Process                                                Thread
Takes more time to create.                             Takes less time to create.
Takes more time to complete execution and terminate.   Takes less time to terminate.
Execution is very slow.                                Execution is very fast.
Takes more time to switch between two processes.       Takes less time to switch between two threads.
Communication between two processes is difficult.      Communication between two threads is easy.
Processes can't share the same memory area.            Threads can share the same memory area.
System calls are required to communicate.              System calls are not required.
Processes are loosely coupled.                         Threads are tightly coupled.
Requires more resources to execute.                    Requires fewer resources to execute.
Multithreading
A process is divided into a number of smaller tasks; each task is called a thread. When a number of threads within a process execute at a time, it is called multithreading.
If a program is multithreaded, then even when some portion of it is blocked, the whole program is not blocked: the rest of the program continues working. If multiple CPUs are available, multithreading gives the best performance. If we have only a single thread, no performance benefit is achieved no matter how many CPUs are available.
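A minimal sketch of this idea using Python's threading module: one thread blocks on a simulated I/O wait while another keeps computing, so the program as a whole is not blocked.

```python
# While one thread blocks (simulated I/O wait), another thread keeps
# working, so the whole program is not blocked.
import threading
import time

results = []

def blocked_task():
    time.sleep(0.2)                 # simulate waiting for an I/O event
    results.append("I/O done")

def working_task():
    total = sum(range(1000))        # CPU work proceeds in the meantime
    results.append(f"sum={total}")

t1 = threading.Thread(target=blocked_task)
t2 = threading.Thread(target=working_task)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                      # working_task finishes first
```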
1) User Threads: Thread creation, scheduling and management happen in user space, by a thread library. User threads are faster to create and manage. However, if a user thread performs a blocking system call, all the other threads in that process are also blocked: the whole process is blocked.
Advantages
Disadvantages
2) Kernel Threads: The kernel creates, schedules and manages these threads. These threads are slower to create and manage. If one thread in a process is blocked, the whole process need not be blocked.
Advantages
The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than user threads.
Transfer of control from one thread to another within same process requires a mode switch to
the Kernel.
Multithreading Models
Some operating system provides a combined user level thread and Kernel level thread facility. Solaris is
a good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors and a blocking system call need not block the entire
process. Multithreading models are three types
Many to many relationship.
Many to one relationship.
One to one relationship.
Many-to-Many Model: In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine. The following diagram shows the many-to-many model. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Many-to-One Model: This model maps many user-level threads to one kernel-level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors. If user-level thread libraries are implemented on an operating system whose kernel does not support threads, the many-to-one model is used.
One-to-One Model: There is a one-to-one relationship between each user-level thread and a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
PROCESS SCHEDULING:
In multiprogramming, the CPU is always busy, because the CPU switches from one job to another; in simple computers, the CPU sits idle until an I/O request is granted.
Scheduling is an important OS function. All resources are scheduled before use (CPU, memory, devices, ...).
SCHEDULING QUEUES: Just as people live in rooms, processes are present in queues, known as scheduling queues.
3. Device queue: A process that is in the waiting state, waiting for an I/O event to complete, is said to be in the device queue.
(or)
The processes waiting for a particular I/O device form the device queue.
Scheduler duties :
Types of schedulers
Context Switch: Assume main memory contains more than one process. If the CPU is executing a process and its time expires, or a higher-priority process enters main memory, the scheduler saves information about the current process in its PCB and switches to execute another process. The concept of the scheduler moving the CPU from one process to another is known as a context switch.
Non-Preemptive Scheduling: The CPU is assigned to one process and is not released until that process completes. The CPU is assigned to some other process only after the previous process has finished.
Preemptive scheduling: Here the CPU can be released from a process even in the middle of its execution. Suppose the CPU is executing process P1 and receives a signal from process P2. The OS compares the priorities of P1 and P2: if P1 > P2, the CPU continues executing P1; if P1 < P2, the CPU preempts P1 and is assigned to P2.
Dispatcher: The main job of dispatcher is switching the cpu from one process to another process.
Dispatcher connects the cpu to the process selected by the short term scheduler.
Dispatch latency: The time it takes the dispatcher to stop one process and start another is known as dispatch latency. As dispatch latency increases, the degree of multiprogramming decreases.
SCHEDULING CRITERIA:
1. Throughput: how many jobs are completed by the CPU within a time period.
2. Turnaround time: the time interval between the submission of a process and the time of its completion.
TAT = waiting time in ready queue + executing time + waiting time in waiting queue for I/O.
3. Waiting time: the time spent by the process waiting for the CPU to be allocated.
4. Response time: the time duration between the submission and the first response.
5. CPU utilization: the CPU is a costly device; it must be kept as busy as possible.
E.g.: CPU efficiency of 90% means it is busy for 90 units and idle for 10 units.
CPU SCHEDULING ALGORITHMS:
1. First Come First Served Scheduling (FCFS): The process that requests the CPU first holds the CPU first. When a process requests the CPU, it is loaded into the ready queue, and the CPU is connected to that process.
Consider the following set of processes that arrive at time 0; the length of the CPU burst time is given in milliseconds.
Burst time is the time the CPU requires to execute that job, in milliseconds.
for P5 = 55 - 0 = 55
Process  Burst Time  Arrival Time
P1       3           0
P2       6           2
P3       4           4
P4       5           6
P5       2           8
Turn Around Time for P1 => 0+3 = 3
Turn Around Time for P2 => 1+6 = 7
Turn Around Time for P3 => 5+4 = 9
Turn Around Time for P4 => 7+5 = 12
Turn Around Time for P5 => 10+2 = 12
Average Turn Around Time => (3+7+9+12+12)/5 => 43/5 = 8.6 ms.
Average Response Time:
Formula: Response Time = First Response - Arrival Time
Response Time of P1 = 0
Response Time of P2 => 3-2 = 1
Response Time of P3 => 9-4 = 5
Response Time of P4 => 13-6 = 7
Response Time of P5 => 18-8 = 10
Average Response Time => (0+1+5+7+10)/5 => 23/5 = 4.6 ms
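The calculations above can be checked with a short FCFS simulation. P1's row (burst 3, arrival 0) is inferred from its turnaround and response times, since that row was lost from the printed table.

```python
# FCFS simulation of the worked example above (times in ms).
# P1's burst (3) and arrival (0) are inferred from its turnaround time.
procs = [("P1", 3, 0), ("P2", 6, 2), ("P3", 4, 4), ("P4", 5, 6), ("P5", 2, 8)]

clock = 0
tat, resp = {}, {}
for name, burst, arrival in procs:            # FCFS: serve in arrival order
    clock = max(clock, arrival)               # CPU idles if nothing has arrived
    resp[name] = clock - arrival              # response = first run - arrival
    clock += burst
    tat[name] = clock - arrival               # turnaround = completion - arrival

print(tat)                              # {'P1': 3, 'P2': 7, 'P3': 9, 'P4': 12, 'P5': 12}
print(sum(tat.values()) / len(tat))     # average turnaround = 8.6
print(sum(resp.values()) / len(resp))   # average response  = 4.6
```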
Advantages: Easy to Implement, Simple.
2. Shortest Job First Scheduling (SJF): The CPU is assigned to the process that has the smallest CPU burst time. If two processes have the same CPU burst time, FCFS is used to break the tie.
Process  Burst Time
P1       5
P2       24
P3       16
P4       10
P5       3
P5 has the least CPU burst time (3 ms), so the CPU is assigned to P5 first. After P5 completes, the short-term scheduler searches for the next shortest job (P1), and so on.
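Since all jobs arrive at time 0, non-preemptive SJF simply runs them in increasing burst-time order. A small sketch:

```python
# Non-preemptive SJF for the table above: all jobs arrive at time 0,
# so the scheduler runs them in increasing burst-time order.
procs = {"P1": 5, "P2": 24, "P3": 16, "P4": 10, "P5": 3}

order = sorted(procs, key=procs.get)          # shortest burst first
clock, waiting = 0, {}
for name in order:
    waiting[name] = clock                     # time spent in the ready queue
    clock += procs[name]

print(order)                                  # ['P5', 'P1', 'P4', 'P3', 'P2']
print(sum(waiting.values()) / len(waiting))   # average waiting time = 12.6
```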
3. Shortest Remaining Time First (SRTF): The short-term scheduler always chooses the process that has the shortest remaining time. When a new process joins the ready queue, the short-term scheduler compares the remaining time of the executing process with that of the new process. If the new process has the least CPU burst time, the scheduler selects that job and connects it to the CPU; otherwise, the old process continues.
Process  Burst Time  Arrival Time
P1       3           0
P2       6           2
P3       4           4
P4       5           6
P5       2           8
P1 arrives at time 0 and starts executing. P2 arrives at time 2; comparing P1's remaining time ( 3-2 = 1 ) with P2's burst ( 6 ), P1 continues. After P1 finishes, P2 executes. At time 4, P3 arrives; comparing P2's remaining time ( 6-1 = 5 ) with P3's burst ( 4 ), since 4 < 5, P3 executes. At time 6, P4 arrives; comparing P3's remaining time ( 4-2 = 2 ) with P4's burst ( 5 ), since 2 < 5, P3 continues. After P3 finishes, the ready queue holds P2, P4 and P5, and P5 ( burst 2 ) is the least of the three, so P5 executes, then P2, then P4.
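The trace above can be checked with a small one-tick-at-a-time SRTF simulation. This is an illustrative sketch; the process data come from the table above, with P1 taken as (burst 3, arrival 0) as the trace implies.

```python
# SRTF (preemptive SJF) sketch: one time unit per step; at each tick the
# arrived process with the least remaining time runs.
def srtf(procs):
    """procs: list of (name, burst, arrival). Returns completion times."""
    remaining = {name: burst for name, burst, _ in procs}
    arrival = {name: arr for name, _, arr in procs}
    done, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:                 # CPU idle until the next arrival
            clock += 1
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining time
        remaining[p] -= 1
        clock += 1
        if remaining[p] == 0:
            done[p] = clock
            del remaining[p]
    return done

done = srtf([("P1", 3, 0), ("P2", 6, 2), ("P3", 4, 4), ("P4", 5, 6), ("P5", 2, 8)])
```

The completion times reproduce the order in the trace: P1, P3, P5, P2, P4.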
4) ROUND ROBIN SCHEDULING :
It is designed especially for time-sharing systems. Here the CPU switches between the processes: when the time quantum expires, the CPU is switched to another job. A small unit of time is called a time quantum or time slice; a time quantum is generally from 10 to 100 ms and generally depends on the OS. Here the ready queue is a circular queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
Process    Burst Time
P1         30
P2         6
P3         8
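The circular-queue behaviour can be sketched for the table above. The quantum used in the notes' worked example is not visible in this excerpt, so the quantum of 4 ms below is an assumed demo value.

```python
from collections import deque

# Round-robin sketch: all processes arrive at time 0 and sit in a FIFO
# ready queue; each gets one quantum, then goes back to the tail.
def round_robin(bursts, quantum):
    """bursts: list of (name, burst). Returns completion times."""
    queue = deque(bursts)
    clock, completion = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)          # run one quantum or until done
        clock += run
        if rem - run == 0:
            completion[name] = clock
        else:
            queue.append((name, rem - run))  # back to the tail of the queue
    return completion

completion = round_robin([("P1", 30), ("P2", 6), ("P3", 8)], quantum=4)
```

With q = 4 ms, the short jobs P2 and P3 finish early while the long job P1 keeps cycling, which is exactly the fairness round robin is designed for.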
5) PRIORITY SCHEDULING :
Process    Burst Time    Priority
P2         12            4
P3         1             5
P4         3             1
P5         4             3
P4 has the highest priority. Allocate the CPU to process P4 first next P1, P5, P2, P3.
Disadvantage : Starvation
Starvation means only high-priority processes keep executing, while low-priority processes wait for the CPU for very long periods of time.
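The priority rule can be sketched as below, assuming (as is common) that a smaller priority number means higher priority. P1's burst time is not visible in this excerpt, so 10 is a hypothetical placeholder; its priority of 2 follows from the order P4, P1, P5, P2, P3 given above.

```python
# Non-preemptive priority scheduling sketch: all arrive at time 0.
def priority_schedule(procs):
    """procs: list of (name, burst, priority); lower number = higher priority."""
    order = sorted(procs, key=lambda p: p[2])   # highest priority first
    return [name for name, _, _ in order]

order = priority_schedule([
    ("P1", 10, 2),   # burst 10 is a hypothetical placeholder
    ("P2", 12, 4),
    ("P3", 1, 5),
    ("P4", 3, 1),
    ("P5", 4, 3),
])
```

The ordering depends only on the priority column, so the placeholder burst does not affect the result.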
6) Multilevel Queue Scheduling :
The ready queue is partitioned into a number of separate ready queues. Each ready queue holds one type of job and has its own scheduling algorithm. For example, the ready queue may be partitioned into 4 queues: one for system processes, one for foreground processes, one for background processes, and one for student processes. No student process runs unless the system, foreground, and background queues are all empty.
Each queue gets a certain portion of the cpu time , which it can then schedule among its various processes.
System Process
Interactive Process
Batch Process
Student Process
7) Multilevel Feedback Queues :
This algorithm allows a process to move between the queues. If a process uses too much CPU time, it is moved to the next lower-priority queue; a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This prevents starvation.
E.g.: There are 3 queues (Q0, Q1, Q2). The scheduler first executes all processes in Q0; only when Q0 is empty will it execute Q1. The processes in Q2 are executed only if Q0 and Q1 are empty. The highest-priority queue is Q0, and the lowest-priority queue is Q2.
A process entering the ready queue is put in Q0. A process in Q0 is given a time quantum of 8 ms; if it does not finish within this time, it is moved to the tail of Q1. If Q0 is empty, the process at the head of Q1 is given a time quantum of 16 ms; if it does not complete, it is put into Q2. Q2 processes are run on an FCFS basis, but only when Q0 and Q1 are empty.
This algorithm gives highest priority to any process with a CPU burst of 8 ms or less. Processes that need more than 8 but less than 24 ms are also served quickly. Long processes automatically sink to Q2 and are served FCFS.
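The three-queue scheme above can be sketched as follows. All jobs are assumed to arrive at time 0 (so the queues simply drain in order); the burst times are arbitrary demo values.

```python
# MLFQ sketch: Q0 (quantum 8 ms), Q1 (quantum 16 ms), Q2 (FCFS).
# A job that overruns its quantum is demoted to the next queue.
def mlfq(bursts, quanta=(8, 16)):
    """bursts: list of (name, burst). Returns completion times."""
    q0 = list(bursts)
    q1, q2, clock, completion = [], [], 0, {}

    def run(queue, quantum, overflow):
        nonlocal clock
        for name, rem in queue:
            run_for = rem if quantum is None else min(quantum, rem)
            clock += run_for
            if rem - run_for == 0:
                completion[name] = clock
            else:
                overflow.append((name, rem - run_for))  # demote

    run(q0, quanta[0], q1)   # Q1 runs only when Q0 is empty
    run(q1, quanta[1], q2)   # Q2 runs only when Q0 and Q1 are empty
    run(q2, None, [])        # FCFS: run to completion
    return completion

completion = mlfq([("A", 5), ("B", 20), ("C", 40)])
```

Job A (5 ms) finishes inside Q0, B (20 ms) needs one Q1 slice, and C (40 ms) sinks all the way to Q2, matching the behaviour described above.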
Thread Scheduling
Kernel-level threads are scheduled by the operating system, while user-level threads are managed by a thread library. To run on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread.
Contention scope:
Defines whether a thread contends for processing resources with other threads within the same process or with all threads in the system.
a) Process contention scope:
Competition for the CPU takes place among threads belonging to the same process.
b) System contention scope:
Competition for the CPU takes place among all threads in the system.
Multiple-processor scheduling:
When multiple processors are available, scheduling gets more complicated, because there is more than one CPU which must be kept busy and in effective use at all times.
Load sharing revolves around balancing the load between multiple processors. Multiprocessor systems may be heterogeneous (containing different kinds of CPUs) or homogeneous (all the same kind of CPU).
1) Approaches to multiple-processor scheduling
a) Asymmetric multiprocessing:
One processor is the master, controlling all activities and running all kernel code, while the others run only user code.
b) Symmetric multiprocessing:
Each processor schedules its own jobs. Each processor may have its own private queue of ready processes.
2) Processor Affinity
Successive memory accesses by a process are often satisfied in cache memory. What happens if the process migrates to another processor? The contents of cache memory must be invalidated for the first processor, and the cache for the second processor must be repopulated. Most symmetric multiprocessor systems therefore try to avoid migrating processes from one processor to another and instead keep a process running on the same processor. This is called processor affinity.
a) Soft affinity:
Soft affinity occurs when the system attempts to keep processes on the same processor but makes no
guarantees.
b) Hard affinity:
Process specifies that it is not to be moved between processors.
3) Load balancing:
Load balancing keeps one processor from sitting idle while another is overloaded. Balancing can be achieved through push migration or pull migration.
Push migration:
Push migration involves a separate process that runs periodically(e.g every 200 ms) and moves processes
from heavily loaded processors onto less loaded processors.
Pull migration:
Pull migration involves idle processors taking processes from the ready queues of the other processors.
4) Multicore processors:
A multi-core processor is a single computing component with 2 or more independent actual processing units (cores), which are the units that read and execute program instructions.
When a processor accesses memory, it spends some time waiting for the data to become available; this is known as a memory stall. To remedy this, designers create multithreaded processor cores in which 2 or more hardware threads are assigned to each core. If one thread stalls while waiting for memory, the core can switch to another thread.
From the operating system's perspective, each hardware thread appears as a logical processor that is available to run a software thread. Thus, on a dual-threaded, dual-core system, four logical processors are presented to the operating system.
A virtualized system has one host OS and many guest OSes. The host OS creates and manages the virtual machines; each virtual machine has a guest OS installed and applications running within that guest.
Process coordination:
Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach agreement or commit to a certain sequence of actions. Coordination of simultaneous processes to complete a task is known as process synchronization.
The critical section problem
Consider a system consisting of n processes. Each process has a segment of code; this segment of code is said to be its critical section.
E.g.: Railway reservation system.
Two persons from different stations want to reserve their tickets. The train number and destination are the same, and the two persons try to get the reservation at the same time. Unfortunately, only one berth is available, and both are trying for that berth.
This is an instance of the critical section problem. The solution is that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
The critical section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
A solution to the critical section problem must satisfy the following 3 requirements:
1. Mutual exclusion:
Only one process can execute its critical section at any time.
2. Progress:
When no process is executing in a critical section for some data, one of the processes wishing to enter its critical section for that data will be granted entry.
3. Bounded waiting:
No process should wait for a resource for an infinite amount of time.
Critical section:
The portion of any program that accesses a shared resource is called the critical section (or critical region).
Peterson's solution:
Peterson's solution is one of the solutions to the critical section problem involving two processes. This solution states that when one process is executing its critical section, the other process executes the rest of its code, and vice versa.
Peterson's solution requires two shared data items:
1) turn: indicates whose turn it is to enter the critical section. If turn == i, then process i is allowed into its critical section.
2) flag: indicates when a process wants to enter its critical section. When process i wants to enter its critical section, it sets flag[i] to true.
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; // busy wait
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (TRUE);
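The algorithm above can be exercised with two Python threads protecting a shared counter. This is only a sketch: on real hardware the busy-wait loop also needs memory barriers, and CPython's interpreter happens to provide the needed ordering here.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # shorter thread slices keep the spin loops cheap

flag = [False, False]         # flag[i]: process i wants to enter
turn = 0                      # whose turn it is to defer
count = 0                     # shared data protected by the algorithm
N = 2000                      # increments per thread (demo value)

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(N):
        flag[i] = True                 # entry section
        turn = j
        while flag[j] and turn == j:
            pass                       # busy wait
        count += 1                     # critical section
        flag[i] = False                # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```

If mutual exclusion held throughout, no increment is lost and the final count is exactly 2 * N.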
Synchronization hardware
In a uniprocessor multiprogrammed system, mutual exclusion can be obtained by disabling the
interrupts before the process enters its critical section and enabling them after it has exited the critical
section.
Disable interrupts
Critical section
Enable interrupts
Once a process is in its critical section it cannot be interrupted. This solution cannot be used in a multiprocessor environment, since processes run independently on different processors.
In multiprocessor systems, a TestAndSet instruction is provided; it completes execution without interruption. Each process, when entering its critical section, must set a lock to prevent other processes from entering their critical sections simultaneously, and must release the lock when exiting its critical section.
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
If a process wants to enter its critical section and the value of lock is false, then TestAndSet returns false and the value of lock becomes true. Thus, for other processes wanting to enter their critical sections, TestAndSet returns true, and those processes busy-wait until the process in the critical section exits and sets the value of lock back to false.
• Definition:
boolean TestAndSet(boolean &lock) {
    boolean temp = lock;
    lock = true;
    return temp;
}
Algorithm for TestAndSet
do {
    while (TestAndSet(&lock))
        ; // do nothing
    // critical section
    lock = false;
    // remainder section
} while (TRUE);
The Swap instruction can be used similarly. lock is a global variable initialized to false; each process has a local variable key. When a process wants to enter its critical section, the value of lock is false and its key is true:
lock = false, key = true
After the swap instruction:
lock = true, key = false
Since key is now false, the process exits the repeat-until loop and enters its critical section. While that process is in its critical section (lock = true), other processes wanting to enter the critical section will have
lock = true, key = true
Hence they busy-wait in the repeat-until loop until the process exits its critical section and sets the value of lock to false.
Semaphores
A semaphore is an integer variable that is accessed only through two atomic operations:
1) wait: the wait operation decrements the count by 1. If the resulting value is negative, the process executing the wait operation is blocked.
2) signal: the signal operation increments the count by 1. If the resulting value is not positive, one of the processes blocked in a wait operation is unblocked.
wait (S) {
    while (S <= 0)
        ; // no-op
    S--;
}

signal (S) {
    S++;
}
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
The first process that executes the wait operation is immediately granted entry, and sem.count becomes 0. If some other process then wants the critical section and executes wait(), it is blocked, since the value becomes -1. When the process in the critical section exits, it executes signal(); sem.count is incremented by 1, and a blocked process is removed from the semaphore's queue and added to the ready queue.
Problems:
1) Deadlock
Deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other blocked processes.
2) Starvation
One or more processes remain blocked forever and never get a chance to take their turn in the critical section.
3) Priority inversion
If a low-priority process is running while medium-priority processes wait for it, and high-priority processes wait for the medium-priority processes, this is called priority inversion.
The two most common kinds of semaphores are counting semaphores and binary semaphores. Counting semaphores represent multiple resources, while binary semaphores, as the name implies, represent two possible states (generally 0 or 1; locked or unlocked).
Classic problems of synchronization
1) Bounded-buffer problem
Two processes share a common, fixed-size buffer. The producer puts information into the buffer and the consumer takes it out.
A problem arises when the producer wants to put a new item in the buffer but it is already full; the solution is for the producer to wait until the consumer has consumed at least one item. Similarly, if the consumer wants to remove an item from the buffer and sees that the buffer is empty, it sleeps until the producer puts something in the buffer and wakes it up.
do {
    // produce an item
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (TRUE);
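The wait(empty)/wait(mutex)/signal(mutex)/signal(full) pattern maps directly onto Python's semaphores. The buffer size and item count below are arbitrary choices for the demo.

```python
import threading
from collections import deque

BUF_SIZE, ITEMS = 3, 50
buffer = deque()
mutex = threading.Semaphore(1)         # binary semaphore guarding the buffer
empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
consumed = []

def producer():
    for i in range(ITEMS):
        empty.acquire()                # wait(empty)
        mutex.acquire()                # wait(mutex)
        buffer.append(i)               # add the item to the buffer
        mutex.release()                # signal(mutex)
        full.release()                 # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()                 # wait(full)
        mutex.acquire()                # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()                # signal(mutex)
        empty.release()                # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
```

Because the buffer is FIFO and guarded by the mutex, the consumer receives every item exactly once and in order.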
2) Readers-writers problem
A process wishing to modify the shared data must request the lock in write mode. Multiple processes are permitted to concurrently acquire a reader-writer lock in read mode, but only one process may acquire the lock for writing, as exclusive access is required for writers.
3) Dining-philosophers problem
One simple solution is to represent each fork with a semaphore. A philosopher tries to grab a fork by executing a wait() operation on that semaphore and releases his forks by executing the signal() operation. This solution guarantees that no two neighbours are eating simultaneously.
However, suppose all 5 philosophers become hungry simultaneously and each grabs his left fork: each will wait forever for his right fork, and all are delayed forever (deadlock).
Several remedies:
1) Allow at most 4 philosophers to be sitting simultaneously at the table.
2) Allow a philosopher to pick up his forks only if both forks are available.
3) An odd philosopher picks up first his left fork and then his right fork; an even philosopher picks up his right fork first and then his left fork.
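Remedy 1 can be sketched with threads: a counting semaphore admits at most 4 of the 5 philosophers to the table, which makes deadlock impossible (one seated philosopher can always get both forks). The meal count is an arbitrary demo value.

```python
import threading

N, MEALS = 5, 10
forks = [threading.Lock() for _ in range(N)]
table = threading.Semaphore(N - 1)   # at most N-1 philosophers seated at once
meals = [0] * N

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    for _ in range(MEALS):
        table.acquire()   # sit down only if a seat is free
        left.acquire()    # wait() on the left fork
        right.acquire()   # wait() on the right fork
        meals[i] += 1     # eat
        right.release()   # signal() on the right fork
        left.release()    # signal() on the left fork
        table.release()   # leave the table

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

The program always terminates with every philosopher having eaten all of his meals; with 5 seats instead of 4 it could deadlock exactly as described above.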
MONITORS
The disadvantage of semaphores is that they are an unstructured construct: wait and signal operations can be scattered throughout a program, and hence debugging becomes difficult.
A monitor is an object that contains both the data and the procedures needed to perform allocation of a shared resource. To accomplish resource allocation using monitors, a process must call a monitor entry routine. Many processes may want to enter the monitor at the same time, but only one process at a time is allowed to enter. Data inside a monitor may be either global to all routines within the monitor or local to a specific routine. Monitor data is accessible only within the monitor; there is no way for processes outside the monitor to access it. This is a form of information hiding.
If a process calls a monitor entry routine while no other process is executing inside the monitor, the process acquires a lock on the monitor and enters it. While a process is in the monitor, other processes may not enter the monitor to acquire the resource. If a process calls a monitor entry routine while the monitor is locked, the monitor makes the calling process wait outside until the lock on the monitor is released. The process that holds the resource will call a monitor entry routine to release the resource; this routine frees the resource and then calls signal to allow one of the waiting processes to enter the monitor and acquire the resource. The monitor gives higher priority to waiting processes than to newly arriving ones.
Structure:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { … }
    …
    procedure Pn (…) { … }
    initialization code (…) { … }
}
Processes can call the procedures P1, …, Pn, but they cannot access the local variables of the monitor.
A monitor provides condition variables along with two operations on them, wait and signal:
wait(condition variable)
signal(condition variable)
Every condition variable has an associated queue. A process calling wait on a particular condition variable is placed into the queue associated with that condition variable. A process calling signal on a particular condition variable causes a process waiting on that condition variable to be removed from the queue associated with it.
Solution to the producer-consumer problem using monitors:
monitor producerconsumer
{
    condition full, empty;
    int count;

    procedure insert(item)
    {
        if (count == MAX)
            wait(full);
        insert_item(item);
        count = count + 1;
        if (count == 1)
            signal(empty);
    }

    procedure remove()
    {
        if (count == 0)
            wait(empty);
        remove_item(item);
        count = count - 1;
        if (count == MAX - 1)
            signal(full);
    }
}

procedure producer()
{
    producerconsumer.insert(item);
}

procedure consumer()
{
    producerconsumer.remove();
}
Solution to the dining-philosophers problem using monitors:
A philosopher may pick up his forks only if both of them are available. A philosopher can eat only if his two neighbours are not eating; otherwise the philosopher delays himself when he is hungry.
DiningPhilosophers.take_forks( ): acquires the forks, which may block the process.
eat_noodles( )
DiningPhilosophers.put_forks( ): releases the forks.
Resuming processes within a monitor
If several processes are suspended on condition x and x.signal( ) is executed by some process, how do we determine which of the suspended processes should be resumed next?
One solution is FCFS (the process that has been waiting the longest is resumed first). In many circumstances such a simple technique is not adequate; an alternative is to assign priorities and wake up the process with the highest priority.
UNIT-3
Memory Management and Virtual Memory - Logical & Physical Address Space, Swapping, Contiguous
Allocation, Paging, Structure of Page Table, Segmentation, Segmentation with Paging, Virtual Memory,
Demand Paging, Performance of Demand Paging, Page Replacement - Page Replacement Algorithms,
Allocation of Frames, Thrashing
Logical And Physical Addresses
An address generated by the CPU is commonly referred to as a Logical Address, whereas the address seen by the memory unit (the one loaded into the memory address register of the memory) is commonly referred to as the Physical Address. Compile-time and load-time address binding generate identical logical and physical addresses; however, the execution-time address-binding scheme results in differing logical and physical addresses.
The set of all logical addresses generated by a program is known as the Logical Address Space, whereas the set of all physical addresses corresponding to these logical addresses is the Physical Address Space. The run-time mapping from virtual address to physical address is done by a hardware device known as the Memory Management Unit. In this mapping, the base register is known as the relocation register. The value in the relocation register is added to the address generated by a user process at the time it is sent to memory. Let's understand this situation with the help of an example: if the base register contains the value 1000, then an attempt by the user to address location 0 is dynamically relocated to location 1000, and an access to location 346 is mapped to location 1346.
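The relocation-register mapping in that example is a single addition, which a two-line sketch makes concrete (the register value 1000 comes from the example above):

```python
RELOCATION = 1000   # value of the relocation (base) register in the example

def mmu(logical_address, relocation=RELOCATION):
    # The MMU adds the relocation register to every address the process generates.
    return relocation + logical_address

physical0 = mmu(0)       # user address 0 relocates to 1000
physical346 = mmu(346)   # user address 346 relocates to 1346
```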
Memory-Management Unit (MMU)
Hardware device that maps virtual to physical address
In MMU scheme, the value in the relocation register is added to every address generated by a user
process at the time it is sent to memory
The user program deals with logical addresses; it never sees the real physical addresses.
The user program never sees the real physical address space; it always deals with logical addresses. We thus have two different types of addresses: logical addresses in the range (0 to max) and physical addresses in the range (R to R+max), where R is the value of the relocation register. The user generates only logical addresses and thinks that the process runs in locations 0 to max. Since the user program supplies only logical addresses, these logical addresses must be mapped to physical addresses before they are used.
Base and Limit Registers
A pair of base and limit registers define the logical address space
Address binding of instructions and data to memory addresses can happen at three different stages
Compile time: If memory location known a priori, absolute code can be generated; must recompile
code if starting location changes
Load time: Must generate relocatable code if memory location is not known at compile time
Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., base and limit registers)
Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused routine is never loaded
Useful when large amounts of code are needed to handle infrequently occurring cases
No special support from the operating system is required; it is implemented through program design
Dynamic Linking
Linking postponed until execution time
A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. The operating system is needed to check whether the routine is in the process's memory address space. Dynamic linking is particularly useful for libraries; such a system is also known as shared libraries.
Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images.
Roll out, roll in – swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and Windows).
System maintains a ready queue of ready-to-run processes which have memory images on disk
Contiguous Allocation
Multiple-partition allocation
Hole – block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it.
Contiguous memory allocation is one of the efficient ways of allocating main memory to the
processes. The memory is divided into two partitions. One for the Operating System and
another for the user processes. Operating System is placed in low or high memory depending on
the interrupt vector placed. In contiguous memory allocation each process is contained in a
single contiguous section of memory.
Memory protection
Memory protection is required to protect Operating System from the user processes and user
processes from one another. A relocation register contains the value of the smallest physical
address for example say 100040. The limit register contains the range of logical address for
example say 74600. Each logical address must be less than limit register. If a logical address is
greater than the limit register, then there is an addressing error and it is trapped. The limit
register hence offers memory protection.
The MMU, that is, Memory Management Unit maps the logical address dynamically, that is at
run time, by adding the logical address to the value in relocation register. This added value is
the physical memory address which is sent to the memory.
The CPU scheduler selects a process for execution and a dispatcher loads the limit and
relocation registers with correct values. The advantage of relocation register is that it provides an
efficient way to allow the Operating System size to change dynamically.
Memory allocation
There are two methods: the fixed partition method and the variable partition method. In the fixed (multiple) partition method, the memory is divided into several fixed-size partitions, and one process occupies each partition. This scheme is rarely used nowadays. The degree of multiprogramming depends on the number of partitions; the degree of multiprogramming is the number of programs that are in main memory, and the CPU is never left idle in multiprogramming. This scheme was used by IBM OS/360 and called MFT, which stands for Multiprogramming with a Fixed number of Tasks.
Generalization of fixed partition scheme is used in MVT. MVT stands for Multiprogramming
with a Variable number of Tasks. The Operating System keeps track of which parts of memory
are available and which is occupied. This is done with the help of a table that is maintained by
the Operating System. Initially the whole of the available memory is treated as one large block of memory called a hole. The programs that enter the system are maintained in an input queue. From the hole, blocks of main memory are allocated to the programs in the input queue. If the hole is large, it is split into two: one part is allocated to the arriving process and the other is returned to the set of holes. As memory is allocated and freed, a set of scattered holes results; if holes are adjacent, they can be merged.
Now there comes a general dynamic storage allocation problem. The following are the
solutions to the dynamic storage allocation problem.
First fit: The first hole that is large enough is allocated. Searching for the holes starts from the beginning of the set of holes, or from where the previous first-fit search ended.
Best fit: The smallest hole that is big enough to accommodate the incoming process is allocated. If the available holes are ordered by size, the searching can be reduced.
First fit and best fit are better than worst fit in decreasing time and storage utilization; first fit is generally faster.
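Both strategies can be sketched over a list of hole sizes. The hole sizes below are arbitrary demo values, not from the notes.

```python
def first_fit(holes, request):
    """Index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest hole that still fits, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]
# For a 212-unit request, first fit stops at the 500 hole,
# while best fit scans everything and picks the 300 hole.
```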
Fragmentation
The disadvantage of contiguous memory allocation is fragmentation. There are two types
of fragmentation, namely, Internal fragmentation and External fragmentation.
Internal fragmentation
When memory is free internally, that is inside a process, but cannot be used, we call that fragment an internal fragment. For example, say a hole of size 18464 bytes is available and the size of the process is 18462 bytes. If the hole is allocated to this process, then two bytes are left unused. These two bytes, which cannot be used, form the internal fragmentation. The worst part is that the overhead to keep track of these two bytes is more than the two bytes themselves.
External fragmentation
All three dynamic storage allocation methods discussed above suffer from external fragmentation. When the total memory space obtained by adding up the scattered holes is sufficient to satisfy a request, but it is not available contiguously, this type of fragmentation is called external fragmentation.
One more solution to external fragmentation is to have the logical address space and physical
address space to be non contiguous. Paging and Segmentation are popular non contiguous
allocation methods.
Paging
A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM. The paging technique plays an important role in implementing virtual memory.
Paging is a memory management technique in which process address space is broken into
blocks of the same size called pages (size is power of 2, between 512 bytes and 8192 bytes).
The size of the process is measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called
frames and the size of a frame is kept the same as that of a page to have optimum utilization of
the main memory and to avoid external fragmentation.
Paging Hardware
Address Translation
A page address is called a logical address and is represented by a page number and an offset:
Logical Address = page number + page offset
A frame address is called a physical address and is represented by a frame number and an offset.
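The translation step can be sketched directly: the page number indexes the page table, and the offset carries over unchanged. The page size and page-table contents below are hypothetical demo values.

```python
PAGE_SIZE = 1024   # demo page size (a power of 2)

def translate(logical, page_table, page_size=PAGE_SIZE):
    page, offset = divmod(logical, page_size)   # split the logical address
    frame = page_table[page]                    # page-table lookup
    return frame * page_size + offset           # rebuild the physical address

page_table = {0: 5, 1: 6, 2: 1, 3: 7}   # page -> frame (hypothetical mapping)
physical = translate(3500, page_table)  # logical 3500 = page 3, offset 428
```

Because the page size is a power of 2, the split is just the high and low bits of the address, which is why hardware can do it without an actual division.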
Paging Example
Free Frames
When the system allocates a frame to any page, it translates this logical address into a physical address and creates an entry in the page table, to be used throughout execution of the program.
When a process is to be executed, its corresponding pages are loaded into any available memory frames. Suppose you have a program of 8 KB but your memory can accommodate only 5 KB at a given point in time; then the paging concept comes into the picture. When a computer runs out of RAM, the operating system (OS) moves idle or unwanted pages of memory to secondary memory to free up RAM for other processes, and brings them back when needed by the program.
This process continues during the whole execution of the program: the OS keeps removing idle pages from main memory, writing them onto secondary memory, and bringing them back when required by the program.
Effective access time with a TLB (hit ratio a, TLB lookup time e, one memory access taking 1 time unit): EAT = (1 + e)a + (2 + e)(1 - a) = 2 + e - a
Memory Protection
Memory protection is implemented by associating a protection bit with each frame.
A valid-invalid bit is attached to each entry in the page table:
"valid" indicates that the associated page is in the process's logical address space, and is thus a legal page; "invalid" indicates that the page is not in the process's logical address space.
Valid (v) or Invalid (i) Bit In A Page Table
Shared Pages
Shared code
One copy of read-only (reentrant) code is shared among processes (i.e., text editors, compilers, window systems). Shared code must appear in the same location in the logical address space of all processes.
Private code and data
Each process keeps a separate copy of the code and data. The pages for the private code and data can appear anywhere in the logical address space.
Shared Pages Example
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table. For a 32-bit logical address with a 4 KB page size, the address is divided as:

page number         page offset
p1 (10 bits)   p2 (10 bits)   d (12 bits)
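The bit split can be sketched with shifts and masks, matching the 10/10/12 division above:

```python
def split(addr):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: index into the inner page table
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: index into the outer page table
    return p1, p2, d

# 0x00403004 sets bit 22 (p1 = 1), bits 12-13 (p2 = 3), and bit 2 (d = 4).
p1, p2, d = split(0x00403004)
```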
Address-Translation Scheme
Segmentation
Memory-management scheme that supports the user view of memory. A program is a collection of segments.
A segment is a logical unit such as:
main program
Procedure
function method
object
local variables, global variables
common block
stack
symbol table
arrays
Segmentation Architecture
Logical address consists of a two tuple:
<segment-number, offset>
Segment table – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table’s location in memory
Segment-table length register (STLR) indicates number of segments used by a program;
segment number s is legal if s < STLR
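The checks described above (segment number against STLR, offset against limit) can be sketched as follows; the base/limit values in the table are hypothetical demo entries.

```python
# Each segment-table entry is (base, limit); values are hypothetical.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(segment, offset, table=segment_table):
    if segment >= len(table):           # s must be < STLR
        raise ValueError("trap: invalid segment number")
    base, limit = table[segment]
    if offset >= limit:                 # offset checked against the limit
        raise ValueError("trap: offset beyond segment limit")
    return base + offset                # physical address = base + offset

addr = translate(2, 53)   # segment 2 has base 4300, so this yields 4353
```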
Protection
With each entry in the segment table associate:
validation bit = 0 => illegal segment
read/write/execute privileges
Protection bits are associated with segments; code sharing occurs at the segment level. Since segments vary in length, memory allocation is a dynamic storage-allocation problem. A segmentation example is shown in the following diagram.
Segmentation Hardware
Example of Segmentation
Virtual Memory
Virtual memory is a space where large programs can store themselves in the form of pages during
their execution, with only the required pages or portions of a process loaded into main memory.
This technique is useful because it provides a large virtual memory for user programs even when
the available physical memory is very small.
In real scenarios, most processes never need all their pages at once, for the following reasons:
Error handling code is not needed unless that specific error occurs, some of which
are quite rare.
Arrays are often over-sized for worst-case scenarios, and only a small fraction of the
arrays are actually used in practice.
Certain features of certain programs are rarely used.
Fig. Diagram showing virtual memory that is larger than physical memory.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a
segmentation system. Demand segmentation can also be used to provide virtual memory.
3. More physical memory is available, since programs are stored in virtual memory and so
occupy less space in actual physical memory.
Demand Paging
Demand paging is similar to a paging system with swapping (Fig 5.2). When we want to execute a
process, we swap it into memory; rather than swapping the entire process in, however, we use a lazy approach.
When a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into
memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap
time and the amount of physical memory needed.
Hardware support is required to distinguish between the pages that are in memory and those
that are on disk, using the valid–invalid bit scheme, where valid and invalid pages can be checked
by examining the bit. Marking a page invalid has no effect if the process never attempts to access that page.
While the process executes and accesses pages that are memory resident, execution proceeds normally.
Fig. Transfer of paged memory to contiguous disk space
Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating
system's failure to bring the desired page into memory.
Initially, only those pages are loaded that will be required by the process immediately.
The pages that are not moved into memory are marked as invalid in the page table. For an
invalid entry, the rest of the entry is empty. Pages that are loaded into memory are marked as
valid, along with information about where to find the swapped-out page.
When the process requires a page that is not loaded into memory, a page-fault trap is
triggered and the following steps are followed:
1. The memory address requested by the process is first checked, to verify the
request made by the process.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly from a
free-frame list, where the required page will be moved.
4. A new operation is scheduled to move the necessary page from disk to the specified
memory location. ( This will usually block the process on an I/O wait, allowing some other
process to use the CPU in the meantime. )
5. When the I/O operation is complete, the process's page table is updated with the
new frame number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the beginning.
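A minimal sketch of these steps, using an invented page table and free-frame list (real fault handling is done by the OS kernel, and the disk I/O of step 4 is only indicated by a comment):

```python
# Sketch of the page-fault steps above. Frame numbers and the
# free-frame list are invented; None marks an invalid (not-loaded) page.

page_table = {0: 3, 1: None, 2: 7}
free_frames = [5, 9]

def access(page):
    if page not in page_table:
        raise MemoryError("invalid reference: terminate process")   # steps 1-2
    if page_table[page] is None:        # invalid bit set: page fault
        frame = free_frames.pop(0)      # step 3: locate a free frame
        # step 4: disk I/O would load the page into `frame` here
        page_table[page] = frame        # step 5: update table, mark valid
        # step 6: the faulting instruction is then restarted
    return page_table[page]

frame = access(1)   # page fault: page 1 is loaded into frame 5
```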
There are cases when no pages are loaded into memory initially; pages are loaded only when
demanded by the process through page faults. This is called Pure Demand Paging.
The main overhead of demand paging is that, after a new page is loaded, the instruction that
caused the page fault must be restarted. This is not a big issue for small programs, but for larger
programs with many page faults it can affect performance drastically.
Advantages of Demand Paging:
1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming. There is no limit on degree of multiprogramming.
Page Replacement
As studied in Demand Paging, only certain pages of a process are loaded initially into
memory. This allows us to fit a greater number of processes into memory at the same time.
But what happens when a process requests more pages and no free memory is available to
bring them in? The following steps can be taken to deal with this problem:
1. Put the process in the wait queue, until any other process finishes its execution
thereby freeing frames.
2. Or, remove some other process completely from the memory to free frames.
3. Or, find some pages that are not being used right now and move them to disk to get free
frames. This technique is called page replacement and is the most commonly used. There are
several well-known algorithms for carrying out page replacement efficiently.
Page Replacement Algorithm
Page replacement algorithms are the techniques by which an operating system decides
which memory pages to swap out (write to disk) when a page of memory needs to be allocated.
Paging happens whenever a page fault occurs and a free page cannot be used for the
allocation, either because no pages are available or because the number of free pages is lower
than required.
When a page that was selected for replacement and paged out is referenced again, it has
to be read in from disk, which requires waiting for I/O completion. This determines the quality
of the page replacement algorithm: the less time spent waiting for page-ins, the better the
algorithm.
A page replacement algorithm looks at the limited information about page accesses
provided by the hardware, and tries to select the pages to be replaced so as to minimize the total
number of page misses, while balancing this against the costs of primary storage and the processor
time of the algorithm itself. There are many different page replacement algorithms. We evaluate an
algorithm by running it on a particular string of memory references and computing the number
of page faults.
Reference String
The string of memory references is called a reference string. Reference strings are generated
artificially or by tracing a given system and recording the address of each memory reference.
The latter choice produces a large amount of data, about which we note two things.
For a given page size, we need to consider only the page number, not the entire address.
If we have a reference to a page p, then any immediately following references to page p will
never cause a page fault: page p will be in memory after the first reference, so the immediately
following references will not fault.
For example, consider the following sequence of addresses: 123, 215, 600, 1234, 76, 96.
If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.
First In First Out (FIFO) algorithm
Oldest page in main memory is the one which will be selected for replacement.
Easy to implement, keep a list, replace pages from the tail and add new pages at
the head.
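The reference-string derivation and FIFO replacement described above can be sketched together; the 3-frame memory size is an arbitrary choice for illustration:

```python
# Sketch: derive the reference string (page size 100) and count
# FIFO page faults; the 3-frame memory size is an arbitrary choice.
from collections import deque

addresses = [123, 215, 600, 1234, 76, 96]
ref_string = [a // 100 for a in addresses]    # -> [1, 2, 6, 12, 0, 0]

def fifo_faults(refs, n_frames):
    frames = deque()                  # oldest page sits at the left end
    faults = 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()      # evict the oldest page
            frames.append(p)
    return faults

faults = fifo_faults(ref_string, 3)   # -> 5 (only the final 0 is a hit)
```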
wasted. Therefore, another allocation scheme can be used, which gives available memory to each
process according to its size. This is called proportional allocation. Let the size of the
virtual memory for process pi be si, let the number of frames allocated to process pi be ai,
and define S = ∑ si.
If the total number of available frames is m, then ai can be
calculated as: ai = (si / S) × m.
Of course, ai must be adjusted to be an integer greater than the minimum number of frames
required by the instruction set, with a sum not exceeding m.
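A sketch of the proportional-allocation formula, using floor rounding (a real allocator must also enforce the per-process minimum and keep the sum at most m). The sizes 10 and 127 with m = 62 are a small worked example:

```python
# Sketch of proportional allocation a_i = (s_i / S) * m, rounded down.
# A real allocator must also enforce the instruction-set minimum per
# process and keep the total at most m.

def proportional_allocation(sizes, m):
    S = sum(sizes)                       # S = sum of the s_i
    return [(s * m) // S for s in sizes]

# Two processes of sizes 10 and 127 sharing m = 62 frames:
alloc = proportional_allocation([10, 127], 62)   # -> [4, 57]
```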
In both of these cases, the number of frames allocated to each process may vary with the
multiprogramming level, say l. If l increases, each process will lose some of its allocated
frames to provide the memory needed for the new process; conversely, if a process departs, its
frames can be spread over the remaining processes.
Within these two allocation schemes, a high-priority process is treated the same as a low-
priority process. It may, however, be desirable to give more memory to a high-priority process
to speed up its execution.
3. Replacement Scope:
When it is necessary to find free page frames, which set of pages should become candidates for
replacement?
Local replacement policies replace pages that belong to the process that needs the new frame.
Global policies consider all unlocked frames. Most systems use global replacement
because it is easy to implement, has minimal overhead, and performs reasonably well.
UNIT-4
File System Interface - The Concept of a File, Access methods, Directory Structure, File System Mounting,
File Sharing, Protection, File System Implementation - File System Structure, File System Implementation,
Allocation methods, Free-space Management, Directory Implementation, Efficiency and Performance.
Mass Storage Structure - Overview of Mass Storage Structure, Disk Structure, Disk Attachment, Disk
Scheduling, Disk Management, Swap space Management
File System
File Concept :
Computers can store information on various storage media, such as magnetic disks,
magnetic tapes, and optical disks. The physical storage is converted into a logical storage unit
by the operating system; the logical storage unit is called a FILE. A file is a collection of similar
records. A record is a collection of related fields that can be treated as a unit by some
application program. A field is some basic element of data, and any individual field contains a
single value. A database is a collection of related data.
Student name and marks in sub1, sub2, Fail/Pass are fields. The collection of fields is called a
RECORD:
LAKSH 93 92 P
A collection of these records is called a data file.
FILE ATTRIBUTES :
1. Name : A file is named for the convenience of the user and is referred to by its
name. A name is usually a string of characters.
2. Identifier : This unique tag, usually a number, identifies the file within the file system.
3. Type : Files are of many types; the type depends on the extension of the file.
FILE OPERATIONS
1. Creating a file : Two steps are needed to create a file:
Check whether space is available or not.
If space is available, make an entry for the new file in the
directory. The entry includes the name of the file, the path of the file, etc.
2. Writing a file : To write a file, we have to know two things: the name of the
file and the information or data to be written to it. The system searches the directory for the
given file. If the file is found, the system must keep a write pointer to the location in the file
where the next write is to take place.
3. Reading a file : To read a file, we first search the directory for the file. If
the file is found, the system needs to keep a read pointer to the location in the file where the
next read is to take place. Once the read has taken place, the read pointer is updated.
4. Repositioning within a file : The directory is searched for the appropriate
entry and the current file position pointer is repositioned to a given value. This operation
is also called file seek.
5. Deleting a file : To delete a file, first search the directory for the named
file, then release the file space and erase the directory entry.
6. Truncating a file : To truncate a file, remove the file contents only; the
attributes remain as they are.
FILE TYPES: The name of a file is split into 2 parts: the name and the extension.
The file type depends on the extension of the file.
FILE STRUCTURE
File types also can be used to indicate the internal structure of the file. The operating system
requires that an executable file have a specific structure so that it can determine where in
memory to load the file and what the location of the first instruction is. If the OS supports
multiple file structures, the resulting size of the OS is large: if the OS defines 5 different file
structures, it needs to contain the code to support each of them. Every OS must support
at least one structure – that of an executable file – so that the system is able to load and run
programs.
UNIX defines all files to be simply streams of bytes. Each byte is individually
addressable by its offset from the beginning (or end) of the file; in this case, the logical
record size is 1 byte. The file system automatically packs and unpacks bytes into physical
disk blocks, say 512 bytes per block.
The logical record size, the physical block size, and the packing technique determine how many
logical records are in each physical block. The packing can be done by the user's application
program or by the OS. A file may be considered a sequence of blocks. If each block were 512
bytes, a file of 1,949 bytes would be allocated 4 blocks (2,048 bytes), and the last 99 bytes
would be wasted. This waste is called internal fragmentation. All file systems suffer from
internal fragmentation: the larger the block size, the greater the internal fragmentation.
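The 1,949-byte example above can be checked with a short calculation:

```python
# Sketch: blocks allocated and internal fragmentation for a given
# file size, as in the 1,949-byte example above.

def blocks_and_waste(file_size, block_size=512):
    blocks = -(-file_size // block_size)        # ceiling division
    waste = blocks * block_size - file_size     # bytes lost to fragmentation
    return blocks, waste

blocks, waste = blocks_and_waste(1949)   # -> 4 blocks, 99 bytes wasted
```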
Files store information, and this information must be accessed and read into computer
memory. There are several ways in which the information in a file can be accessed.
Sequential access:
Information in the file is processed in order, i.e., one record after the other.
Magnetic tapes support this type of file access.
Direct access:
Direct access is also called relative access. Here records can be read/written randomly, without
any order. The direct access method is based on a disk model of a file, because disks allow
random access to any file block.
Eg : Consider a disk consisting of 256 blocks, where the read/write head is positioned at the
95th block and the block to be read or written is the 250th block. We can access the 250th
block directly, without any restrictions.
The main disadvantage of the sequential file is that it takes more time to access a record.
Records are organized in sequence based on a key field.
Eg :
A file consists of 60,000 records. The master index divides the total records into 6 blocks,
each block containing a pointer to a secondary index. The secondary index divides the 10,000
records into 10 indexes, each index containing a pointer to its original location. Each record
in the index file consists of 2 fields: a key field and a pointer field.
DIRECTORY STRUCTURE
Sometimes the file system consists of millions of files; in that situation it is very hard to
manage them. To manage these files, they are grouped, and each group is loaded into one
partition.
Operations that can be performed on a directory:
1. Search for a file
2. Create a file
3. Delete a file
4. List a directory
5. Rename a file : when the contents or use of a file changes, the user may change the name.
6. Traverse the file system : We need to access every directory and every file
within the directory structure.
E.g. :- If user 1 creates a file called sample and then later user 2 also creates a file
called sample, then user 2's file will overwrite user 1's file. That is why this scheme is
not used in multi-user systems.
The problem with a single-level directory is that different users may accidentally use the
same name for their files. To avoid this problem, each user needs a private directory, so that
names chosen by one user don't interfere with names chosen by a different user.
The root directory is the first-level directory; user1, user2, and user3 are user-level
directories, and A, B, C are files.
A two-level directory eliminates name conflicts among users, but it is not satisfactory
for users with a large number of files. To avoid this, create sub-directories and load
files of the same type into a sub-directory; here, each user can have as many
directories as needed.
1. Absolute path
2. Relative path
Absolute path : Begins with the root and follows a path down to the specified file,
giving the directory names on the path.
Relative path : Defines a path from the current directory.
4. Acyclic-graph directory
When multiple users are working on a project, the project files can be stored in a common
sub-directory of those users. This type of directory is called an acyclic-graph
directory. The common directory is declared a shared directory, and the graph
contains no cycles. With shared files, changes made by one user are made visible to
other users, and a file may now have multiple absolute paths. When a shared directory or file
is deleted, all pointers to the directory or file must also be removed.
DOS
UNIX adopts a different approach. UNIX allows the whole directory
structure under D2: to be uprooted and mounted (or grafted) onto D1: under a specific
directory – say Y – by using a mount command. After mounting, the directory structure looks as follows:
Similarly, a file system can be unmounted and separated into another independent
file system with a separate root directory.
File Sharing
In a single-user system, the concept of file sharing is not needed: only one
user needs the files stored on the computer. In a multi-user scenario, there are
multiple users accessing multiple computers, and the need to access files stored
on other computers arises frequently. This is where file sharing comes into the picture.
Remote file sharing methods have evolved over time. The first method involves manually
transferring files between machines via FTP. The second method uses a distributed
file system (DFS), in which remote directories are visible from a local machine. The
third method is the WWW: a browser is needed to gain access to remote files, and
separate operations are used to transfer files.
FTP is used for both anonymous and authenticated access. Anonymous access
allows a user to transfer files without having an account on the remote system.
The WWW uses anonymous file exchange. DFS involves a much tighter integration
between the machine that is accessing the remote files and the machine providing the
files.
In remote file systems, the machine containing the files is the server, and the machine
seeking access to the files is the client. The server declares that a resource is available
to clients and specifies exactly which files and exactly which clients. A server can
serve multiple clients, and a client can use multiple servers. The server usually
specifies the available files at the directory level. A client can be identified by an IP
address, but IP addresses can be spoofed, and as a result of spoofing an unauthorized client
could be allowed to access the server. More secure solutions include secure
authentication of the client via encrypted keys. Once a remote file system is mounted,
file-operation requests are sent to the server. The server checks whether the user has the
credentials to access the file, and the request is either allowed or denied. If allowed, the file
is returned to the client, and the client can read, write, and perform other operations. The
client closes the file when access is completed.
With DNS (Domain Name System), we can visit a website by typing in the domain name
rather than the IP address (e.g., 67.43.14.98). DNS translates domain names into IP
addresses, allowing us to access an internet location by its domain name. Before DNS
became widespread, files containing the same information were sent via email or
FTP between all networked hosts.
In the case of the Microsoft Common Internet File System (CIFS), network information
is used in conjunction with user authentication (user name, password) to create a
network login that the server uses to decide whether to allow or deny access to a
requested file system.
Failure Models
Local file systems can fail for a variety of reasons: failure of the disk, corruption of the
directory structure, cable failure, and so on. User or system-administrator error can also
cause files to be lost; human intervention is then required to repair the damage.
Remote file systems have even more failure modes because of the complexity of the
network systems and the required interaction between remote machines.
In the case of networks, the network can be interrupted between two hosts; such
interruptions can result from hardware failure, poor hardware configuration, etc.
Consider a crash of the server: suddenly the remote file system is no longer
reachable. The system can either terminate all operations to the lost server or delay
operations until the server is again reachable.
If both server and client maintain knowledge of their current activities, they can
recover from such a failure.
Consistency Semantics
Consistency semantics represent an important criterion for evaluating any file system
that supports file sharing. These semantics specify how multiple users of a system
are to access a shared file simultaneously. They specify when modifications of data
by one user will be observable by other users. These semantics are typically
implemented as code within the file system.
Protection
Types of Access
Protection mechanisms provide controlled access by limiting the types of file access that
can be made. Access is permitted or denied depending on several factors, one of which
is the type of access requested.
Access Control
The most common approach to the protection problem is to make access dependent on the
identity of the user; different users may need different types of access to a file. An
access-control list (ACL) specifies user names and the types of access allowed for each.
When a user requests access to a file, the OS checks the ACL associated with that file.
If the user is listed for the requested access, the access is allowed; otherwise a protection
violation occurs, and the user process is denied access to the file.
Disadvantages
a) The number of passwords that a user needs to remember may become large.
b) If only one password is used for all files, then once it is discovered,
all files are accessible.
c) Some systems allow a user to associate a password with a sub-directory
rather than with an individual file.
File system structure:
Disks provide the bulk of secondary storage on which a file system is maintained.
They have 2 characteristics that make them a convenient medium for storing multiple
files:
1. A disk can be rewritten in place: it is possible to read a block from
the disk, modify the block, and write it back into the same place.
2. A disk can directly access any block of information it contains.
Fig. Layered file system: application programs at the top, then the logical file system,
file-organization module, and basic file system, then I/O control, then the devices.
I/O Control: consists of device drivers and interrupt handlers to transfer information
between the main memory and the disk system. The device driver writes specific bit
patterns to special locations in the I/O controller’s memory to tell the controller
which device location to act on and what actions to take.
The Basic File System needs only to issue commands to the appropriate device
driver to read and write physical blocks on the disk. Each physical block is
identified by its numeric disk address (e.g., drive 1, cylinder 73, track 2, sector 10).
The File Organization Module knows about files and their logical and physical blocks.
By knowing the type of file allocation used and the location of the file, the
file-organization module can translate logical block addresses into physical
addresses for the basic file system to transfer. Each file's logical blocks are
numbered from 0 to n, so the physical blocks containing the data usually do not match
the logical numbers; a translation is needed to locate each block.
The Logical File System manages all file-system structure except the actual data
(the contents of the files). It maintains file structure via file control blocks. A file control
block (an inode in UNIX file systems) contains information about the file: ownership,
permissions, and the location of the file contents.
Overview:
A Boot Control Block (per volume) can contain the information needed by the system to
boot an OS from that volume. If the disk does not contain an OS, this block can
be empty.
A Volume Control Block (per volume) contains volume (or partition) details, such as the
number of blocks in the partition, the size of the blocks, a free-block count and free-block
pointers, and a free-FCB count and FCB pointers.
A Directory Structure (per file system) is used to organize the files. A per-file
FCB contains many details about the file.
Once a file has been created, it can be used for I/O; first, however, it must be opened. The open()
call passes a file name to the logical file system. The open() system call first searches
the system-wide open-file table to see if the file is already in use by another process. If
it is, a per-process open-file table entry is created pointing to the existing system-wide
open-file table entry. If the file is not already open, the directory structure is searched for the
given file name. Once the file is found, its FCB is copied into the system-wide open-file
table in memory. This table not only stores the FCB but also tracks the number of
processes that have the file open.
Next, an entry is made in the per-process open-file table, with a pointer to the entry
in the system-wide open-file table and some other fields. These fields include a
pointer to the current location in the file (for the next read/write operation) and the
access mode in which the file is open. The open() call returns a pointer to the
appropriate entry in the per-process file-system table, and all file operations are performed
via this pointer. When a process closes the file, the per-process table entry is removed
and the system-wide entry's open count is decremented. When all users that have
opened the file close it, any updated metadata is copied back to the disk-based directory
structure.
The system-wide open-file table contains a copy of the FCB of each open file,
as well as other information. The per-process open-file table contains a pointer to the
appropriate entry in the system-wide open-file table, as well as other information.
Directory Implementation
Linear list of file names with pointers to the data blocks
o Simple to program
o Time-consuming to execute – linear search time
o Could keep ordered alphabetically via a linked list or use a B+ tree
Hash Table – linear list with a hash data structure
o Decreases directory search time
o Collisions – situations where two file names hash to the same location
o Only good if entries are fixed size, or use the chained-overflow method
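A sketch of a chained-overflow hash table for directory entries; the bucket count and the (name, starting block) entries are invented:

```python
# Sketch: a chained-overflow hash table for directory entries.
# The bucket count and the (name, starting block) entries are invented.

N_BUCKETS = 8
table = [[] for _ in range(N_BUCKETS)]

def add_entry(name, block):
    table[hash(name) % N_BUCKETS].append((name, block))   # chain on collision

def lookup(name):
    for entry_name, block in table[hash(name) % N_BUCKETS]:
        if entry_name == name:
            return block
    return None    # not in this directory

add_entry("notes.txt", 42)
add_entry("a.out", 7)
```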
Linked allocation – each file is a linked list of disk blocks
o File ends at nil pointer
o No external fragmentation
o Each block contains a pointer to the next block
o No compaction needed, since there is no external fragmentation
o Free-space management system is called when a new block is needed
o Efficiency can be improved by clustering blocks into groups, but this
increases internal fragmentation
o Reliability can be a problem
o Locating a block can take many I/Os and disk seeks
FAT (File Allocation Table) variation
o Beginning of volume has the table, indexed by block number
o Much like a linked list, but faster on disk and cacheable
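A sketch of following one file's block chain through a FAT; the table contents (a file occupying blocks 217, 618, and 339) are an invented example, with -1 standing in for the FAT's end-of-file marker:

```python
# Sketch: following one file's block chain through a FAT. The table
# contents are invented (a file in blocks 217 -> 618 -> 339), with
# -1 standing in for the FAT's end-of-file marker.

FAT = {217: 618, 618: 339, 339: -1}

def file_blocks(start):
    blocks = []
    b = start
    while b != -1:          # walk the chain until end-of-file
        blocks.append(b)
        b = FAT[b]
    return blocks

chain = file_blocks(217)    # -> [217, 618, 339]
```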
File-Allocation Table
Indexed allocation
o Each file has its own index block(s) of pointers to its data blocks
Free-Space Management
The file system maintains a free-space list to track available blocks/clusters.
Linked list (free list)
o Cannot get contiguous space easily
o No waste of space
o No need to traverse the entire list (if the number of free blocks is recorded)
Grouping
Modify the linked list to store the addresses of the next n−1 free blocks in the first free block,
plus a pointer to the next block that contains free-block pointers (like this one).
Counting
Because space is frequently used and freed contiguously (with contiguous allocation,
extents, or clustering), keep the address of the first free block and a count of the
following free blocks. The free-space list then has entries containing addresses
and counts.
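The counting scheme can be sketched by building (address, count) entries from a sorted list of free block numbers; the block numbers are invented:

```python
# Sketch: building (first-block, count) free-list entries from a
# sorted list of free block numbers; the block numbers are invented.

def count_runs(free_blocks):
    runs = []
    for b in free_blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend current run
        else:
            runs.append((b, 1))                         # start a new run
    return runs

runs = count_runs([2, 3, 4, 5, 8, 9, 17])   # -> [(2, 4), (8, 2), (17, 1)]
```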
Magnetic disks: Magnetic disks provide the bulk of secondary storage for modern
computer systems. Each disk platter has a flat circular shape, like a CD. Common platter
diameters range from 1.8 to 5.25 inches. The two surfaces of a platter are covered with a
magnetic material, and we store information by recording it magnetically on the platters.
A read/write head flies just above each surface of every platter. The heads are attached to
a disk arm that moves all the heads as a unit. The surface of a platter is logically divided
into circular tracks, which are subdivided into sectors. The set of tracks at one
arm position makes up a cylinder. There may be thousands of concentric cylinders in a
disk drive, and each track may contain hundreds of sectors.
When the disk is in use, a drive motor spins it at high speed. Most drives rotate 60 to
200 times per second. Disk speed has 2 parts. The transfer rate is the rate at which data flow
between the drive and the computer. To read or write, the head must be positioned at the
desired track and at the beginning of the desired sector on that track. The time it takes to
position the head at the desired track is called the seek time. Once the track is selected, the
disk controller waits until the desired sector reaches the read/write head; the time this takes
is called the latency time or rotational delay. When the desired sector reaches the
read/write head, the real data transfer starts.
A disk can be removable. Removable magnetic disks consist of one platter, held in a
plastic case to prevent damage while not in the disk drive. Floppy disks are inexpensive
removable magnetic disks that have a soft plastic case containing a flexible platter. The
storage capacity of a floppy disk is 1.44 MB.
A disk drive is attached to a computer by a set of wires called an I/O bus. The data
transfers on a bus are carried out by special processors called controllers. The host
controller is the controller at the computer end of the bus, and a disk controller is built into
each disk drive. To perform an I/O operation, the host controller operates the disk-drive
hardware, via the disk controller, to carry out the command. Disk controllers have a built-in
cache: data transfer at the disk drive happens between the cache and the disk surface, while
data transfer to the host occurs between the cache and the host controller.
Magnetic Tapes: Magnetic tape was used as an early secondary-storage medium. It is
permanent and can hold large amounts of data, but its access time is slow compared to main
memory and magnetic disks. Tapes are mainly used for backup and for storage of
infrequently used information. Typically they store 20 GB to 200 GB.
Disk Structure: Most disk drives are addressed as large one-dimensional arrays of
logical blocks. The one-dimensional array of logical blocks is mapped onto the sectors
of the disk sequentially: sector 0 is the first sector of the first track on the outermost
cylinder. The mapping proceeds in order through that track, then through the rest of the
tracks in that cylinder, and then through the rest of the cylinders from outermost to
innermost. As we move from outer zones to inner zones, the number of sectors per track
decreases; tracks in the outermost zone typically hold 40% more sectors than those in the
innermost zone. The number of sectors per track has been increasing as disk technology
improves, and the outer zone of a disk usually has several hundred sectors per track.
Similarly, the number of cylinders per disk has been increasing; large disks have tens of
thousands of cylinders.
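A sketch of this sequential mapping under the simplifying assumption of a fixed geometry (real zoned disks vary sectors per track, as noted above); the geometry numbers are invented:

```python
# Sketch: logical block number -> (cylinder, track, sector) under a
# fixed geometry; real zoned disks vary sectors per track, as noted
# above. The geometry numbers are invented.

TRACKS_PER_CYLINDER = 2      # one track per surface
SECTORS_PER_TRACK = 100

def lba_to_chs(lba):
    sector = lba % SECTORS_PER_TRACK
    track = (lba // SECTORS_PER_TRACK) % TRACKS_PER_CYLINDER
    cylinder = lba // (SECTORS_PER_TRACK * TRACKS_PER_CYLINDER)
    return cylinder, track, sector

chs = lba_to_chs(347)   # -> (1, 1, 47)
```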
Disk attachment
1. Host-attached storage: Host-attached storage is accessed via local I/O ports. The
desktop PC uses an I/O bus architecture called IDE; this architecture supports a maximum
of 2 drives per I/O bus. High-end workstations and servers use SCSI and FC.
SCSI is a bus architecture with a large number of conductors in a ribbon cable
(50 or 68). The SCSI protocol supports a maximum of 16 devices per bus: the host consists
of a controller card (the SCSI initiator) and up to 15 storage devices (the SCSI targets).
OPERATING SYSTEM NOTES III YEAR/I SEM MRCET
FC (fibre channel) is a high-speed serial architecture. It operates mostly over optical fiber or over a 4-conductor copper cable. It has 2 variants: one is a large switched fabric having a 24-bit address space; the other is an arbitrated loop (FC-AL) that can address 126 devices.
A wide variety of storage devices are suitable for use as host-attached storage (hard disks, CDs, DVDs, tape drives).
2. Network-attached storage: Network-attached storage (NAS) is accessed remotely over a data network. Clients access network-attached storage via remote procedure calls. The RPCs are carried via TCP or UDP over an IP network, usually the same LAN that carries all data traffic to the clients.
[Figure: NAS devices and clients connected over a LAN/WAN]
NAS provides a convenient way for all the computers on a LAN to share a pool of storage with the same ease of naming and access enjoyed with local host-attached storage. However, it tends to be less efficient and have lower performance than direct-attached storage.
3. Storage-area network: The drawback of network-attached storage is that storage I/O operations consume bandwidth on the data network: communication between servers and clients competes for bandwidth with communication among servers and storage devices. A storage-area network (SAN) avoids this problem by connecting servers and storage on a private network, separate from the network that carries client traffic.
Disk Scheduling:
Disk scheduling algorithms are used to allocate service to the I/O requests on the disk. Since seeking to disk requests is time consuming, disk scheduling algorithms try to minimize this latency. If the desired disk drive or controller is available, the request is served immediately; if it is busy, the new request is placed in the queue of pending requests. When one request is completed, the operating system has to choose which pending request to service next, according to the scheduling algorithm in use. The objective of these algorithms is to keep head movement as small as possible: the less the head moves, the shorter the seek time will be. To see how they work, the different disk scheduling algorithms are discussed below, with examples for better understanding.
1. FCFS Scheduling Algorithm
It is the simplest form of disk scheduling algorithm. The I/O requests are served in the order of their arrival: the request that arrives first is served first. Since it follows the order of arrival, it can cause wild swings from the innermost to the outermost tracks of the disk and vice versa. The farther the location of the request being serviced is from the read/write head's current location, the higher the seek time will be.
Example: Given the following track requests in the disk queue, compute the Total Head Movement (THM) of the read/write head.
Consider that the read/write head is positioned at location 50. Prior to this, track 199 was serviced. Show the total head movement for a 200-track disk (0-199).
Solution:
Total head movement = 644 tracks. Assuming a seek rate of 5 milliseconds per track, the seek time is 644 * 5 ms = 3,220 ms.
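The arithmetic can be checked with a short sketch. The request queue itself appears only in a figure that is not reproduced in these notes; the values below are assumed, chosen to be consistent with the stated THM of 644 and the 5 ms seek rate:

```python
def fcfs_total_head_movement(start, queue):
    """Sum the head movement when requests are served strictly in arrival order."""
    head, total = start, 0
    for track in queue:
        total += abs(track - head)  # head moves directly to the next request
        head = track
    return total

# Assumed request queue (the original figure is missing); head starts at track 50.
queue = [95, 180, 34, 119, 11, 123, 62, 64]
thm = fcfs_total_head_movement(50, queue)
print(thm)            # 644 tracks
print(thm * 5, "ms")  # seek time at the assumed 5 ms per track: 3220 ms
```

Note the wild swings (180 down to 34, then back up to 119) that make FCFS expensive.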
2. SSTF (Shortest Seek Time First) Scheduling Algorithm
This algorithm is based on the idea that the R/W head should proceed to the track that is closest to its current position. The process continues until all the track requests are taken care of. Using the same set of requests as in the FCFS example, the solution is as follows:
Solution:
Total head movement = 236 tracks; seek time = 236 * 5 ms = 1,180 ms.
In this algorithm, each request is serviced according to the next shortest distance. Starting at 50, the next stop is 62 instead of 34, since 62 is only 12 tracks away while 34 is 16 tracks away. The process continues up to the last track request. The total is 236 tracks with a seek time of 1,180 ms, which is better service compared with FCFS. With SSTF, however, there is a chance that starvation will take place: if there are lots of requests close to each other, a distant request may never be handled, since its distance will always be greater.
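The greedy choice described above can be sketched directly, again using the assumed queue from the FCFS example (the original figure is missing); it reproduces the 236-track total:

```python
def sstf_schedule(start, queue):
    """Repeatedly serve the pending request nearest the current head position."""
    pending, head, total, order = list(queue), start, 0, []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # shortest seek next
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
        order.append(nearest)
    return total, order

total, order = sstf_schedule(50, [95, 180, 34, 119, 11, 123, 62, 64])
print(total)  # 236 tracks
print(order)  # [62, 64, 34, 11, 95, 119, 123, 180]
```

The service order shows the starvation risk: the head drains the cluster near 50 first, and a far request like 180 is served last; under a steady stream of nearby arrivals it might never be reached.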
3. SCAN Scheduling Algorithm
This algorithm is performed by moving the R/W head back and forth between the innermost and outermost tracks. As it scans the tracks from end to end, it processes all the requests found in the direction it is headed. This ensures that all track requests, whether in the outermost, middle, or innermost location, will be traversed by the access arm. This is also known as the elevator algorithm. Using the same set of requests as in the FCFS example, the solution is as follows:
Solution:
This algorithm works like an elevator. In the example, the head scans down toward the nearest end, and when it reaches the bottom it scans up, servicing the requests that it did not pick up going down. If a request comes in behind the head after it has been passed, it will not be serviced until the head sweeps back in that direction. This process moved a total of 230 tracks with a seek time of 1,150 ms, which is better than the previous algorithms.
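For the situation in the example (track 199 was serviced just before the head reached 50, so the head is moving toward track 0), the SCAN total reduces to a simple formula: the distance down to the edge plus the distance back up to the farthest request. The queue is the same assumed one as in the FCFS example:

```python
def scan_thm_moving_down(start, queue, low=0):
    """SCAN (elevator) with the head initially moving toward track `low`:
    sweep all the way to the disk edge, reverse, then sweep up
    to the farthest pending request."""
    above = [t for t in queue if t > start]
    thm = start - low            # downward sweep to the edge of the disk
    if above:
        thm += max(above) - low  # reverse and sweep up to the last request
    return thm

thm = scan_thm_moving_down(50, [95, 180, 34, 119, 11, 123, 62, 64])
print(thm)      # 230 tracks
print(thm * 5)  # 1150 ms at the assumed 5 ms per track
```

The downward requests (34, 11) are picked up on the way to track 0; the upward ones are all collected on the single return sweep to 180.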
4. LOOK Scheduling Algorithm
This algorithm is similar to the SCAN algorithm except for the end-to-end reach of each sweep: the R/W head only goes as far as the farthest request in need of servicing in each direction. It is also a directional algorithm; as soon as it is done with the last request in one direction, it sweeps in the other direction. Using the same set of requests as in the FCFS example, the solution is as follows:
Solution:
5. C-SCAN Scheduling Algorithm
This algorithm is a modified version of the SCAN algorithm. C-SCAN sweeps the disk from end to end, but as soon as it reaches one of the end tracks it moves to the other end track without servicing any requests along the way. As soon as it reaches the other end track, it starts servicing the requests headed in its direction. This algorithm corrects the unfair treatment of the end tracks relative to the middle tracks. Using the same set of requests as in the FCFS example, the solution is as follows:
Notice that in this example an alpha symbol (α) was used to represent the dashed return sweep. This return sweep is sometimes given a numerical value that is included in the computation of the THM. As an analogy, this can be compared with the carriage-return lever of a typewriter: once it is pulled to the rightmost position, it resets the typing point to the leftmost margin of the paper. A typist is not supposed to type during the movement of the carriage-return lever, because the line spacing is being adjusted. The frequent use of this lever consumes time, just as time is consumed when the R/W head is reset to its starting position.
THM = 50 + 137 + 20 = 207 tracks
The computation of the seek time excludes the alpha value, because the return sweep is not an actual seek of a disk request but a reset of the access arm to its starting position.
Disk management
Boot block:-
When a computer is powered up, it must have an initial program to run. This initial bootstrap program initializes all aspects of the system, from CPU registers to device controllers and the contents of main memory, and then starts the OS. To do its job, the bootstrap program finds the OS kernel on disk, loads that kernel into memory, and jumps to an initial address to begin OS execution. For most computers, the bootstrap is stored in ROM. This location is convenient because ROM needs no initialization and is at a fixed location where the CPU can start executing when powered up; and because ROM is read-only, it cannot be infected by a computer virus. The problem is that changing this bootstrap code requires changing the ROM hardware chips. For this reason, most systems store a tiny bootstrap loader program in the boot ROM whose only job is to bring in a full bootstrap program from disk. The full bootstrap program is stored in the boot blocks at a fixed location on the disk. A disk that has a boot partition is called a boot disk or system disk. The code in the boot ROM instructs the disk controller to read the boot blocks into memory and then starts executing that code.
Bad blocks:-
A block in the disk may be damaged due to a manufacturing defect, a virus, or physical damage; such a defective block is called a bad block. The MS-DOS format command scans the disk to find bad blocks; if it finds one, it tells the allocation methods not to use that block. The chkdsk program searches for bad blocks and locks them away. Data that resided on bad blocks usually is lost.
For example, suppose the OS tries to read logical block 87. The controller calculates the ECC, finds that the sector is bad, and reports this finding to the OS. The next time the system is rebooted, a special command is run to tell the SCSI controller to replace the bad sector with a spare. After that, whenever the system requests logical block 87, the request is translated into the replacement sector's address by the controller.
Sector slipping:-
Suppose logical block 17 becomes defective and the first available spare follows sector 202. Sector slipping then remaps all the sectors from 17 to 202: sector 202 is copied into the spare, then sector 201 into 202, sector 200 into 201, and so on, until sector 18 is copied into sector 19. Slipping the sectors in this way frees up the space of sector 18, so the bad sector 17 can be mapped to it.
Swap-space management:-
A system that implements swapping may use swap space to hold an entire process image, including the code and data segments. Paging systems may simply store pages that have been pushed out of main memory. Note that it may be safer to overestimate than to underestimate the amount of swap space required, because if a system runs out of swap space it may be forced to abort processes. Overestimation wastes disk space that could otherwise be used for files, but does no other harm. Some systems recommend the amount to be set aside for swap space; Linux has suggested setting swap space to double the amount of physical memory. Some operating systems allow the use of multiple swap spaces. These swap spaces are usually put on separate disks so that the load placed on the I/O system by paging and swapping can be spread over the system's I/O devices.
Swap space can reside in one of two places: it can be carved out of the normal file system, or it can be in a separate disk partition. If the swap space is simply a large file within the file system, normal file-system methods can be used to create it, name it, and allocate its space. This is easy to implement but inefficient: external fragmentation can greatly increase swapping times by forcing multiple seeks during reading or writing of a process image. Performance can be improved by caching the block-location information in main memory and by using special tools to allocate physically contiguous blocks for the swap file.
Alternatively, swap space can be created in a separate raw partition. A separate swap-space storage manager is used to allocate and deallocate blocks from the raw partition. This manager uses algorithms optimized for speed rather than storage efficiency. Internal fragmentation may increase, but it is acceptable because the life of data in swap space is shorter than that of files; since swap space is reinitialized at boot time, any fragmentation is short-lived. The raw-partition approach creates a fixed amount of swap space during disk partitioning; adding more swap space requires either repartitioning the disk or adding another swap space elsewhere.
UNIT-5
Deadlocks - System Model, Deadlock Characterization, Methods for Handling Deadlocks,
Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.
Protection - System Protection, Goals of Protection, Principles of Protection, Domain of
Protection, Access Matrix, Implementation of Access Matrix, Access Control, Revocation of
Access Rights, Capability-Based Systems, Language-Based Protection.
DEADLOCKS
System model:
A system consists of a finite number of resources to be distributed among a number of
competing processes. The resources are partitioned into several types, each consisting of
some number of identical instances. Memory space, CPU cycles, files, I/O devices are
examples of resource types. If a system has 2 CPUs, then the resource type CPU has 2
instances.
A process must request a resource before using it and must release the resource after using it. A process may request as many resources as it requires to carry out its task, but the number of resources requested may not exceed the total number of resources available in the system. A process cannot request 3 printers if the system has only two.
A process may utilize a resource in the following sequence:
(I) REQUEST: The process requests the resource. If the request cannot be granted immediately (if the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
(II) USE: The process can operate on the resource. If the resource is a printer, the process can print on the printer.
(III) RELEASE: The process releases the resource.
For each use of a kernel-managed resource by a process, the operating system checks that the process has requested and has been allocated the resource. A system table records whether each resource is free or allocated; for each resource that is allocated, the table also records the process to which it is allocated. If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.
To illustrate a deadlocked state, consider a system with 3 CDRW drives. Each of 3 processes
holds one of these CDRW drives. If each process now requests another drive, the 3 processes
will be in a deadlocked state. Each is waiting for the event “CDRW is released” which can be
caused only by one of the other waiting processes. This example illustrates a deadlock
involving the same resource type.
Deadlocks may also involve different resource types. Consider a system with one printer and
one DVD drive. The process Pi is holding the DVD and process Pj is holding the printer. If Pi
requests the printer and Pj requests the DVD drive, a deadlock occurs.
DEADLOCK CHARACTERIZATION:
In a deadlock, processes never finish executing, and system resources are tied up, preventing
other jobs from starting.
NECESSARY CONDITIONS:
A deadlock situation can arise if the following 4 conditions hold simultaneously in a
system:
1. MUTUAL EXCLUSION: Only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. HOLD AND WAIT: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. NO PREEMPTION: Resources cannot be preempted. A resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. CIRCULAR WAIT: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
RESOURCE ALLOCATION GRAPH
Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into 2 different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi -> Rj. It signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj -> Pi. It signifies that an instance of resource type Rj has been allocated to process Pi.
A directed edge Pi -> Rj is called a request edge; a directed edge Rj -> Pi is called an assignment edge.
We represent each process Pi as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle. A request edge points only to the rectangle Rj, while an assignment edge must also designate one of the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the
resource allocation graph. When this request can be fulfilled, the request edge is
instantaneously transformed to an assignment edge. When the process no longer needs access
to the resource, it releases the resource, as a result, the assignment edge is deleted.
The sets P, R, E:
P= {P1, P2, P3}
R= {R1, R2, R3, R4}
E= {P1 ->R1, P2 ->R3, R1 ->P2, R2 ->P2, R2 ->P1, R3 ->P3}
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for resource R3, which is held by process P3. Process P3 is waiting for either process P1 or P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Deadlock Prevention
Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources
o Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when the process has none
o Low resource utilization; starvation possible
No Preemption –
o If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released
o Preempted resources are added to the list of resources for which the process is waiting
o The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration
Deadlock Avoidance
Requires that the system has some additional a priori information available
Simplest and most useful model requires that each process declare the maximum number
of resources of each type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-
allocation state to ensure that there can never be a circular-wait condition
Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes .
Safe State
When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state
The system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i
That is:
o If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished
o When Pj is finished, Pi can obtain the needed resources, execute, return its allocated resources, and terminate
o When Pi terminates, Pi+1 can obtain its needed resources, and so on
If a system is in a safe state => no deadlocks
If a system is in an unsafe state => possibility of deadlock
Avoidance => ensure that a system will never enter an unsafe state
Avoidance algorithms:
o Single instance of each resource type – use a resource-allocation-graph algorithm
o Multiple instances of a resource type – use the Banker's algorithm
Banker's Algorithm
Each process must a priori claim its maximum use
When a process requests a resource it may have to wait
When a process gets all its resources it must return them in a finite amount of time
Let n = number of processes, and m = number of resource types.
Available: Vector of length m. If Available[j] = k, there are k instances of resource type Rj available
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task
Need[i,j] = Max[i,j] – Allocation[i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, ..., n-1
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
o If safe => the resources are allocated to Pi
o If unsafe => Pi must wait, and the old resource-allocation state is restored
Example: P1 requests (1,0,2)
Check that Requesti ≤ Available: (1,0,2) ≤ (3,3,2) => true
        Allocation   Need     Available
        A B C        A B C    A B C
P0      0 1 0        7 4 3    2 3 0
P1      3 0 2        0 2 0
P2      3 0 2        6 0 0
P3      2 1 1        0 1 1
P4      0 0 2        4 3 1
Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement
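The safety check on this snapshot can be sketched as follows, using the matrices from the table above. Note that step 2 of the safety algorithm allows any runnable process to be picked, so this sketch may find a valid safe sequence different from the one shown in the text; any such sequence demonstrates safety:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return (safe?, one safe sequence)."""
    n, m = len(allocation), len(available)
    work, finish, seq = list(available), [False] * n, []
    while len(seq) < n:
        for i in range(n):
            # step 2: find Pi with Finish[i] = false and Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                # step 3: Pi can run to
                    work[j] += allocation[i][j]   # completion and release
                finish[i] = True
                seq.append(f"P{i}")
                break
        else:
            return False, seq  # step 4: some process can never proceed
    return True, seq

# Snapshot after P1's request (1,0,2) is granted, from the table above.
allocation = [[0,1,0], [3,0,2], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [0,2,0], [6,0,0], [0,1,1], [4,3,1]]
safe, seq = is_safe([2,3,0], allocation, need)
print(safe, seq)  # True, with a safe sequence starting at P1
```

With lowest-index-first selection this yields <P1, P3, P0, P2, P4>, a different but equally valid safe sequence to the one quoted above.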
Deadlock Detection
Allow the system to enter a deadlock state
Detection algorithm
Recovery scheme
Single Instance of Each Resource Type
Maintain a wait-for graph
o Nodes are processes
o Pi -> Pj if Pi is waiting for Pj
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock
An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is the number of vertices in the graph
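One way to realize the periodic check is a depth-first search for a back edge. This sketch represents the wait-for graph as adjacency lists; the process names are illustrative:

```python
def has_cycle(wait_for):
    """DFS-based cycle detection on a wait-for graph; a cycle means deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge: q is on our path
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```

On an adjacency-list representation this runs in time proportional to vertices plus edges, which in the worst (dense) case matches the n² bound quoted above.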
Resource-Allocation Graph and Wait-for Graph
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, ..., n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked
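The detection algorithm above can be sketched directly. The small request and allocation matrices at the bottom are made up for illustration: two processes each holding one unit of a resource the other is requesting:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances.
    Finish[i] starts true for any process that holds no resources."""
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # a process whose current request can be met is assumed to finish
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]  # reclaim its resources
                finish[i] = True
                progressed = True
    return [f"P{i}" for i in range(n) if not finish[i]]  # deadlocked processes

# Each process holds one unit the other is requesting: circular wait.
print(detect_deadlock([0, 0], [[0, 1], [1, 0]], [[1, 0], [0, 1]]))  # ['P0', 'P1']
print(detect_deadlock([1, 1], [[0, 1], [1, 0]], [[1, 0], [0, 1]]))  # []
```

With nothing available, neither request can be met and both processes are reported deadlocked; with one spare unit of each resource, both can finish and the list is empty.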
Recovery from Deadlock:
Process Termination
Abort all deadlocked processes
Abort one process at a time until the deadlock cycle is eliminated
In which order should we choose to abort?
o Priority of the process
o How long the process has computed, and how much longer until completion
o Resources the process has used
o Resources the process needs to complete
o How many processes will need to be terminated
o Is the process interactive or batch?
Resource Preemption
Selecting a victim – minimize cost
Rollback – return to some safe state, restart the process from that state
Starvation – the same process may always be picked as the victim; include the number of rollbacks in the cost factor
PROTECTION
Goals of Protection:
In one protection model, a computer consists of a collection of objects, hardware or software
Each object has a unique name and can be accessed through a well-defined set of operations
Protection problem – ensure that each object is accessed correctly and only by those processes that are allowed to do so
Principles of Protection
Guiding principle – the principle of least privilege
o Programs, users, and systems should be given just enough privileges to perform their tasks
o Limits the damage if an entity has a bug or gets abused
o Privileges can be static (fixed for the life of the system or of a process) or dynamic (changed as needed)
Domain Structure
Access-right = <object-name, rights-set>, where rights-set is a subset of all valid operations that can be performed on the object
Domain = a set of access-rights
Access Matrix
Our model of protection can be viewed abstractly as a matrix, called an access matrix. The rows of the access matrix represent domains and the columns represent objects. Each entry in the matrix consists of a set of access rights: the entry access(i,j) defines the set of operations that a process executing in domain Di can invoke on object Oj.
In the example there are 4 domains and 4 objects: 3 files (F1, F2, F3) and one printer. A process executing in domain D1 can read files F1 and F3. A process executing in domain D4 has the same privileges as one executing in domain D1, but in addition it can also write onto files F1 and F3. Note that the printer can be accessed only by a process executing in domain D2.
The access matrix can be used to implement policy decisions concerning protection. The policy decisions involve which rights should be included in the (i,j)th entry. We must also decide the domain in which each process executes.
When a user creates a new object Oj, the column Oj is added to the access matrix with
the appropriate initialization entries.
Processes should be able to switch from one domain to another. Switching from domain Di to domain Dj is allowed if and only if the access right switch belongs to access(i, j). In the example, a process executing in domain D2 can switch to domain D3 or to domain D4, a process in domain D4 can switch to D1, and one in domain D1 can switch to D2.
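The matrix can be sketched as a sparse dictionary keyed by (domain, object); the entries below encode the rights described in this section (read and write on the files, print, and the switch rights between domains):

```python
# Access matrix from the example: domains D1-D4, files F1-F3, a printer,
# plus the switch rights D1->D2, D2->D3, D2->D4, D4->D1.
access = {
    ("D1", "F1"): {"read"},
    ("D1", "F3"): {"read"},
    ("D1", "D2"): {"switch"},
    ("D2", "printer"): {"print"},
    ("D2", "D3"): {"switch"},
    ("D2", "D4"): {"switch"},
    ("D4", "F1"): {"read", "write"},   # D4 has D1's rights plus write
    ("D4", "F3"): {"read", "write"},
    ("D4", "D1"): {"switch"},
}

def allowed(domain, obj, right):
    """access(i, j) lookup: may a process in `domain` apply `right` to `obj`?"""
    return right in access.get((domain, obj), set())

print(allowed("D1", "F1", "read"))    # True
print(allowed("D1", "F1", "write"))   # False: only D4 may write F1
print(allowed("D2", "D4", "switch"))  # True: D2 may switch into D4
```

An empty entry simply has no key, so any right not explicitly granted is denied, matching the matrix model where access(i,j) may be the empty set.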
Access-matrix design separates mechanism from policy
o Mechanism
The operating system provides the access matrix plus rules
It ensures that the matrix is manipulated only by authorized agents and that the rules are strictly enforced
o Policy
The user dictates policy: who can access what object and in what mode
But this does not solve the general confinement problem
The ability to copy an access right from one domain of the access matrix to another is denoted by an asterisk (*) appended to the access right. The copy right allows the access right to be copied only within the column for which the right is defined. A process executing in domain D2 can copy the read operation into any entry associated with file F2.
This scheme has 2 variants:
1. A right is copied from access(i,j) to access(k,j) and is then removed from access(i,j); this action is a transfer of a right, rather than a copy.
2. Propagation of the copy right may be limited: when the right R* is copied from access(i,j) to access(k,j), only the right R (not R*) is created. A process executing in domain Dk cannot further copy the right R.
Access Control