
OPERATING SYSTEMS

SYSTEM SOFTWARE
- Consists of programs designed to perform tasks associated with directly controlling and utilizing computer hardware.
It includes:
a) Operating systems
b) Computer Language Oriented software (Translator)
c) Utilities (Data Recovery programs, data conversion programs)
OPERATING SYSTEMS
Is a core set of programs that control and supervise the hardware of a computer and provide services to other system
software, application software, programmers and users of a computer.
It acts as an intermediary between the users and the computer hardware.

Abstract view of the components of a computer system (top to bottom):

User
System & Application Programs
(Compiler, Assembler, Word Processing, Database systems, Spread Sheets)
Operating system
Computer Hardware

N/B – The operating system and computer architecture have had a great deal of influence on each other.

Classes of operating systems


They can be designed as:
a) Proprietary o/s – designed for use by a specific computer architecture, e.g. MS DOS and PC DOS, which run on IBM
and compatible computers using the Intel series of microprocessors.
b) Generic o/s – designed for use by a wide variety of computer architectures, e.g. UNIX, which runs on both Intel
and Motorola microprocessors.

Parts of an operating System


i) Control Programs – manage hardware and resources
 Main program – the Supervisor / Monitor / Executive / Kernel, which controls all other o/s programs
as well as other applications.
 Job Control Language – the portion of an o/s that allows a user to specify what a job requires in terms of
computer resources and o/s services. It is the user’s language of communication with an o/s, e.g. commands
like DIR, ERASE.
ii) Services Programs – external o/s programs that provide a service to the user or programmer of a computer
(perform routine but essential functions such as preparing a disk for use and copying files from one location to
another).

Operating System Capabilities (“Types of o/s”)


A particular o/s may incorporate one or more of these capabilities:
a) Single-user processing – allows only one user at a time to access a computer, e.g. DOS.
b) Multi-user processing – allows 2 or more users to access a computer at the same time. The actual number of users
depends on the hardware and o/s design, e.g. UNIX.
c) Single tasking – allows one program to execute at a time; that program must finish executing before the next
program can begin, e.g. DOS.
d) Context switching – allows several programs to reside in memory but only one to be active at a time. The active
program is in the foreground; the others are in the background.
e) Multitasking / Multiprogramming – allows a single CPU to execute what appears to be more than one program at a
time. The CPU switches its attention between 2 or more programs in main memory as it receives requests for
processing from one program or another. This happens so quickly that the programs appear to execute
simultaneously (concurrently).
f) Multiprocessing / Parallel processing – allows the simultaneous or parallel execution of programs by a computer
that has 2 or more CPUs.
g) Multithreading – supports several simultaneous functions within the same application.
h) Inter-processing / Dynamic linking – allows any change made in one application to be automatically reflected in
any related, linked applications, e.g. a link between a word processing and a financial application.
i) Time sharing – allows multiple users to access a single computer; found on large computer o/s where many users
need access at the same time.
j) Virtual storage – an o/s with the capability of virtual storage (virtual memory) allows you to use a secondary
storage device as an extension of main memory.
k) Real-time processing – allows a computer to control or monitor the task performance of other machines and
people by responding to input data within a specified amount of time. To control processes, an immediate response is
usually necessary.
l) Virtual Machine (VM) processing – creates the illusion that there is more than one physical machine when in fact
there is only one. Such programming allows several users of a computer to operate as if each had the only terminal
attached to the computer. Thus users feel as if each is on a dedicated computer with sole use of the CPU and
I/O devices. When a VM operating system is loaded, each user chooses the o/s that is compatible with his or her
intended application programs. Thus the VM o/s gives users flexibility and allows them to choose the
o/s that best suits their needs.

Operating system functions


a) Process Management
b) Main Memory Management
c) File / Secondary storage Management
d) Input /Output (Device) System management
e) Protection System (Security)

ASSIGNMENT ONE
1) Explain the following terms as used by operating systems
a) Kernel
b) Interrupt
2) Explain briefly what system calls are and outline the major categories into which they may be grouped.
OPERATING SYSTEM DESIGN AND IMPLEMENTATION
The design and implementation of an o/s is not a “solvable” problem, but some approaches have proven successful. The internal
structure of different operating systems can vary widely. Design starts by defining goals and specifications, and this is
affected by the choice of hardware and the type of system.
In guiding the design of operating systems, both user and system goals are taken into consideration.
 User goals – the operating system should be convenient to use, easy to learn, reliable, safe, and fast.
 System goals – the operating system should be easy to design, implement, and maintain, as well as flexible, reliable,
error-free, and efficient.
An important design principle is to separate:
 Policy: What will be done? (decide what will be done)
 Mechanism: How to do it? (determine how to do something)
The separation of policy from mechanism is a very important principle, since it allows maximum flexibility if policy
decisions are to be changed later.

Types of structures
a) Simple Structure / Monolithic systems
It has also been subtitled “The Big Mess”, i.e. there is no structure: the o/s is viewed as a collection of procedures, each of
which can call any other. There is no information hiding (every procedure is visible to every other). The basic
structure suggested for such an o/s includes:
 Main program – invokes the service procedures
 A set of service procedures – which carry out system calls
 A set of utility procedures – that help the service procedures
Example: MS-DOS
MS-DOS was written to provide the most functionality in the least space. It is not divided into modules. Although MS-DOS
has some structure, its interfaces and levels of functionality are not well separated (no information hiding).

b) Layered Approach
The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer
(layer 0), is the hardware; the highest (layer N) is the user interface. With modularity, layers are selected such that each
uses functions (operations) and services of only lower-level layers.

Example of layers, developed by Dijkstra (1968) in the Netherlands:

Layer  Function
(5)    Operator
(4)    User programs
(3)    Input/output management
(2)    Operator–process communication
(1)    Memory and device management
(0)    Processor allocation and multiprogramming
Another example is UNIX, which had a simple layered structure consisting of:
 System programs
 The kernel
i) Consists of everything below the system-call interface and above the physical hardware
ii) Provides the file system, CPU scheduling, memory management, and other operating-system
functions; a large number of functions for one level

c) Modules
Most modern operating systems implement kernel modules, using an object-oriented approach.
Each core component is separate, talks to the others over known interfaces, and is loadable as needed within
the kernel. Overall it is similar to the layered approach, but more flexible.

(Figure: the Solaris modular approach.)
e) Virtual Machines
A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating system
kernel as though they were all hardware. A virtual machine provides an interface identical to the underlying bare
hardware. The operating system creates the illusion of multiple processes, each executing on its own processor with its
own (virtual) memory.
The resources of the physical computer are shared to create the virtual machines. CPU scheduling can create the
appearance that users have their own processor. Spooling and a file system can provide virtual card readers and virtual
line printers. A normal user time-sharing terminal serves as the virtual machine operator’s console

(Figure: (a) non-virtual machine versus (b) virtual machine system models.)


The virtual-machine concept provides complete protection of system resources since each virtual machine is isolated
from all other virtual machines. This isolation, however, permits no direct sharing of resources.
A virtual-machine system is a perfect vehicle for operating-systems research and development. System development is
done on the virtual machine, instead of on a physical machine and so does not disrupt normal system operation.
The virtual machine concept is difficult to implement due to the effort required to provide an exact duplicate of the
underlying machine. An example is the LINUX implementation; the technique is also used in the Java Virtual
Machine.

f) Exokernels
Gives each user a clone of the actual computer, but with a subset of its resources. Thus one virtual machine might get disk
blocks 0 to 1023, the next might get blocks 1024 to 2047, and so on. At the bottom layer a kernel-mode program (the
exokernel) runs, which allocates resources to the virtual machines and monitors their utilization.

g) Client-Server Model / Microkernel System Structure


Involves moving code to higher levels even further, removing as much as possible from the o/s and leaving a minimal
kernel, i.e. implementing most o/s functions in user processes (a client process sends a request to a server
process).

(Figure: client processes and the process server, terminal server, file server and memory server all run in user mode;
only the kernel runs in kernel mode.)

The kernel handles communication between clients and servers. The server processes run in user mode rather than
kernel mode, so they do not have direct access to the hardware (a faulty server therefore cannot crash the whole
machine). The model has also been adapted to distributed systems.

(Figure: in the distributed version, a client on Machine 1 sends a message through its kernel and across the network to
the kernel on Machine 2, which delivers it to the file server.)

Benefits:
 Easier to extend a microkernel
 Easier to port the operating system to new architectures
 More reliable (less code is running in kernel mode)
 More secure

Detriments:
 Performance overhead of user space to kernel space communication
PROCESS MANAGEMENT
A process can be thought of as a program in execution. It needs certain resources, including CPU time, memory, files and
I/O devices, to accomplish its task.

The o/s is responsible for the following activities in connection with process management:
 Creation and deletion of both user and system processes
 Suspension and resumption of processes
 Provision of mechanisms for process synchronization
 Provision of mechanisms for process communication
 Provision of mechanisms for deadlock handling.

Process Model
A process is more than a program: it includes the current activity (program counter, registers) and temporary data such as
the stack and global variables. A program is a passive entity, e.g. the contents of a file stored on disk, whereas a process
is an active entity.
Process state
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process.
Its state may be:
i) New – process is being created
ii) Running – Instructions are being executed
iii) Waiting – process waiting for some event to occur.
iv) Ready – waiting to be assigned to a processor
v) Terminated – finished execution.

Diagram of process state
(Figure: new → ready on admission; ready → running when dispatched; running → waiting on an I/O or event wait;
waiting → ready when the event occurs; running → ready on an interrupt; running → terminated on exit.)

INTER-PROCESS COMMUNICATION
The o/s must provide an inter-process communication (IPC) facility, which provides a mechanism to allow processes to
communicate and to synchronize their actions so as to prevent race conditions (a condition where several processes access
and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which
the accesses take place).

To guard against this, mechanisms are necessary to ensure that only one process at a time can be manipulating the data.
Some of these mechanisms / techniques include:
Critical Section Problem.
The critical section is a segment of code in which a process may be changing common variables, updating a table, writing a
file, etc. Each process has its critical section, while the remaining portion of its code is referred to as the remainder section.

The important feature of the system is that when one process is executing in its critical section, no other process is
to be allowed to execute in its critical section. Each process must request permission to enter its critical section.
A solution to critical section problem must satisfy the following:
i) Mutual exclusion – if one process is executing in its critical section, then no other process can be executing in
its critical section.
ii) Progress – if no process is executing in its critical section and there exist some processes that wish to enter
their critical sections, then only those processes that are not executing in their remainder sections can participate
in deciding which will enter next.
iii) Bounded waiting – there must exist a bound on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request is
granted (prevents indefinite postponement and limits busy waiting).
Busy waiting occurs when one process is executing in its critical section and a second process loops, testing to
confirm whether the first is through. It is an acceptable technique only when the anticipated waits are brief;
otherwise it wastes CPU cycles.

Techniques used to handle Critical section problem


1. Semaphores
The fundamental principle is this: 2 or more processes can cooperate by means of simple signals, such that a process can
be forced to stop at a specified place until it has received a specific signal.
Any complex coordination requirement can be satisfied by an appropriate structure of signals.
For signalling, special variables called semaphores are used. To receive a signal via a semaphore s, a process executes the
primitive wait P(s); if the corresponding signal V(s) has not yet been transmitted, the process is suspended until the
transmission takes place.

To achieve the desired effect, we can view the semaphore as a variable with an integer value upon which 3
operations are defined:
a) A semaphore may be initialized to a non-negative value.
b) The wait operation decrements the semaphore value. If the value becomes negative, the process executing the
wait is blocked.
c) The signal operation increments the semaphore value. If the value is not positive, a process blocked by
a wait operation is unblocked.
A binary semaphore accepts only the 2 values 0 and 1. For both general and binary semaphores, a queue is used to hold
processes waiting on the semaphore.
A strong semaphore specifies the order in which processes are removed from the queue, e.g. a FIFO policy.
A weak semaphore doesn’t specify the order in which processes are removed from the queue.
Example of implementation:
Wait (S);
Critical Section
Signal (S);
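As an illustration, here is a minimal C sketch of the wait/signal pattern using POSIX semaphores; the variable and function names are ours, not part of any particular o/s:

/* Two threads increment a shared counter; the binary semaphore makes the
 * increment a critical section. Compile with: cc -pthread sem.c */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;              /* binary semaphore guarding the critical section */
int shared_counter = 0;   /* data that must not be updated concurrently */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);      /* P(s): blocks while the semaphore is 0 */
        shared_counter++;      /* critical section */
        sem_post(&mutex);      /* V(s): increments, may unblock a waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);    /* initialise to 1 -> binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);  /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}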
2. Message passing
Provides a means for co-operating processes to communicate when they are not sharing memory.
Processes generally send and receive messages by using system calls such as send(receiverprocess, message) and
receive(senderprocess, message).

A blocking send must wait for the receiver to receive the message. A non-blocking send enables the sender to continue
with other processing even if the receiver has not yet received the message; this requires buffering mechanisms to hold
messages until the receiver receives them.
Message passing can be flawless on a single computer, but in a distributed system messages can be garbled or even lost,
so acknowledgement protocols are used.

One complication in distributed systems with send / receive message passing is naming processes unambiguously so
that send and receive calls reference the proper process.
Process creation and destruction can be coordinated through some centralized naming mechanism, but this can
introduce considerable transmission overhead as individual machines request permission to use new names.
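A minimal C sketch of send/receive between two processes, with a POSIX pipe standing in for the buffering mechanism described above (the message and names are illustrative):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                          /* fd[0] = receive end, fd[1] = send end */
    if (fork() == 0) {                 /* child: the receiver process */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* blocking receive */
        buf[n] = '\0';
        printf("receiver got: %s\n", buf);
        return 0;
    }
    close(fd[0]);                      /* parent: the sender process */
    const char *msg = "hello";
    write(fd[1], msg, strlen(msg));    /* send; the kernel buffers the message */
    close(fd[1]);
    wait(NULL);
    return 0;
}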
3. Monitors
It is a high-level synchronization construct characterized by a set of programmer-defined operations. It is made up of
declarations of variables and the bodies of procedures or functions that implement operations on a given type. Its variables
cannot be used directly by the various processes; the monitor ensures that only one process at a time can be active within it.
Schematic view of a monitor

Processes desiring to enter the monitor when it is already in use must wait. This waiting is automatically managed by
the monitor. Data inside the monitor is accessible only to the process inside it. There is no way for processes outside
the monitor to access monitor data. This is called information hiding.
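A sketch of how a monitor's guarantees can be built in C from a pthread mutex (one active process at a time) and a condition variable (automatically managed waiting); procedure and variable names are illustrative:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  free_cond    = PTHREAD_COND_INITIALIZER;
static bool resource_busy = false;    /* monitor data: hidden from callers */

void acquire(void) {                  /* a monitor procedure */
    pthread_mutex_lock(&monitor_lock);
    while (resource_busy)             /* wait until the resource is free */
        pthread_cond_wait(&free_cond, &monitor_lock);
    resource_busy = true;
    pthread_mutex_unlock(&monitor_lock);
}

void release(void) {                  /* a monitor procedure */
    pthread_mutex_lock(&monitor_lock);
    resource_busy = false;
    pthread_cond_signal(&free_cond);  /* wake one waiting thread */
    pthread_mutex_unlock(&monitor_lock);
}

int main(void) {                      /* trivial demo of the interface */
    acquire();
    /* ... monitor-protected work ... */
    release();
    return 0;
}

Callers see only acquire() and release(); the state variable itself is inaccessible outside, which is exactly the information hiding described above.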
Event Counters
Introduced to enable process synchronization without the use of mutual exclusion. An event counter keeps track of the
number of occurrences of events of a particular class of related events. It is an integer counter that does not decrease.
Three operations allow processes to reference event counts: Advance(E), Read(E) and Await(E, v).
Advance(E) – signals the occurrence of an event of the class represented by E by incrementing the event count E
by 1.
Read(E) – obtains the value of E; because Advance(E) operations may be occurring during Read(E), it is only guaranteed
that the returned value will be at least as great as E was before the read started.
Await(E, v) – blocks the process until the value of E becomes at least v; this avoids the need for busy waiting.
Other techniques used include Peterson’s algorithm, Dekker’s algorithm and the Bakery algorithm.

PROCESS SCHEDULING
It is the activity of switching the CPU among processes; it makes the computer more productive and is the basis of a
multiprogrammed o/s. Normally several processes are kept in memory, and when one process has to wait, the o/s takes
the CPU away from that process and gives it to another process.
Modules of the o/s involved include
 CPU Scheduler
 CPU dispatcher
CPU Scheduler – selects a process from the ready queue that is to be executed. Scheduling decisions may take place
under the following four conditions:
a) When process switches from running state to waiting state (e.g. I/O request or invocation of wait for the
termination of one of the child processes).
b) When a process switches from running state to the ready state (e.g. when an interrupt occurs)
c) When a process switches from the waiting state to the ready state (e.g. completion of I/O)
d) When a process terminates.

Scheduling scheme can either be:


(i) Non-preemptive – a process keeps the CPU until it releases it (conditions 1 and 4)
(ii) Preemptive – the CPU can be taken from a process, even while it is in the midst of updating or processing data,
and given to another process (conditions 2 and 3)
An operating system has many schedulers, but the three main ones are:
 LONG-TERM SCHEDULER
This scheduler is responsible for selecting processes from a secondary storage device such as a disk and loading them into
main memory for execution. It is also known as the job scheduler. The primary objective of the long-term scheduler is
to provide a balanced mix of jobs in the ready queue: it achieves good performance by selecting processes with a
combination of I/O-bound and CPU-bound types.
The long-term scheduler selects jobs from the batch queue and loads them into memory. In memory, these
processes belong to the ready queue.
 SHORT-TERM SCHEDULER
It allocates processes belonging to the ready queue to the CPU for immediate processing. Its main objective is to maximize
CPU utilization. This scheduler has to work more frequently and faster, since it must select a new process quite often:
the CPU executes a process for only a few milliseconds before it goes for an I/O operation.
 MEDIUM-TERM SCHEDULER
Many times we need to remove a process from main memory, for example when it is waiting for some I/O
operation. Such processes may be removed (suspended) from main memory to the hard disk; later they
can be reloaded into main memory and continued from where they left off. A suspended process moved out in this
way is said to be rolled out, and this swapping (in and out) is done by the medium-term scheduler.
CPU dispatcher – a module that gives control of CPU to the process selected by the scheduler. Functions include:
(i) Switching context
(ii) Switching to user mode
(iii) Jumping to proper location in the user program to restart that program.

Scheduling levels
Three important levels of scheduling are considered.
a) High level scheduling (Job Scheduling) – determines which jobs should be allowed to compete actively for the
resources of the system. Also called Admission Scheduling. Once admitted jobs become processes or groups of
processes.
b) Intermediate level scheduling – determines which processes shall be allowed to compete for the CPU.
c) Low level scheduling – determines which ready process will be assigned the CPU when it next becomes
available, and actually assigns the CPU to this process.

Different CPU scheduling algorithms are available; in determining which is best for a particular situation, many
criteria have been suggested for comparing them. They include:
(i) CPU utilization – keep the CPU as busy as possible.
(ii) Throughput – a measure of work, i.e. the number of processes completed per time unit.
(iii) Turnaround time – the interval from the time of submission to the time of completion. It is the sum of the periods
spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O.
(iv) Waiting time – the sum of the periods spent waiting in the ready queue.
(v) Response time – a measure of the time from the submission of a request by a user until the first response is produced.
It is the amount of time it takes to start responding, not the time it takes to output the response.
(vi) I/O-boundedness of a process – when a process gets the CPU, does it use the CPU only briefly before
generating an I/O request?
(vii) CPU-boundedness of a process – when a process gets the CPU, does it tend to use the CPU until its time
quantum expires?

Scheduling Algorithms
1. First Come First Served (FCFS) / First In First Out (FIFO) Scheduling
The process that requests the CPU first is allocated the CPU first. Code for FCFS is simple to write and understand, but the
average waiting time under FCFS is often quite long. FCFS is a non-preemptive discipline (once a process has the CPU, it
runs to completion).
Example
Processes arrive at time 0, with the length of the CPU burst given in milliseconds.
Process Burst Time
(Time of CPU use)
P1 24
P2 3
P3 3

If processes arrive in order P1, P2, P3; Using a Gantt chart.

P1 P2 P3
0 24 27 30

Average waiting Time = (0 + 24 +27) / 3 = 17 ms

If processes arrive in order P2, P3, P1


P2 P3 P1
0 3 6 30
Average waiting Time = (0 + 3 + 6) / 3 = 3 ms

Thus the average waiting time under the FCFS policy is generally not minimal and may vary substantially if processes’
CPU burst times vary greatly. The FCFS algorithm is particularly troublesome for time-sharing systems.
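The arithmetic above can be checked with a short C sketch (assuming, as in the example, that all processes arrive at time 0):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* P1, P2, P3 in arrival order */
    int n = 3, clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;                  /* waiting time = start time  */
        clock += burst[i];                    /* process runs to completion */
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n); /* 17.0 */
    return 0;
}

Re-running with bursts {3, 3, 24} (arrival order P2, P3, P1) gives the 3 ms average from the second chart.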
2. Shortest Job First (SJF) Scheduling
When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If 2 processes have the
same next CPU burst length, FCFS scheduling is used to break the tie. It reduces the average waiting time compared with
FCFS.
Example
Process Burst Time (ms)
P1 6
P2 8
P3 7
P4 3

Gantt chart for SJF:


P4 P1 P3 P2
0 3 9 16 24
Average waiting Time = (0 + 3 + 16 + 9) / 4 = 7 ms

The real difficulty of SJF is knowing the length of the next CPU burst, and this information is not usually available. NB: SJF
is non-preemptive in nature.
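A sketch of the same computation for non-preemptive SJF: sorting the bursts into ascending order reduces the problem to FCFS on the sorted order.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};               /* P1..P4, all arriving at 0 */
    int n = 4, clock = 0, total_wait = 0;
    qsort(burst, n, sizeof burst[0], cmp);    /* shortest job first */
    for (int i = 0; i < n; i++) {
        total_wait += clock;
        clock += burst[i];
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n); /* 7.0 */
    return 0;
}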

3. Shortest Remaining Time (SRT) Scheduling


It is the preemptive counterpart of SJF and is useful in time sharing. The currently executing process is preempted when a
process with a shorter remaining CPU burst becomes ready in the queue.
Example
Process Arrival Time Burst Time
(ms)
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Gantt chart
P1 P2 P4 P1 P3
0 1 5 10 17 26

Process Waiting Time


P1 0 + (10 – 1) = 9
P2 (1 – 1) = 0
P3 17 – 2 = 15
P4 5 – 3 =2
Average waiting time = (9 + 0 + 15 + 2) / 4 = 6.5ms

P1 is started at time 0 since it is the only process in the queue. P2 arrives at time 1. The remaining time for P1 (7 ms) is
larger than the time for process P2 (4 ms), so P1 is preempted and P2 is scheduled. The average waiting time is less than
under non-preemptive SJF.
SRT has higher overhead than SJF since it must keep track of elapsed service time of the running job and must handle
occasional preemptions.

4. Round Robin (RR) Scheduling


Similar to FCFS scheduling, but processes are given a limited amount of CPU time called a time slice or quantum,
generally from 10 to 100 ms. If the process has a CPU burst of less than 1 time quantum, the process itself
releases the CPU voluntarily. The ready queue is treated as a circular queue, and the CPU scheduler goes around
the queue, allocating the CPU to each process for up to 1 time quantum. Using the processes in the example above
and a time quantum of 4 ms:

Gantt chart:

P1 P2 P3 P4 P1 P3 P4 P3
0 4 8 12 16 20 24 25 26

Process Waiting Time
P1 0 + (16 – 4) = 12
P2 4 – 1 = 3
P3 (8 – 2) + (20 – 12) + (25 – 24) = 15
P4 (12 – 3) + (24 – 16) = 17

Average waiting time = (12 + 3 + 15 + 17) / 4 = 11.75 ms

RR is effective in time-sharing environments. The preemption overheads are kept low by efficient context-switching
mechanisms and by providing adequate memory for the processes to reside in main storage.
Selfish Round Robin is a variant introduced by Kleinrock: as processes enter the system, they first reside in a
holding queue until their priorities reach the levels of the processes in an active queue.
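Returning to the RR example, the dispatch loop can be sketched in C as follows (arrivals 0, 1, 2, 3; bursts 8, 4, 9, 5; quantum 4 ms; the names are ours):

#include <stdio.h>

#define N 4

int main(void) {
    int arrive[N] = {0, 1, 2, 3};
    int burst[N]  = {8, 4, 9, 5};
    int remain[N], finish[N];
    int queue[64], head = 0, tail = 0;       /* FIFO ready queue */
    int clock = 0, quantum = 4, next = 1, done = 0;

    for (int i = 0; i < N; i++) remain[i] = burst[i];
    queue[tail++] = 0;                        /* P1 arrives at time 0 */
    while (done < N) {
        int p = queue[head++];                /* dequeue next ready process */
        int slice = remain[p] < quantum ? remain[p] : quantum;
        clock += slice;
        remain[p] -= slice;
        while (next < N && arrive[next] <= clock)
            queue[tail++] = next++;           /* admit new arrivals first */
        if (remain[p] > 0)
            queue[tail++] = p;                /* back of the queue */
        else {
            finish[p] = clock;                /* process completed */
            done++;
        }
    }
    double wait = 0;
    for (int p = 0; p < N; p++)               /* waiting = turnaround - burst */
        wait += finish[p] - arrive[p] - burst[p];
    printf("average waiting time = %.2f ms\n", wait / N);   /* 11.75 */
    return 0;
}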

5. Priority Scheduling
SJF is a special case of the general priority scheduling algorithm. A priority is associated with each process, and the CPU is
allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. Priorities are
generally some fixed range of numbers, e.g. 0 – 7 or 0 – 4095. Some systems use low numbers to represent low
priority; others use low numbers for high priority. Priorities can be defined either internally or externally.
a) For internally defined priorities, measurable quantities are used such as time limits, memory
requirements, the number of open files, and the ratio of average I/O burst to average CPU burst.
b) Externally defined priorities are set by criteria that are external to the o/s, e.g. the importance of the process, the type
and amount of funds being paid for computer use, the department sponsoring the work, and other political factors.
Priority scheduling can be either preemptive or non-preemptive.
The major problem is indefinite blocking or starvation: low-priority jobs may wait indefinitely for the CPU.
Solution: Aging – a technique of gradually increasing the priority of processes that wait in the system for a long time,
sketched below.
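A minimal C sketch of aging (the boost rate, tie-breaking rule and field names are illustrative choices; smaller numbers mean higher priority here):

#include <stdio.h>

#define N 3

typedef struct { int pid, priority, base; } pcb_t;  /* smaller = higher priority */

int main(void) {
    pcb_t ready[N] = {{1, 0, 0}, {2, 4, 4}, {3, 2, 2}};
    for (int tick = 0; tick < 8; tick++) {
        int best = 0;
        for (int i = 1; i < N; i++)              /* pick highest priority;     */
            if (ready[i].priority <= ready[best].priority)
                best = i;                        /* long waiters win ties      */
        printf("tick %d: dispatch P%d\n", tick, ready[best].pid);
        ready[best].priority = ready[best].base; /* reset on dispatch          */
        for (int i = 0; i < N; i++)              /* age everyone still waiting */
            if (i != best && ready[i].priority > 0)
                ready[i].priority--;
    }
    return 0;
}

Even P2, whose base priority is the worst, is dispatched within a few ticks because its effective priority keeps improving while it waits.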

6. Multi-level Queue Scheduling


Used in situations where processes are easily classified into different groups, e.g. foreground (interactive) processes
and background (batch) processes. The ready queue is divided into these groups.
Processes are permanently assigned to one queue, generally based on some property of the process such as memory
size, process priority or process type. Each queue has its own scheduling algorithm, e.g. foreground -> RR, background
-> FCFS. In addition there must be scheduling between the queues, which is commonly implemented as fixed-priority
scheduling, e.g. the foreground queue may have priority over the background queue.

7. Multi-level Feedback queue scheduling


Allows a process to move between queues. The idea is to separate processes with different CPU burst characteristics. If
a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and
interactive processes in the highest-priority queues. Similarly, a process that waits too long in a lower-priority queue
may be moved to a higher-priority queue. This form of aging prevents starvation.

Exercises
1. Consider the following set of processes, with the length of CPU burst time given in milliseconds.
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, priority (1 is the lowest priority) and
RR (quantum = 1 ms) scheduling. Calculate the average waiting time for each algorithm.

2. Assume that we have the workload shown. All 5 processes arrive at time 0 in the order given, with the length of burst
time given in ms.
Process Burst Time
P1 10
P2 29
P3 3
P4 7
P5 12
Considering the FCFS, SJF and RR (quantum = 10 ms) scheduling algorithms for this set of processes, which algorithm
would give the minimum average waiting time?

DEADLOCKS
A deadlock occurs when several processes compete for a finite number of resources and each ends up waiting for an event
that will not occur. The event here may be resource acquisition and release.

Example of Deadlocks
1. A traffic deadlock – vehicles in a busy section of the city.
2. A simple resource deadlock
(Figure: process P1 holds resource R1 and requests R2, while process P2 holds R2 and requests R1.)

This system is deadlocked because each process holds a resource being requested by the other process and neither
process is willing to release the resource it holds. (This leads to a deadly embrace.)

3. Deadlock in spooling systems


A spooling system is used to improve system throughput by disassociating a program from the slow operating speeds
of devices such as printers, e.g. lines of text are sent to a disk before printing starts. If disk space is small and jobs are
many, a deadlock may occur.
Deadlocks Characterization
Four necessary conditions for deadlock to exist:
1. Mutual exclusion condition – processes claim exclusive control of the resources they require.
2. Wait-for condition (hold and wait) – processes hold resources already allocated to them while waiting for
additional resources.
3. No-preemption condition – resources cannot be removed from the processes holding them until the resources are
used to completion.
4. Circular-wait condition – a circular chain of processes exists in which each process holds one or more resources
that are requested by the next process in the chain.

MAJOR AREAS OF DEADLOCK RESEARCH IN COMPUTER SCIENCE


a) Deadlock Detection
b) Deadlock Recovery
c) Deadlock Prevention
d) Deadlock Avoidance

DEADLOCK DETECTION
This is the process of actually determining that a deadlock exists and of identifying the processes involved in the
deadlock (i.e. determining whether a circular wait exists). To facilitate the detection of deadlocks, resource allocation
graphs are used, which indicate resource allocations and requests. These graphs change as processes request resources,
acquire them and eventually release them to the o/s.
Reduction of resource allocation graphs is a technique used for detecting deadlocks: the processes that may
complete their execution and the processes that will remain deadlocked are determined. If a process’s resource requests
can be granted, then we say that the graph can be reduced by that process (the arrows connecting the process and its
resources are removed).
If a graph can be reduced by all its processes, there is no deadlock; otherwise the irreducible processes constitute the
set of deadlocked processes in the graph.
NB: The order in which the graph reductions are performed does not matter; the final result will always be the same.
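In the common special case where each process waits for at most one resource and each resource is held by at most one process, the graph collapses to a waits-for relation between processes, and reduction becomes a simple chain walk, as in this illustrative C sketch (the array contents are an example of ours):

#include <stdio.h>

#define N 4

/* waits_for[p] = the process holding the resource p requested, or -1 */
int waits_for[N] = {1, 2, 0, -1};   /* P0 -> P1 -> P2 -> P0; P3 runs free */

int deadlocked(int p) {
    int seen[N] = {0};
    while (p != -1) {
        if (seen[p]) return 1;      /* circular wait reached: irreducible */
        seen[p] = 1;
        p = waits_for[p];
    }
    return 0;                       /* chain ends: the graph reduces by p */
}

int main(void) {
    for (int p = 0; p < N; p++)
        printf("P%d: %s\n", p, deadlocked(p) ? "deadlocked" : "can finish");
    return 0;
}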
Notations of resource allocation and request graphs:
 An arrow from a process to a resource (P1 → R1) means process P1 is requesting a resource of type R1.
 An arrow from a resource to a process (R2 → P2) means a resource of type R2 has been allocated to process P2.
 P3 → R3 → P4 means process P3 is requesting resource R3, which has been allocated to process P4.
 A cycle is a circular wait (deadlock), e.g. process P5 has been allocated R5, which is requested by P6, while P6 has
been allocated R4, which is being requested by P5.

Graph Reduction
Given a resource allocation graph, can you determine the possibility of a deadlock?
(Figure: a graph with processes P7, P8 and P9 sharing resources R6 and R7 is reduced in three steps – first by P9, then
by P7, then by P8 – until no request or allocation arrows remain.)
No deadlock, since a circular wait does not exist.

NB: The deadlock detection algorithm should be invoked at fairly infrequent intervals to reduce the computational
overhead. If it is invoked at arbitrary points, there may be many cycles in the resource graph and it would be difficult to
tell which of the many deadlocked processes “caused” the deadlock.

DEADLOCK RECOVERY
Once a deadlock has been detected, several alternatives exist:
a) The Ostrich Algorithm (bury your head in the sand and assume things will just work out).
b) Informing the operator, who will deal with it manually.
c) Letting the system recover automatically.
There are 2 options for breaking a deadlock:

1. Process termination – killing some processes


Methods available include:
a) Abort all deadlocked processes – at great expense.
b) Abort one process at a time until the deadlock cycle is eliminated.
A scheduling algorithm may be necessary to identify the process to abort.
Factors that may determine which process is chosen:
 Priority of the process
 Time spent and time remaining
 Number and type of resources the process has used (is it simple to preempt?)
 Number of resources it needs to complete
 Whether the process is interactive or batch
2. Resource Preemption
Preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.
Issues to be addressed include:
 Selecting a victim, i.e. a resource to preempt.
 Rollback – a process from which a resource is preempted needs to be rolled back to some safe state and restarted
from that state once the deadlock is over.
 Starvation – a guarantee is needed that resources will not always be preempted from the same process.

DEADLOCK PREVENTION
Aims at getting rid of the conditions that cause deadlock. Methods used include:
a) Denying mutual exclusion – mutual exclusion should only be allowed for non-shareable resources. For shareable
resources, e.g. read-only files, processes attempting to open them should be granted simultaneous access; thus for
shareable resources we do not allow mutual exclusion.
b) Denying hold and wait (the wait-for condition) – to deny this we must guarantee that whenever a process
requests a resource, it does not hold any other resources. Protocols used include:
i) Requiring each process to request and be allocated all its resources before it begins execution.
While waiting for resources to become available it holds no resource – this may lead to serious
waste of resources.
ii) A process only requests a resource when it has none, i.e. it has to release all its resources before
requesting more.
Disadvantages of these protocols
 Resource utilization is low, since many of the resources may be allocated but unused for a
long period.
 Starvation is possible – a process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to some
other process.
c) Denying “No preemption”
If a process that is holding some resources requests another resource that cannot be immediately allocated to it (i.e.
it must wait), then all resources it currently holds are preempted. The process will be restarted only when it can
regain its old resources as well as the new ones it is requesting.

d) Denying Circular wait


All resources are uniquely numbered and processes must request resources in ascending order. This has been
implemented in many o/s but has some difficulties:
- The addition of new resources requires the rewriting of existing programs so as to keep the numbering unique.
- Jobs requiring resources in a different order from the one implemented by the o/s are forced to acquire
and hold resources possibly long before they are actually used (leading to waste).
- It affects a user’s ability to freely and easily write application code.
DEADLOCK AVOIDANCE
The famous deadlock avoidance scheme is Dijkstra’s Banker’s Algorithm. Its goal is to impose less stringent (constraining)
conditions than deadlock prevention, in an attempt to get better resource utilization. Avoidance does not precondition
the system to remove all possibility of deadlock; instead, whenever a deadlock looms, it is carefully
sidestepped. NB: it is used for multiple instances of the same type of resource, e.g. tape drives or printers.

Dijkstra’s Banker’s Algorithm says allocation of a resource is only done when it results in a safe state rather than an
unsafe state. A safe state is one in which the total resource situation is such that all users would eventually be able to
finish. An unsafe state is one that might eventually lead to a deadlock.
Example of a safe state
Assume a system with 12 equivalent tape drives and 3 users sharing the drives:
State I
Users Current Maximum
Loan Need
User(1) 1 4
User(2) 4 6
User(3) 5 8
Available 2
The state is ‘safe’ because it is still possible for all 3 users to finish, i.e. the remaining 2 drives may be given to
user(2), who may run to completion, after which six drives would be released for user(1) and user(3). Thus the key to a
state being safe is that there is at least one way for all users to finish.
Example of an unsafe state
State II
Users Current Maximum
Loan Need
User(1) 8 10
User(2) 2 5
User(3) 1 3
Available 1
A 3-way deadlock could occur if each user does in fact request at least one more drive before releasing any
drives to the pool.
NB: An unsafe state does not imply the existence of a deadlock. What an unsafe state does imply is
simply that some unfortunate sequence of events might lead to a deadlock.

Example of safe state to unsafe state transition


State III (Safe)
Users Current Maximum
Loan Need
User(1) 1 4
User(2) 4 6
User(3) 5 8
Available 2
If User (3) requests an additional resource

State IV (Unsafe)

Users Current Maximum


Loan Need
User(1) 1 4
User(2) 4 6
User(3) 6 8
Available 1

Therefore State IV is not necessarily deadlocked, but the state has gone from a safe one to an unsafe one.
Thus in Dijkstra’s Banker’s Algorithm, the mutual exclusion, wait-for and no-preemption conditions are allowed:
processes do claim exclusive use of the resources they require.
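The safety test at the heart of the Banker's Algorithm can be sketched in C as follows, checked against State I and State IV above (array and function names are ours):

#include <stdio.h>
#include <stdbool.h>

#define N 3   /* users */

bool is_safe(int loan[N], int max[N], int avail) {
    bool finished[N] = {false};
    for (int done = 0; done < N; done++) {
        int u;
        for (u = 0; u < N; u++)            /* find a user who can finish  */
            if (!finished[u] && max[u] - loan[u] <= avail)
                break;
        if (u == N) return false;          /* nobody can finish: unsafe   */
        avail += loan[u];                  /* user finishes, returns loan */
        finished[u] = true;
    }
    return true;                           /* everyone can finish: safe   */
}

int main(void) {
    int loan[N] = {1, 4, 5}, max[N] = {4, 6, 8};
    printf("State I  (2 available): %s\n", is_safe(loan, max, 2) ? "safe" : "unsafe");
    loan[2] = 6;                           /* user(3) gets one more drive */
    printf("State IV (1 available): %s\n", is_safe(loan, max, 1) ? "safe" : "unsafe");
    return 0;
}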

Weaknesses in the Banker’s Algorithm


There are some serious weaknesses that might cause a designer to choose another approach to the deadlock problem:
- It requires a fixed number of resources to allocate, and this cannot be guaranteed due to maintenance etc.
- It requires that the population of users remain fixed.
- It requires that the banker grant all requests within a finite time.
- It requires that clients (i.e. jobs) repay all loans (i.e. return all resources) within a finite time.
- It requires that users state their maximum need in advance; sometimes this is difficult.

EXERCISE
In the context of Dijkstra’s Bankers Algorithm discuss whether each of the following states is safe or unsafe. If a state
is safe show how it is possible for all processes to complete. If a state is unsafe show how it is possible for a deadlock
to occur.
State A

Users Current Maximum


Loan Need
User(1) 1 4
User(2) 4 6
User(3) 5 8
User(4) 0 2
Available 1

State B

Users Current Maximum


Loan Need
User(1) 4 8
User(2) 3 8
User(3) 5 8
Available 2
MEMORY MANAGEMENT
It is necessary in order to improve both the utilization of the CPU and the speed of the computer’s response to its users.
Memory allocation techniques
1. Paging
Main memory (physical memory) is broken into fixed-size blocks called frames. Processes (logical memory) are also
broken into blocks of the same size, called pages. When a process is executed, its pages are loaded into any available
memory frames from the backing store. The allocation of frames to processes is tracked by a free-frame list and a page
table.
Example
(Figure: before allocation, the free-frame list holds frames 14, 13, 18, 20 and 15. After allocating a new 4-page process,
pages 0–3 are placed in frames 14, 13, 18 and 20, and the new process’s page table maps page 0 → frame 14,
page 1 → frame 13, page 2 → frame 18, page 3 → frame 20; frame 15 remains on the free-frame list.)

Advantages
 Minimizes fragmentation
Disadvantages
 Page-mapping hardware adds to the cost of the computer
 The memory used to store the various tables may be large
 Some memory may remain unused: the last page of a process may not fill its frame (internal fragmentation)
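As an illustration of the page-mapping step, here is a C sketch of logical-to-physical address translation using the page table from the example above, with an assumed page size of 1024 bytes:

#include <stdio.h>

#define PAGE_SIZE 1024

int page_table[] = {14, 13, 18, 20};   /* page number -> frame number */

unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* which page?      */
    unsigned offset = logical % PAGE_SIZE;   /* where inside it? */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    unsigned logical = 2 * PAGE_SIZE + 100;  /* byte 100 of page 2 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    /* prints: logical 2148 -> physical 18532 (frame 18, offset 100) */
    return 0;
}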

2. Segmentation
Processes are divided into variable-sized segments determined by the sizes of the process’s program, subroutines, data
structures, etc. A segment is a logical grouping of information, such as a routine, an array or a data area, as
determined by the programmer. Each segment has a name and a length.
Example: a user’s view of a program
(Figure: a program in its logical address space (in the backing store) consists of segment 0 (subroutine), segment 1 (Sqrt),
segment 2 (main program), segment 3 (stack) and segment 4 (symbol table). A segment table of limit/base pairs maps
each segment into main memory, e.g. segment 0, of limit 1000, at base 1400, with the other segments placed at bases
such as 3200, 4300, 4700 and 6300.)

Advantages
 Eliminates internal fragmentation
 Allows dynamic growth of segments
 Facilitates the loading of only one copy of shared routines
Disadvantages
 Considerable compaction overheads are incurred in order to support dynamic growth and eliminate fragmentation
 The maximum segment size is limited to the size of main memory
 Can cause external fragmentation, when all blocks of free memory are too small to accommodate a segment

3. Partitioned Allocation
Ensures that main memory accommodates both the o/s and the various user processes, i.e. memory is divided into 2
partitions, one for the resident o/s and one for user processes. It protects o/s code and data from changes (accidental or
malicious) by user processes.
The o/s takes into account the memory requirements of each process and the amount of available memory space in
determining which processes are allocated memory.
Example
Job queue:
Process  Memory   Time
P1       600 K    10
P2       1000 K   5
P3       300 K    20
P4       700 K    8
P5       500 K    15

(Figure: a 2560 K memory with the o/s resident in the first 400 K. P1, P2 and P3 are allocated first; when P2
terminates, P4 is allocated in the freed space, and when P1 terminates, P5 is allocated, with the memory map redrawn
after each event.)
Advantages
 Eliminates fragmentation and makes it possible to allocate more partitions, which allows a higher degree of
multiprogramming
Disadvantages
 Relocation hardware increases the cost of the computer
 Compaction time may be substantial
 Job partition size is limited to the size of main memory

4. Overlays
Used when process size is larger than amount of memory allocated (keeps in memory only those instructions and data
that are needed at any given time)
Example: A 2-pass assembler
During Pass 1, it constructs a symbol table and in pass 2 it generates machine language code. Thus the assembler can
be partitioned into Pass 1 code, pass 2 code, symbol table and common support routines used by both Pass 1 and Pass
2.
Assuming sizes:
Pass 1 70K
Pass 2 80 K
Symbol Table 20 K
Common Routines 30 K
Loading everything requires 200 K of memory. If only 150 K is available, Pass 1 and Pass 2 do not need to be in memory at
the same time. Thus 2 overlays can be defined, i.e.
 Overlay A (Symbol table, common routines and pass1)
 Overlay B (Symbol table, common routines and pass 2)
An overlay driver (10 K) is needed to manage the switching between overlays.

(Figure: the symbol table (20 K), common routines (30 K) and overlay driver (10 K) stay resident; Pass 1 (70 K) and
Pass 2 (80 K) are overlaid in the remaining space.)

Advantages
 Overlays do not require any special support from the o/s, i.e. they can be implemented by users with simple file
structures
Disadvantages
 Programmers must design and program the overlay structure properly

5. Swapping
User programs do not remain in main memory until completion. In some systems, one job occupies main storage
at a time. That job runs until it can no longer continue, then relinquishes both storage and CPU to the next job.

Thus the entire storage is dedicated to one job for a brief period; the job is then removed (i.e. swapped out or rolled out)
and the next job is brought in (swapped in or rolled in). A job will normally be swapped in and out many times before it is
completed. Swapping guarantees reasonable response times.

VIRTUAL MEMORY
It is a technique whereby part of secondary storage is addressed as main memory. It allows the execution of processes that
may not be completely in memory, i.e. programs can be larger than memory. Virtual memory techniques:
1. Paging
Processes reside on secondary storage (usually a disk), and when they are to be executed they are swapped into
memory. Swapping brings in not the entire process but only the pages that are needed (it uses a lazy swapper).
With this scheme, we need some form of hardware support to distinguish between the pages that are in memory and
the pages that are on disk.
2. Segmentation
It eliminates paging hardware overheads. The o/s allocates memory in segments rather than pages, and keeps track of these
segments through segment descriptors, which include information about the segment’s size, protections and location.
A program doesn’t need to have all its segments in memory to execute; instead, the segment descriptor contains a
valid bit for each segment to indicate whether the segment is currently in memory. A trap to the o/s (a segment fault)
occurs when a segment not in memory is referenced. The o/s will swap out a segment to secondary storage and bring in
the entire requested segment.

To determine which segment to replace in case of a segment fault, the o/s uses another bit in the segment descriptor
called the accessed bit.
FILE MANAGEMENT
File system – concerned with managing secondary storage space, particularly disk storage. It consists of two distinct
parts:
 A collection of files, each storing related data
 A directory structure, which organizes and provides information about all the files in the system
A file is a named collection of related information that is recorded on secondary storage.

File Naming
A file is named for the convenience of its human users, and a name is usually a string of characters, e.g. “good.c”. Once a
file is named, it becomes independent of the process, the user and even the system that created it, i.e. another user may
edit the same file and refer to it by a different name.

File Types
An o/s should recognize and support different file types so that it can operate on files in reasonable ways.
File types can be implemented by including the type as part of the file name. The name is split into 2 parts – a name and an
extension – usually separated by a period character. The extension indicates the type of the file and the type of operations
that can be performed on it.

File Type Usual extensions


Executable .exe, .com
Source Code .c, .p, .pas, .bas, .vbp, .prg, .java
Text .txt, .rtf, .doc
Graphics .bmp, .jpg, .wmf
Spreadsheet .xls
Archive .zip, .arc, .rar (compressed files)

File Attributes
Apart from the name, other attributes of files include:
 Size – the amount of information stored in the file
 Location – a pointer to a device and to the location of the file on that device
 Protection – access control to the information in the file (who can read, write, execute and so on)
 Time, date and user identification – this information is kept for creation, last modification and last use. These
data can be useful for protection, security and usage monitoring
 Volatility – the frequency with which additions and deletions are made to a file
 Activity – the percentage of the file’s records accessed during a given period of time

File Operations
File can be manipulated by operations such as:
 Open – prepare a file to be referenced
 Close – prevent further reference to a file until it is reopened
 Create – Build a new file
 Destroy – Remove a file
 Copy – create another version of file with a new name.
 Rename – change name of a file.
 List – Print or display contents.
 Move – change location of file
Individual data items within the file may be manipulated by operations like:
 Read – Input a data item to a process from a file
 Write – Output a data item from a process to a file
 Update – modify an existing data item in a file
 Insert – Add a new data item to a file
 Delete – Remove a data item from a file
 Truncating – delete some data items but file retains all other attributes

File Structure
Refers to the internal organization of the file. File types may indicate structure. Certain files must conform to a required
structure that is understood by the o/s. Some o/s have file systems that support multiple structures, while others
impose (and support) a minimal number of file structures, e.g. MS DOS and UNIX. UNIX considers each file to be a
sequence of 8-bit bytes. The Macintosh o/s supports a minimal number of file structures but expects files to contain 2
parts – a resource fork and a data fork. The resource fork contains information of importance to the user, e.g. the labels of
any buttons displayed by a program.

File Organization
Refers to the manner in which records of a file are arranged on secondary storage.
The most popular schemes are:-
a) Sequential
Records are placed in physical order. The next record is the one that physically follows the previous record. It is used for
records on magnetic tape.
b) Direct
Records are directly (randomly) accessed by their physical addresses on a direct access storage device (DASD).
The application places the records on the DASD in any order appropriate for the particular application.
c) Indexed Sequential
Records are arranged in logical sequence according to a key contained in each record. The system maintains an index
containing the physical addresses of certain principal records. Indexed sequential records may be accessed sequentially
in key order, or they may be accessed directly via a search through the system-created index. It is usually used on disk.

d) Partitioned
It is a file of sequential sub files. Each sequential sub file is called a member. The starting address of each member is
stored in the file directory. Partitioned files are often used to store program libraries.

FILE ALLOCATION METHODS


Deals with how to allocate disk space to different files so that the space is utilized effectively and files can be accessed
quickly. Methods used include:
1. Contiguous Allocation
Files are assigned to contiguous areas of secondary storage. A user specifies in advance the size of the area needed to hold
a file to be created. If the desired amount of contiguous space is not available, the file cannot be created.
Advantages
i) Speeds up access, since successive logical records are normally physically adjacent to one another.
ii) File directories are relatively straightforward to implement: for each file, merely retain the address of the start of
the file and its length.
Disadvantages
i) It is difficult to find space for a new file, especially if the disk is fragmented into a number of separate holes (blocks).
To mitigate this, a user can run a repackaging routine (disk defragmenter).
ii) Another difficulty is determining how much space is needed for a file: if too little space is allocated, the
file cannot be extended or cannot grow.
Some o/s use a modified contiguous allocation scheme, where a contiguous chunk of space is allocated initially,
and then, when that amount is not large enough, another chunk of contiguous space (an extent) is added.
2. Non Contiguous Allocation
It is used since files tend to grow or shrink over time, and because users rarely know in advance how large a file will
be. Techniques used include:
a) Linked Allocation
It solves all the problems of contiguous allocation. Each file is a linked list of disk blocks; the disk blocks may be scattered
anywhere on the disk. The directory contains pointers to the first and last disk blocks of the file, and each block contains a
pointer to the next block.
Advantage
 Eliminates external fragmentation
Disadvantages
 It is effective only for sequential-access files, i.e. where you have to access all blocks in order.
 Space may be wasted on the pointers, since each takes about 4 bytes. The usual solution is to collect blocks into
clusters and allocate clusters rather than single blocks.
 Not very reliable, since the loss or damage of a pointer would lead to failure.
b) Indexed Allocation
It tries to support efficient direct access, which is not possible with linked allocation. The pointers to the blocks are brought
together into one location, the index block. Each file has its own index block, which is an array of disk block addresses.
Advantage
 Supports direct access without suffering from external fragmentation
Disadvantages
 Suffers from wasted space due to the pointer overhead of the index block
 The biggest problem is determining how large the index block should be
The mechanisms to deal with this include:
i) Linked scheme – an index block is normally one disk block; to allow large files, we may link together
several index blocks.
ii) Multilevel index – use a separate first-level index block to point to index blocks, which in turn point to the
file blocks themselves.
iii) Combined scheme – keep the first, say, 15 pointers of the index block in the file’s index structure.
The first 12 of these pointers point to direct blocks, which contain data of the file; the next 3
pointers point to indirect blocks, which contain the addresses of blocks containing data
(sketched after this list).
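A C sketch of the combined scheme's index structure (sizes and field names are illustrative, following the familiar UNIX-style layout):

#include <stdint.h>

#define NDIRECT 12

struct inode_index {
    uint32_t direct[NDIRECT];   /* addresses of blocks holding file data      */
    uint32_t single_indirect;   /* block holding an array of data-block addrs */
    uint32_t double_indirect;   /* block of addresses of indirect blocks      */
    uint32_t triple_indirect;   /* one more level of indirection              */
};

With 4-byte block addresses and 4 KB blocks, the single indirect block alone would map a further 1024 data blocks.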

FILE MANAGEMENT TECHNIQUES


File Implementation
It is concerned with issues such as file storage and access on the most common secondary storage medium, the hard
disk. It explores ways to allocate disk space, to recover freed space, to track the locations of data and to interface the other
parts of the o/s to secondary storage.
Directory implementation
The selection of directory allocation and directory management algorithms has a large effect on the efficiency,
performance and reliability of the file system.
Algorithms used:
a) Linear list – the simplest method of implementing a directory. It uses a linear list of file names with pointers to the
data blocks, and requires a linear search to find a particular entry.
b) Hash table – a linear list stores the directory entries, but a hash data structure is also used: the hash table takes a
value computed from the file name and returns a pointer to the file name in the linear list.

Free Space Management


To keep track of free disk space, the system maintains a free-space list, which records all disk blocks that are free (i.e.
those not allocated to some file or directory). To create a file, we search the free-space list for the required amount of
space and allocate that space to the new file; this space is then removed from the free-space list. Techniques used to
manage free disk space include the following (see the sketch after this list):
a) Bit vector – the free-space list is implemented as a bit map or bit vector. Each block is represented by 1 bit: if the
block is free, the bit is 1; if the block is allocated, the bit is 0.
b) Linked List – links together all the free disk blocks, keeping a pointer to the first free block in a special
location on the disk and caching it in memory. This first block contains a pointer to the next free disk block
and so on.
c) Grouping – a modification of the free-list approach that stores the addresses of n free blocks in the first free
block. The first n-1 of these blocks are actually free; the last contains the addresses of another n free blocks,
and so on. The importance of this implementation is that the addresses of a large number of free blocks can be
found quickly, unlike in the standard linked-list approach.
d) Counting – takes advantage of the fact that several contiguous blocks are often allocated or freed
simultaneously, particularly when space is allocated with the contiguous allocation algorithm or through
clustering. Thus rather than keeping a list of n free disk addresses, we keep the address of the first free
block and the count n of free contiguous blocks that follow it. Although each entry requires more space than
would a simple disk address, the overall list is shorter as long as the count is generally greater than 1.
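A minimal sketch of the bit-vector technique (Python; the bitmap contents are made up for illustration):

    # Free-space bitmap: bit i is 1 if block i is free, 0 if allocated.
    bitmap = [1, 0, 0, 1, 1, 0, 1, 0]   # 8 blocks, made-up state

    def first_free():
        for i, bit in enumerate(bitmap):
            if bit == 1:
                return i
        return None                     # disk full

    def allocate():
        i = first_free()
        if i is not None:
            bitmap[i] = 0               # remove the block from the free list
        return i

    print(allocate())   # 0 - the first free block
    print(allocate())   # 3 - the next free block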
Efficiency and Performance
This is essential when it comes to file and directory implementation, since disks tend to be a major bottleneck in
system performance, i.e. they are the slowest main computer component. To improve performance, disk controllers
include enough local memory to create an on-board cache that may be sufficiently large to store an entire track at a
time.

File Sharing
In a multi-user system there is almost always a requirement for allowing files to be shared among a number of users.
Two issues arise:-
i) Access rights – the system should provide a number of options so that the way in which a particular file is
accessed can be controlled. Access rights assigned to a particular file include: read, execute, append, update, delete.
ii) Simultaneous access – a discipline is required when more than one user is granted access to append or update a
file. One approach is to allow a user to lock the entire file while it is being updated (a minimal sketch follows).
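A minimal sketch of whole-file locking (Python on a POSIX system; fcntl provides advisory locks, and the filename is
illustrative):

    import fcntl

    # Lock the entire file for exclusive (update) access; other cooperating
    # processes attempting the same lock will block until it is released.
    with open("shared.dat", "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)       # acquire exclusive whole-file lock
        f.write("update from one user\n")   # safe to append/update here
        fcntl.flock(f, fcntl.LOCK_UN)       # release so other users may proceed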
DEVICE MANAGEMENT (INPUT/OUTPUT MANAGEMENT)
Device management encompasses the management of I/O devices such as printers, keyboards, mice, disk drives, tape
drives, modems etc. The devices normally transfer and receive alphanumeric characters in ASCII, which uses 7 bits to
encode 128 characters, i.e. A-Z, a-z, 0-9 and 32 special printable characters such as % and *.

Major differences between I/O devices


a) Manner of operation – some are electromechanical (printer), other electromagnetic (disk) while others are
electronic (RAM).
b) Data transfer rates
c) Data representation – the data code formats used differ between devices.
d) Units of transfer – some may use bytes, others blocks.
e) Error conditions – the nature of errors varies from device to device.

The major objective of I/O management is to resolve these differences by supervising and synchronizing all input and
output transfers.

Basic functions of I/O management


1. Keeping track of the status of all devices, which requires special mechanisms.
2. Deciding policy to determine which process gets a device, for how long, and when. There are 3 basic techniques
for implementing these policies:
(i) Dedicated – a device is assigned to a single process.
(ii) Shared – A device is shared by many processes.
(iii) Virtual – A technique whereby one physical device is simulated on another physical device (RAM on hard
disk).
3. Allocation – it physically assigns the device to a process.
4. De-allocation of resources for processes.

Modules involved include:


 I/O traffic controller – keeps track of the status of all devices, control units and channels.
 I/O Scheduler – implements the policy algorithms used to allocate devices.
 I/O Device handler – performs the actual dynamic allocation once the I/O scheduler has made the decision.
To enhance I/O management, computer systems include special hardware components between the processor and the
peripherals. They are called interface units.

Apart from interface units, each device must also have its own controller that supervises the operations of the
particular mechanism in the peripheral e.g.
- Tape controller (for magnetic tapes)
- Printer controller (that controls the paper motion, print timing e.t.c)
A controller may be housed separately or may be physically integrated with the peripheral.

Connection of I/O bus to input – output devices

[Diagram: the processor drives the I/O bus, consisting of data, address and control lines; each device (keyboard,
printer, disk) attaches to the bus through its own interface unit.]


Address bus – carries the device address that identifies the device being communicated with.
Data bus – carries data to or from a device.
Control bus – transmits I/O commands.
There are four types of commands that are transmitted:
(a) Control command – activates device and informs it what to do e.g. rewind
(b) Status command – used to test conditions in the interface and peripherals e.g. checking for errors.
(c) Data output command – causes the interface to transfer data to the device.
(d) Data input command – causes the interface to receive an item of data from the device.

Most computers use the same buses for both memory and input/output transfers. The processor distinguishes the two in
one of two ways:
a) Isolated I/O (I/O-mapped I/O) – isolates memory addresses from I/O addresses; a signal is sent that distinguishes
a memory reference from an I/O reference, enabling either a device or a memory location.
b) Memory-mapped I/O – devices are treated exactly the same as memory locations, and the same instructions are used
for both.
Example of I/O interface

[Diagram: an I/O interface chip. A bidirectional bus buffer connects the chip to the CPU data bus. On the device side
are a Port A I/O data register, a Port B I/O data register, a control register and a status register. Chip select
(CS), register select lines (RS1, RS0), I/O read (RD) and I/O write (WR) signals from the CPU choose and access the
registers, as the table below shows.]


Registers are small memories similar in nature to main memory.

CS   RS1   RS0   Register selected
0     X     X    None – chip not selected
1     0     0    Port A register
1     0     1    Port B register
1     1     0    Control register (8 bits)
1     1     1    Status register
Bits in the control register determine the various operating modes of the device, while bits in the status register
indicate status conditions or record errors that may occur during data transfer.
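The selection logic in the table can be expressed directly in code. A minimal sketch (Python, purely illustrative):

    # Decode the chip-select and register-select lines into the register accessed.
    def select_register(cs, rs1, rs0):
        if cs == 0:
            return None                 # chip not selected
        return {(0, 0): "Port A register",
                (0, 1): "Port B register",
                (1, 0): "Control register",
                (1, 1): "Status register"}[(rs1, rs0)]

    print(select_register(1, 1, 0))     # Control register
    print(select_register(0, 1, 1))     # None - chip not selected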
INPUT – OUTPUT PROCESSOR (IOP)
Instead of having each interface communicate with the processor a computer may incorporate one or more external
processors and assign them the task of communicating directly with all I/O devices.

It takes care of input/output tasks, relieving the main processor of the housekeeping chores involved in I/O
transfers. In addition, the IOP can perform other processing tasks such as arithmetic, logic and code translation.
Block diagram of a computer with I/O Processor

[Diagram: the CPU and the IOP both connect to the memory unit; the IOP drives the I/O bus, to which the peripheral
devices (PD) are attached.]

TECHNIQUES FOR PERFORMING I/O OPERATIONS (Modes of transfer)


a) Programmed I/O – occurs as a result of I/O instructions written in the computer program. Each data transfer is
initiated by an instruction in the program. Once a transfer has been initiated, the CPU must keep monitoring the
interface to see when the next transfer can be made (a minimal polling sketch follows this list).
Disadvantage: the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer
(time consuming).
b) Interrupt-driven I/O (interrupt-initiated I/O)
Requires the interface to issue an interrupt request signal only when data are available from the device. In the
meantime the CPU can proceed to execute another program while the interface keeps monitoring the device. When the
interface determines that the device is ready, it generates an interrupt request to the computer; the CPU then
momentarily stops the task it is processing, branches to a service routine to process the I/O transfer, and
returns to the original task later.
c) Direct Memory Access (DMA) – a transfer technique that removes the CPU from control of the transfer of data
between a fast storage device and memory, since at times the transfer rate is limited by the speed of the CPU.
Removing the CPU from the path and letting the peripheral device manage the memory buses directly improves the
speed of transfer. The CPU has no control of the memory buses during this operation; a DMA controller takes over
the buses to manage the transfer directly between the I/O device and memory.
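A minimal sketch of the busy-wait loop that makes programmed I/O time consuming (Python; the device status check is
simulated, since real status registers are hardware):

    import random

    def device_ready():
        # Stand-in for a status command reading the interface's status register.
        return random.random() < 0.1       # the device becomes ready eventually

    def programmed_io_write(data):
        while not device_ready():          # the CPU burns cycles in this loop
            pass                           # busy-wait: no useful work happens
        print("transferring:", data)       # stand-in for the data output command

    programmed_io_write("one byte")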

SOFTWARE CONSIDERATIONS
Computers must have software routines for controlling peripherals and for transfer of data between the processor and
peripherals. I/O routines must issue control commands to activate the peripheral and to check the device status to
determine when it is ready for data transfer.

In interrupt-controlled transfers the I/O software must issue commands to the peripheral to interrupt when ready and
must service the interrupt when it occurs. Software control of input-output equipment is a complex undertaking. For
this reason I/O routines for standard peripherals are provided by the manufacturer as part of the computer system
(usually within the o/s) as device drivers, which communicate directly with peripheral devices or their controllers.
A driver is responsible for starting I/O operations on a device and processing the completion of an I/O request.
DISKS
They include floppy disks, hard disks, disk caches, RAM disks and laser optical disks (DVDs, CD-ROMs). On DVDs and
CD-ROMs the sectors form one long spiral that winds out from the center of the disk. On floppy disks and hard disks
the sectors are organized into a number of concentric circles, or tracks. As one moves out from the center of the
disk, the tracks get larger.

Disk Hardware
This refers to the disk drive that accesses information on the disk. The actual details of a disk I/O operation
depend on the computer system, the operating system and the nature of the I/O channel and disk controller hardware.

Data is recorded on a series of magnetic disks, or platters, connected by a common spindle that spins at very high
speed (some up to 3600 revolutions per minute).
Movement of read/write heads
[Diagram: a stack of platters on a common spindle, each surface divided into tracks and sectors; read-write heads
mounted on a boom move across the platter surfaces. A cylinder is the set of tracks on different platters at the
same arm position.]

Track selection involves moving the head in a moveable head system or electronically selecting one head on a fixed
head system.
Hard Disk performance
Three factors determine access time A – the time it takes to access data on a disk.
 Seek time, S – in a movable-head system, the time it takes to position the head at the required track.
 Latency time, L (rotational delay/rotational latency) – the time it takes for the required data to rotate from its
current position to a position adjacent to the read-write head.
 Transfer time, T – determined by the amount of information to be read, the number of bytes per track and the
rotational speed.
Therefore access time is the sum of seek time, latency time and transfer time:
A = S + L + T
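As a worked example (all figures assumed, not taken from these notes): with an average seek of 9 ms, a 7200 rpm drive
and a 2 ms transfer time:

    seek = 9.0                        # average seek time S, in ms (assumed)
    rpm = 7200
    latency = 0.5 * 60_000 / rpm      # average L = half a revolution, about 4.17 ms
    transfer = 2.0                    # transfer time T, in ms (assumed)

    access = seek + latency + transfer     # A = S + L + T
    print(round(access, 2), "ms")          # 15.17 ms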
Disk Scheduling
Involves careful examination of pending requests to determine the most efficient way to service them. A disk
scheduler examines the positional relationships among waiting requests. The request queue is then re-ordered so that
the requests will be serviced with minimum mechanical motion.
Common types of scheduling:
1. Seek Optimization
It aims to reduce average time spent on seeks. Algorithms used:
a) FCFS (First Come First Served) – requests are processed from the queue in sequential (arrival) order. There is
no re-ordering of the queue (implementations of the first three algorithms are sketched after this list).

b) SSTF (Shortest Seek Time First) – the disk arm is positioned next at the request (inward or outward) that
minimizes arm movement.

c) SCAN – Disk arm sweeps back and forth across the disk surface servicing all requests in its path. It changes
direction only when there are no more requests to service in the current direction.
d) C-SCAN (Circular SCAN) – the disk arm moves unidirectionally across the disk surface toward the inner track.
When there are no more requests for service ahead of the arm, it jumps back to service the request nearest the
outer track and proceeds inward again.
e) N-Step SCAN – the disk arm sweeps back and forth as in SCAN, but all requests that arrive during a sweep in one
direction are batched and re-ordered for optimal service during the return sweep, i.e. it segments the disk
requests into sub-queues of length N. Sub-queues are processed one at a time using SCAN. While a queue is being
processed, new requests must be added to some other queue.
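A minimal sketch of the first three policies (Python; the cylinder numbers are a classic textbook request queue, used
here only for illustration). Each function returns the total head movement in cylinders:

    def fcfs(start, requests):
        total, pos = 0, start
        for r in requests:                 # service strictly in arrival order
            total += abs(r - pos)
            pos = r
        return total

    def sstf(start, requests):
        total, pos, pending = 0, start, list(requests)
        while pending:                     # always pick the nearest request
            r = min(pending, key=lambda x: abs(x - pos))
            pending.remove(r)
            total += abs(r - pos)
            pos = r
        return total

    def scan(start, requests):
        # Sweep outward servicing requests in the path, then reverse when
        # no requests remain ahead (as described above).
        up = sorted(r for r in requests if r >= start)
        down = sorted((r for r in requests if r < start), reverse=True)
        total, pos = 0, start
        for r in up + down:
            total += abs(r - pos)
            pos = r
        return total

    queue = [98, 183, 37, 122, 14, 124, 65, 67]
    print(fcfs(53, queue), sstf(53, queue), scan(53, queue))   # 640 236 299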
Explain these other algorithms: LOOK, C-LOOK, F-SCAN

2. Rotational Optimization
It is used in drums. Once the disk arm arrives at a particular cylinder there may be many requests pending on the
various tracks of that cylinder.
Shortest Latency Time First (SLTF) strategy is used and examines all these requests and services the one with the
shortest rotational delay first.
RAM DISKS
This is a disk device simulated in RAM chips. It completely eliminates the delays suffered by conventional disks
from the mechanical motions inherent in seeking and in spinning a disk, but RAM disks are much more expensive than
regular disks. Most forms are volatile – they lose their contents when power is turned off or the power supply is
interrupted.

CLOCKING SYSTEMS
Clocks and Timers
An interval timer is useful in multi-user systems for preventing one user from monopolizing a processor. After a
designated interval the timer generates an interrupt to gain the attention of the processor, after which the
processor may be assigned to another user.

A clock is a regular timing signal that governs transitions in a system. It is the heartbeat of the computer,
necessary for the timing and sequencing of operations.

VIRTUAL DEVICES
It is a technique whereby one physical device is simulated on another physical device. Virtual devices include:
a) Buffers / Buffering
A buffer is an area of primary storage for holding data during I/O transfers e.g. printer buffer. Techniques for
implementing buffering include:
i) Single buffered Input – The channel deposits data in a buffer, the processor processes that data; the channel
deposits the next data etc. While the channel is depositing data, no processing on that data may occur; while
the data is being processed no additional data may be deposited.
ii) Double buffering – allows overlap of input/output and processing: while the channel is depositing data in one
buffer, the processor may be processing the data in the other buffer (a minimal sketch is given below).

There are various approaches to buffering for different types of I/O devices, i.e. those that are block oriented
(store information in blocks of usually fixed size and transfer a block at a time, e.g. disks and tapes) and those
that are stream oriented (transfer data in and out as a stream of bytes, e.g. terminals, mice, communication ports).

Buffering aims at avoiding overheads and inefficiencies during I/O operations. Input transfers are performed in
advance of requests being made, and output transfers are performed some time after the request is made; in the
meantime the information is kept in a buffer.
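A minimal sketch of double buffering (Python; the filename is illustrative, and a plain loop stands in for a real
channel, where filling one buffer and processing the other would genuinely overlap in time):

    BUF_SIZE = 4096
    buffers = [bytearray(BUF_SIZE), bytearray(BUF_SIZE)]

    def process(block):
        pass   # stand-in for whatever the processor does with the data

    with open("input.dat", "rb") as f:
        current = 0
        n = f.readinto(buffers[current])       # fill the first buffer
        while n:
            other = 1 - current                # the buffer being refilled
            # In a real system the channel/DMA fills `other` while the CPU
            # processes `current`; here the two steps simply alternate.
            n_next = f.readinto(buffers[other])
            process(buffers[current][:n])
            current, n = other, n_next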

b) Spooling
A technique whereby a high-speed device is interposed between a running program and a low-speed device involved in
the program's input/output. Instead of writing directly to a printer, output is written to a disk. Programs can run
to completion faster, and other programs can be initiated sooner. Spooling improves system throughput by
dissociating a program from the slow operating speed of devices such as printers.

c) Caching
Cache memory is a memory that is smaller and faster than main memory and is interposed between main memory and the
processor. It reduces average memory access times by exploiting the principle of locality.
A disk cache is a buffer in main memory for disk sectors; the cache contains a copy of some of the sectors on the
disk, e.g. when using the DISKCOPY command, i.e. DISKCOPY A: A:

Assignment
Explain:
a) Graphic device (be sure to mention pixels and resolution)
b) RAID (Redundant array of Inexpensive disks) –main purpose and types.
SECURITY
Built into the operating system are mechanisms for protecting computing activities. Files, devices and memory must
be protected from improper access, and processes must be protected from improper interference by other processes.

Protection mechanisms implement protection policies. In some cases the policy is built into the o/s; in others it is
determined by the system administrator. Strict protection policies conflict with the need for information sharing
and convenient access, so a reasonable balance must be struck between these competing goals.

A system's security level is the extent to which all its resources are always protected as dictated by the system's
protection policies. Security mechanisms attempt to raise a system's level of security. However, the mechanisms that
achieve higher levels of security tend to make the system less efficient to use, so again a balance must be found
between safeguarding the system and providing convenient access. The major components of security are:
- Authentication
- Prevention
- Detection
- Identification
- Correction

1. AUTHENTICATION
Many authorization policies are based on the identity of the user associated with a process, so operating systems
need some mechanism for authenticating a user interacting with the computer system. The most commonly used
authentication technique is to require the user to supply one or more pieces of information known only to the user.
Examples include: passwords, smartcard systems (which use both a password and a function), and physical
authentication (fingerprints, retina scans).
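A minimal sketch of password checking as an o/s might perform it (Python; the record format is illustrative – real
systems store a salted hash, never the password itself, and would use a deliberately slow hash such as PBKDF2 rather
than a single SHA-256 round):

    import hashlib, os

    def make_record(password):
        salt = os.urandom(16)          # random per-user salt
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest            # this is stored; the password is not

    def authenticate(record, attempt):
        salt, digest = record
        return hashlib.sha256(salt + attempt.encode()).digest() == digest

    record = make_record("s3cret!")
    print(authenticate(record, "s3cret!"))   # True
    print(authenticate(record, "guess"))     # False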

2. PREVENTION
The most desirable outcome of a security system is for it to prevent intruders from successfully penetrating the
system's security. Preventive measures include:
 Requiring new passwords to meet given criteria, e.g. a minimum number of characters (a checker is sketched after
this list)
 Requiring passwords to be changed at periodic intervals
 Encrypting data when transmitting or storing it
 Turning off unused or duplicate services (reduces the number of system entry points)
 Implementing internal firewalls
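A minimal sketch of the password-criteria check from the first measure (Python; the particular criteria are
assumptions for illustration):

    import string

    def acceptable(password, min_len=8):
        # Enforce simple criteria: length plus at least one letter and one digit.
        return (len(password) >= min_len
                and any(c in string.ascii_letters for c in password)
                and any(c in string.digits for c in password))

    print(acceptable("abc"))         # False - too short
    print(acceptable("longerpw1"))   # True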

3. DETECTION
Should a break-in occur, its negative effects may be reduced by prompt detection, and effective detection measures
may also discourage intrusion attempts. Constant monitoring provides the best hope for fast discovery. Detection can
be achieved through:
 Auditing systems (which record the time and user involved in each login)
 Virus checkers
 The existence of a long-running process in a listing of currently executing processes, which may indicate
suspicious activity
 Checking the current state of the system against a previously recorded state (a file-integrity sketch follows
this list)
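A minimal sketch of checking the current state against a previous state via file fingerprints (Python; the path is
illustrative):

    import hashlib

    def fingerprint(path):
        # Hash the file's contents; any change to the file changes the digest.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Baseline recorded while the system was known to be clean.
    baseline = {"/etc/passwd": fingerprint("/etc/passwd")}

    def changed_files(baseline):
        return [p for p, h in baseline.items() if fingerprint(p) != h]

    print(changed_files(baseline))   # [] while the recorded files are unchanged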

4. CORRECTION
After a system has been penetrated it is frequently necessary to take corrective action:
 Periodic backups allow rolling the system back to a clean state
 Re-load the entire system when a backup is not available
 Change all resident security information, e.g. passwords
 Fix the vulnerability that led to the penetration

5. IDENTIFICATION
To discourage intruders, it is desirable to identify the source of an attack. Identification is frequently the most difficult
security task.
 Audit trails may provide useful information
 Systems accessed through modems can keep track of the source of incoming calls using caller ID.
 Networks can record address of the connecting computer
 All services can be configured to require user authentication e.g. mail servers

Exercise
Explain the following program threats:
i. Trapdoor
ii. Logic Bomb
iii. Trojan Horse
iv. Virus
v. Worm
vi. Zombies
