Escape from OS


INTRODUCTION

An OS is a software layer that sits between user programs and computer hardware; it abstracts the hardware resources and presents application programs with a clean view of those resources

The OS has two main functions:

1.Computer hardware abstraction – the architecture of most computers at the machine-language level is primitive and awkward to program, so the OS acts as an interface in the middle to simplify it

2.Computer resource management – provides an orderly and controlled allocation of the processors, memory and I/O devices among the programs competing for them

History of OS:

1.First gen (1945 – 1955) – no OS; programming was done in machine language

2.Second gen (1955 – 1965) – programs were written on paper and punched onto cards before being loaded into the computer with a card reader; the first OSs were introduced in this generation

3.Third gen (1965 – 1980) – introduction of integrated circuits; several new ideas appeared: multiprogramming (the ability to run several programs at the same time), spooling (the ability to read program cards onto disk directly) and time-sharing (more than one user can access and use the system at the same time)

4.Fourth gen (1980 – present) – introduction of large-scale integrated circuits

5.Fifth gen (1990 – present) – mobile computing introduced

Types of OS:

1.Batch systems – group jobs with similar requirements into batches for processing

2.Multiuser systems – allows multiple users to access the same OS and share its resources
simultaneously

3.Multitasking systems – allows multiple tasks to be performed simultaneously

4.Network systems – manages network resources and allows devices to communicate and share
resources

5.Distributed systems – connect multiple independent computers through a communication channel so that they work as one powerful machine

6.Real-time systems – manages time-bound tasks by prioritizing tasks based on deadlines

7.Embedded systems – designed to perform a specific task in smart devices, e.g. in vehicles

OS concepts:

1.Kernel – core program that manages hardware and software, and provides basic services for the
rest of the OS
2.Shell – the user interface to the OS; it can take the form of a command line or a GUI

3.Process – a program in execution

4.Address space – List of memory locations which a process is allowed to access

(At boot, the BIOS loads the kernel into RAM, and the kernel then loads the shell from disk into RAM)

5.File – an abstraction of data stored on disk

6.System call – the mechanism programs use to request services from the OS. It is initiated by executing a specific instruction, which triggers a switch into kernel mode; the OS then handles the request and returns the result to the program
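A minimal sketch of a system call in C, assuming a POSIX system such as Linux (the file descriptor and message below are just illustrative): the write() wrapper executes the trap instruction that switches the CPU into kernel mode, the kernel performs the I/O, and the result is handed back to the program.

/* Minimal system-call sketch (assumes a POSIX system such as Linux). The
 * write() wrapper executes the CPU's system-call instruction, which switches
 * to kernel mode; the kernel performs the request and returns the result
 * (number of bytes written, or -1 on error). */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    ssize_t n = write(1, msg, sizeof msg - 1);  /* fd 1 = standard output */
    return (n < 0) ? 1 : 0;                     /* result returned by the OS */
}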

PROCESS MANAGEMENT

-A process is an instance of an executing program; processes provide the ability to carry out concurrent operations even when there is only a single CPU

-A CPU runs one process at a time, but in a multiprogramming system, the CPU switches between
processes, running each for a few ms

-The OS uses processes to manage multiple running programs and to create the illusion of parallelism; a process's context consists of the program counter (PC), registers and variables

-A program (also called a job) is a set of instructions stored somewhere on disk; it is a passive entity

-When it is launched, it becomes active and a process (also called a task) is created

The process address space is the set of logical addresses that a process references in its code; it consists of a code section, a data section, a stack and a process control block (PCB)

The code section stores the program code, the data section (heap) holds global variables, and the stack stores local variables, parameters and return addresses

The PCB is used for storing process attributes such as ID and state

The collection of the items in the address space is also called the process image

Process lifecycle:

1.Two state process model – the process lifecycle consists of just two states, running and not running

The not running state is split into two states:

i.Ready

ii.Blocked (processes waiting for an I/O operation to complete)

2.Five state process model – when a process is first created, it is put into a job queue and given the new state; it is still in secondary storage, but the control information regarding the new process is created and maintained in memory

When the process in the job queue is scheduled to be brought into the ready queue, its state becomes ready; this is the admit event

When this process is then selected by the scheduling mechanism and dispatched, it becomes a running process; it runs until it is interrupted or finishes its execution. This is the dispatch event. The interrupt event can change its state back to ready, for example due to expiry of a time slice (e.g. in round robin) or the arrival of a higher-priority process

Alternatively, the process can reach an instruction which requires an I/O device or some other event; it then enters the blocked state and is placed in the blocked queue. When the I/O access or the other event completes, the process goes back to the ready queue

A process which completes its execution becomes a terminated process through the exit event; other reasons for termination include unrecoverable errors or being aborted by a user or another process. A terminated process releases all the resources allocated to it

3.Seven state process model – when there is not enough memory to bring in new processes, space can be created by swapping some blocked processes out to disk; these are now called suspended processes and are held in a suspended queue

Structure of PCB:
Some key attributes include Process ID, PC, Registers, State, Priority, Event info

Process tables:

-Store the ID of every process and a corresponding pointer to its PCB

Queues are implemented using linked-list data structures; the main queues are:

the ready queue, the blocked queue, the suspended queue, and the free-process queue (which records the empty space in memory where a new PCB can be created)

Each PCB has a pointer that points to the next PCB; when the state of a process changes, its PCB is moved to the appropriate queue

The running-process header can point to only one PCB, since the CPU can run only one process at a time
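A rough sketch of how a PCB and the linked-list queues described above might be laid out. The field names, state names and helper functions are invented for illustration and are not taken from any real kernel; moving a process between states is simply moving its PCB between queues.

/* Illustrative sketch only: field and queue names are made up for this
 * example, not taken from a real kernel. Each PCB carries a "next" pointer
 * so the OS can link PCBs into the ready, blocked and other queues. */
#include <stddef.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, SUSPENDED, TERMINATED } proc_state;

typedef struct pcb {
    int           pid;        /* process ID                          */
    proc_state    state;      /* current state                       */
    int           priority;
    unsigned long pc;         /* saved program counter               */
    unsigned long regs[16];   /* saved general-purpose registers     */
    int           event_info; /* resource/I-O the process waits for  */
    struct pcb   *next;       /* link to the next PCB in its queue   */
} pcb;

typedef struct { pcb *head, *tail; } pcb_queue;

/* Append a PCB to the tail of a queue (e.g. the ready queue). */
static void enqueue(pcb_queue *q, pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head of a queue, or NULL if empty. */
static pcb *dequeue(pcb_queue *q) {
    pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void) {
    static pcb p1 = { .pid = 1, .state = READY };
    pcb_queue ready = {0}, blocked = {0};
    enqueue(&ready, &p1);            /* process admitted to the ready queue */
    pcb *running = dequeue(&ready);  /* dispatched: ready -> running        */
    running->state = BLOCKED;        /* issues I/O: running -> blocked      */
    enqueue(&blocked, running);
    return 0;
}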

After the state of a running process is changed due to an interrupt or an I/O wait, the status of the process is saved in its PCB; this saving of the status of the running process is called context saving

The loading of a new process onto the CPU after a previous process is stopped is called context (or process) switching

Types of interrupts:

1.System calls

2.Exceptional conditions – e.g. arithmetic exceptions

3.I/O Completion

4.External – e.g. expiry of the timer clock in round robin

Steps of process switching:

1.The context of the currently running process is saved onto the stack; this includes the PC, PSW and registers

2.The PC is loaded with the address of the ISR (Interrupt service routine) that can handle the
interrupt

3.System control is transferred to the ISR

4.The ISR changes the status of the preceding process and saves its context from the stack into the process's PCB

5.The ISR processes the interrupt by invoking the response-handling function

6.The OS schedules a process to be executed next

7.The OS dispatches the scheduled process onto the CPU

Schedulers:

1.Long term scheduler – selects a job from the job pool and admits it to the ready queue; this does not happen very frequently because a process waits for some time in the job pool before getting a slot in the ready queue. Long term schedulers are common in batch processing and absent in time-sharing systems

2.Short term scheduler – selects a process from the ready queue for dispatch to the CPU; whenever there is an interrupt, this scheduler is invoked to select another process for execution

3.Medium term scheduler – present in models with the suspended states; used for moving processes from blocked to blocked-suspended (and vice versa) and from ready to ready-suspended (and vice versa)

Steps for dispatching operation:

1.The scheduler selects the process

2.Process code is loaded into memory, data and stack are initialized

3.The PCB is located and the saved fields (if the process has run before) are loaded into the CPU's registers and initialized

4.The CPU executes the process


Steps for blocking operation:

1.PCB is saved in the blocked queue

2.Event info field in the PCB stores the I/O device or the resource for which the blocked process is
waiting

3.When a resource or an I/O device is released in the system, the OS scans the event info field of all
blocked processes in the blocked queue

4.If a process matches, it is sent to the ready queue

Steps for termination operation: A process terminates when

1.Executes the last line of its code

2.User terminates the process

3.User logs off the system

4.Process encounters unrecoverable error

When a process reaches the execution of its last statement, a system call is executed to tell the OS that the process is terminating; the OS then releases the resources held by the process

PROCESS SCHEDULING

The following events can trigger process scheduling:

1.When an executing process finishes its execution

2.When a process needs to wait for an I/O or resource

3.When an I/O or resource being used by any process is released

4.When a process finishes its allocated time slice

5.When an executing process creates a child process

6.When another process comes with higher priority

7.If there is an error or exception in the process or hardware

8.When there is need to suspend some blocked process and make space for a new process

Types of scheduling:

1.Non-pre-emptive – a process is allowed to execute to completion; the system cannot take the processor away from the process until it exits, although the process can voluntarily release the processor, for example while waiting for I/O

2.Pre-emptive – a running process can be interrupted part-way through its execution; pre-emption takes place in the following situations
a) When a new process with higher priority arrives

b) When a resource or I/O being waited upon is released

c) A timer interrupt occurs

Pre-emptive scheduling is used to implement multi-user time-sharing systems, where more than one user process needs computing resources; a timer clock is used to send an interrupt to the running process after a fixed period of time

It is also used to implement real-time systems, where high-priority processes need to be assigned the processor immediately

Advantages of pre-emptive:

-Better process management in multiprogramming and multi-user environments

Disadvantages of pre-emptive:

-High cost/complexity in implementation due to the addition of the timer clock

-Reduced CPU performance due to time wasted in frequent context switching

Design requirements: Scheduling algorithms must satisfy the following:

1.Turnaround time – time between arrival and termination; turnaround time = waiting time + execution time

2.Waiting time – the time a process spends waiting in the ready queue; it should be as small as possible

3.Response time – time between arrival and the first response given by the process to the user

4.Throughput – number of processes completed per unit time; ideally this should be high

5.CPU utilization – percentage of time that the CPU is busy executing processes; ideally the CPU should be busy all the time

6.Fairness – e.g. a lower-priority process should not be ignored

7.Balance – All resources should be used in balance

Scheduling algorithms:

1.First come first served (FCFS) – processes are executed in the order in which they arrive; it is non-pre-emptive


Advantages:

-Simple to implement

Disadvantages:

-With no pre-emption, it cannot be used in multi-user or real-time systems

-Shorter processes suffer long waiting times when stuck behind long processes (the convoy effect)
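A small non-pre-emptive FCFS sketch with invented arrival and burst times: it serves the processes strictly in arrival order and prints the waiting and turnaround times defined earlier (turnaround = waiting + execution). With the long first burst below, the short processes end up waiting behind it, which is exactly the convoy problem mentioned above.

/* FCFS sketch with invented arrival/burst times, just to show how waiting and
 * turnaround times (turnaround = waiting + execution) come out of the policy.
 * Processes are served strictly in order of arrival with no pre-emption. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};     /* hypothetical arrival times   */
    int burst[]   = {10, 2, 3};    /* hypothetical CPU burst times */
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {               /* already sorted by arrival */
        if (clock < arrival[i]) clock = arrival[i]; /* CPU idle until arrival */
        int waiting    = clock - arrival[i];
        int turnaround = waiting + burst[i];
        clock += burst[i];                      /* run the process to completion */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
    }
    return 0;
}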

2.Priority scheduling – it is a pre-emptive algorithm; a lower number means a higher priority

Disadvantage: low-priority processes may be starved


3.Shortest process next (SPN) – the process with the shortest execution time is executed first; it is NON-PRE-EMPTIVE

Disadvantage: long processes may be starved


4.Shortest remaining time next (SRN) – the pre-emptive version of SPN; the criterion for pre-emption is the REMAINING TIME of the processes

5.Round robin scheduling – each arriving process gets the same amount of CPU time, so neither short nor long processes can be starved. The fixed time period is known as a time slice or time quantum; a process is interrupted after one time slice if it has not completed its execution

An optimum value of the time quantum has to be selected: if it is too large, the response time of processes increases; if it is too small, context-switch overhead increases due to the frequent process switching

The rules of thumb for selecting the time quantum:

-80% of the CPU bursts should be shorter than the time quantum

-The context-switch time should be 10% or less of the time quantum
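A round-robin sketch with an invented time quantum and burst times (all processes assumed to arrive at t = 0): each process runs for at most one quantum and, if unfinished, is interrupted and sent to the back of the ready queue.

/* Round-robin sketch: quantum and burst times are invented for illustration.
 * Each process runs for at most one quantum; if it has not finished, it goes
 * to the back of the ready queue and the next process is dispatched. */
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};            /* hypothetical remaining CPU time */
    int n = 3, quantum = 2, clock = 0, done = 0;

    int queue[64], head = 0, tail = 0;  /* simple circular ready queue */
    for (int i = 0; i < n; i++) queue[tail++ % 64] = i;

    while (done < n) {
        int p   = queue[head++ % 64];
        int run = burst[p] < quantum ? burst[p] : quantum;
        clock += run;
        burst[p] -= run;
        if (burst[p] == 0) {            /* finished: record completion time  */
            printf("P%d finished at t=%d\n", p + 1, clock);
            done++;
        } else {
            queue[tail++ % 64] = p;     /* time slice expired: back of queue */
        }
    }
    return 0;
}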


6.Highest response ratio next (HRRN) scheduling – the response ratio is given by:

(waiting time + expected service time) / expected service time

A high ratio indicates that a process has received little service relative to the time it has spent in the system and should get access to the CPU soon; the system schedules the process with the highest response ratio. This distributes processor time more fairly than the simple round robin algorithm
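A sketch of one HRRN scheduling decision with invented waiting and service times: the response ratio is computed for every waiting process and the one with the highest ratio is dispatched, so long-waiting processes eventually win even against shorter jobs.

/* HRRN sketch: at a scheduling decision, compute the response ratio
 * (waiting time + expected service time) / expected service time for every
 * waiting process and dispatch the one with the highest ratio. The waiting
 * and service times below are invented for illustration. */
#include <stdio.h>

int main(void) {
    double waiting[] = {9.0, 2.0, 5.0};   /* time already spent waiting */
    double service[] = {3.0, 1.0, 6.0};   /* expected CPU service time  */
    int n = 3, best = 0;
    double best_ratio = 0.0;

    for (int i = 0; i < n; i++) {
        double ratio = (waiting[i] + service[i]) / service[i];
        printf("P%d: response ratio = %.2f\n", i + 1, ratio);
        if (ratio > best_ratio) { best_ratio = ratio; best = i; }
    }
    printf("dispatch P%d\n", best + 1);   /* the long-waiting P1 wins here */
    return 0;
}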

7.Multi level queue scheduling – processes can be categorized into:

-Interactive processes

-Non-interactive

-CPU-Bound

-I/O Bound

-Foreground

-Background

Instead of a single ready queue storing all the processes, there are multiple queues at different levels

Each queue stores processes of the same category, and queues are assigned different priorities (e.g. background queues have lower priority than the others)

PROCESS SYNCHRONIZATION

Processes can be independent (they do not affect any other process) or cooperative (they cooperate with each other, sharing data and communicating); concurrent access to resources by cooperative processes can cause problems if it is not managed well

The procedure involved in preserving the appropriate order of execution of cooperative processes is known as process synchronization

Data access synchronization:


When more than one process accesses and updates the same data concurrently, the result depends on the sequence in which their instructions execute; this situation is known as a race condition, and it can lead to data inconsistency and hence wrong results

Data access synchronization is required to prevent processes from updating shared global data concurrently

Control synchronization:

-Consider two interacting processes where one process P2 depends on the output of another process P1; there should be control over the processes such that P2 is forced to wait until the execution of P1 has finished. This is known as control synchronization

Resource access synchronization:

-Lack of control over processes competing for access to the same resources can lead to a severe problem called deadlock

A critical section (CS) is a region of a program in which shared resources are accessed

Synchronization algorithms: All of them should satisfy the following requirements:

1.Mutual exclusion – only one process may be executing inside the critical section at a time

2.Progress – if a process does not need to enter the CS, it should not stop another process from entering it

3.Bounded waiting – a process must not wait endlessly to get into the CS

The following are the main algorithms:

1.Lock variable

Mutual exclusion can fail if pre-emption occurs (e.g. after P1 executes the while test but before it sets the lock): P2 can then run, find the lock free, set lock = 1 and enter, and when P1 resumes it enters as well
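A reconstruction of the lock-variable entry/exit code (the original notes' figure is not reproduced here); the comment marks the window in which pre-emption breaks mutual exclusion.

/* Lock-variable sketch (reconstruction). lock == 0 means the critical
 * section (CS) is free, lock == 1 means some process is inside it. */
int lock = 0;               /* shared between the processes */

void enter_cs(void) {
    while (lock == 1)       /* busy-wait until the CS looks free              */
        ;                   /* <-- if pre-empted right here, another process  */
                            /*     also sees lock == 0, sets it to 1 and      */
                            /*     enters; when we resume we enter too, so    */
                            /*     mutual exclusion fails                     */
    lock = 1;               /* claim the CS (test and set are NOT atomic)     */
}

void exit_cs(void) {
    lock = 0;               /* release the CS */
}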

2.Turn variable

Progress can fail under strict alternation: if one process has left turn = j but process j does not want to enter, process i cannot enter again until j has entered
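A reconstruction of the turn-variable (strict alternation) entry/exit code; function and variable names are illustrative.

/* Turn-variable (strict alternation) sketch for two processes P0 and P1. */
int turn = 0;                 /* whose turn it is to enter the CS */

void enter_cs(int i) {        /* i = this process's number (0 or 1) */
    while (turn != i)         /* busy-wait until it is my turn */
        ;
}

void exit_cs(int j) {         /* j = the OTHER process's number */
    turn = j;                 /* hand the turn over */
}
/* Progress fails: if the other process never wants to enter, the turn is
 * never handed back, and this process cannot enter a second time. */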

3.Interested variable

Bounded waiting can fail if both processes are pre-empted just after setting their interested flags: each then waits for the other, and they can remain stuck in the while loop
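A reconstruction of the interested-variable (flag) entry/exit code; the comment marks how both processes can end up spinning in the while loop.

/* Interested-variable (flag) sketch, reconstructed for two processes. Each
 * process announces interest before entering and waits while the other is
 * interested. */
int interested[2] = {0, 0};      /* interested[i] = 1 means P_i wants the CS */

void enter_cs(int i, int j) {    /* i = self, j = the other process */
    interested[i] = 1;           /* announce interest                        */
    while (interested[j])        /* wait while the other is interested       */
        ;                        /* if both set their flags before either    */
                                 /* tests, both spin here indefinitely       */
}

void exit_cs(int i) {
    interested[i] = 0;           /* withdraw interest */
}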

4.Peterson's solution – combines the turn and interested variables so that mutual exclusion, progress and bounded waiting are all satisfied
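A sketch of Peterson's solution for two processes, combining the two previous ideas: a process defers only while the other is interested AND it is the other's turn. On real hardware this plain-variable version would additionally need atomic operations or memory barriers to prevent reordering; that detail is omitted here.

/* Peterson's solution sketch for two processes (0 and 1). */
int interested[2] = {0, 0};
int turn = 0;

void enter_cs(int i) {
    int j = 1 - i;               /* the other process            */
    interested[i] = 1;           /* announce interest             */
    turn = j;                    /* politely give the turn away   */
    while (interested[j] && turn == j)
        ;                        /* wait only while the other is interested
                                    and it is the other's turn */
}

void exit_cs(int i) {
    interested[i] = 0;           /* leaving: withdraw interest */
}
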
Semaphore:

Used to protect resources such as global shared memory; it is implemented as an integer variable S

The semaphore is accessed through only two indivisible operations: wait/P/down and signal/V/up

Initially, the count of the semaphore is 1. Whenever a process tries to enter the CS, it performs the wait operation, which decrements S; another process trying to enter the CS is not allowed in until the semaphore becomes greater than 0 again. When a process exits the CS, it performs the signal operation, which increments S
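A usage sketch with POSIX semaphores as one concrete realization of wait/signal (assumes a POSIX system with pthreads; the thread and iteration counts are arbitrary): the semaphore starts at 1, so only one thread at a time can be inside the critical section that updates the shared counter.

/* POSIX semaphore usage sketch (compile with -pthread on Linux). The
 * semaphore starts at 1, so only one thread can be in the CS at a time. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;            /* the semaphore, initial count 1              */
static long counter = 0;   /* shared data updated in the critical section */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);      /* wait / P / down: blocks while count == 0 */
        counter++;         /* critical section                         */
        sem_post(&s);      /* signal / V / up: lets another thread in  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);    /* 0 = shared between threads, not processes */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 with the semaphore */
    sem_destroy(&s);
    return 0;
}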

Message passing:

-Used to communicate between two systems, where there is no real way to share memory, so a logical communication link is needed

Types of addressing:

1.Direct addressing – the send and receive operations are used to communicate; the two processes need to name each other to communicate, which becomes easy if they have the same parent
2.Indirect addressing – each process has a mailbox that it uses for receiving messages
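A toy sketch of indirect addressing: the mailbox structure, message size and function names below are invented for illustration; a real OS would expose send/receive as system calls and block the receiver when the mailbox is empty.

/* Indirect-addressing sketch: processes name a mailbox instead of naming
 * each other. Everything here is illustrative, not a real OS interface. */
#include <stdio.h>
#include <string.h>

#define SLOTS 8
#define MSG_LEN 64

typedef struct {
    char slots[SLOTS][MSG_LEN];
    int  head, tail, count;
} mailbox;

/* Deposit a message into the mailbox; returns -1 if the mailbox is full. */
int mb_send(mailbox *m, const char *msg) {
    if (m->count == SLOTS) return -1;
    strncpy(m->slots[m->tail], msg, MSG_LEN - 1);
    m->slots[m->tail][MSG_LEN - 1] = '\0';
    m->tail = (m->tail + 1) % SLOTS;
    m->count++;
    return 0;
}

/* Take the oldest message out of the mailbox; returns -1 if it is empty. */
int mb_receive(mailbox *m, char *out) {
    if (m->count == 0) return -1;
    strcpy(out, m->slots[m->head]);
    m->head = (m->head + 1) % SLOTS;
    m->count--;
    return 0;
}

int main(void) {
    mailbox mb = {0};
    char buf[MSG_LEN];
    mb_send(&mb, "result from P1");      /* P1 deposits a message */
    if (mb_receive(&mb, buf) == 0)       /* P2 later picks it up  */
        printf("received: %s\n", buf);
    return 0;
}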
