
OPERATING SYSTEMS

LECTURE NOTES

MCA I YEAR – II SEM

(2022-2023)

DEPARTMENT OF COMPUTER APPLICATIONS

HINDUSTHAN COLLEGE OF ENGINEERING & TECHNOLOGY

Programme: MCA    Course Code: 21CA2292    Name of the Course: OPERATING SYSTEMS    L T P C: 3 0 0 -

COURSE OBJECTIVES
1. To introduce operating system concepts and designs and to provide the skills required to implement OS services.
2. To describe the concepts of process synchronization.
3. To describe threads and deadlocks.
4. To describe the concepts of memory management with respect to physical and virtual memory.
5. To understand file management, I/O devices and various disk scheduling strategies.

UNIT I: OS INTRODUCTION, PROCESS MANAGEMENT AND SCHEDULING ALGORITHMS (9 instructional hours)
Introduction: Concept of Operating Systems (OS), Generations of OS, Types of OS, OS Services, System Calls, Operating System Structure.
Processes: Definition, Process Relationship, Different states of a Process, Process State transitions, Process Control Block (PCB), Context switching.
Process Scheduling: Basic concepts of scheduling, Types of Schedulers, Scheduling criteria: CPU utilization, Throughput, Turnaround Time, Waiting Time, Response Time.
Scheduling algorithms: Pre-emptive and non-pre-emptive, FCFS, SJF, RR.

UNIT II: PROCESS SYNCHRONIZATION (9 instructional hours)
Inter-process Communication: Concurrent processes, precedence graphs, Critical Section, Race Conditions, Mutual Exclusion, Hardware Solution, Semaphores, Strict Alternation, Peterson's Solution, The Producer/Consumer Problem, Event Counters, Monitors, Message Passing, Classical IPC Problems: Readers & Writers Problem, Dining Philosophers Problem.
Concurrent Programming: Critical region, conditional critical region, monitors, concurrent languages, communicating sequential processes (CSP); Deadlocks - prevention, avoidance, detection and recovery.

UNIT III: THREADS AND DEADLOCKS (9 instructional hours)
Threads: Definition, Various states, Benefits of threads, Types of threads, Concept of multithreading.
Deadlocks: Definition, Necessary and sufficient conditions for Deadlock, Deadlock Prevention, Deadlock Avoidance: Banker's algorithm, Deadlock detection and Recovery.

UNIT IV: MEMORY MANAGEMENT (9 instructional hours)
Memory Management: Basic concepts, Logical and Physical address maps, Memory allocation: Contiguous Memory allocation - Fixed and variable partition - Internal and External fragmentation and Compaction.
Virtual Memory: Basics of Virtual Memory - Hardware and control structures - Locality of reference, Page allocation, Partitioning, Paging, Page fault, Working Set, Segmentation, Demand paging, Page Replacement algorithms: Optimal, First In First Out (FIFO), Second Chance (SC), Not Recently Used (NRU) and Least Recently Used (LRU).

UNIT V: FILE SYSTEMS MANAGEMENT, I/O AND DISK MANAGEMENT (9 instructional hours)
File Management: Concept of File, Access methods, File types, File operations, Directory structure, File System structure, Allocation methods (contiguous, linked, indexed), Free-space management (bit vector, linked list, grouping).
I/O Hardware: I/O devices, Device controllers, Direct Memory Access, Principles of I/O.
Disk Management: Disk structure, Disk scheduling - FCFS, SSTF, SCAN, C-SCAN, Disk reliability, Disk formatting, Boot block, Bad blocks.

Total Instructional hours: 45

COURSE OUTCOMES
CO1: Describe the various OS functionalities, structures, process management and scheduling algorithms.
CO2: Apply and explore inter-process communication and synchronization techniques.
CO3: Understand threads and deadlocks.
CO4: Implement memory placement strategies and replacement algorithms related to main and virtual memory techniques.
CO5: Differentiate file systems for applying various file allocation and access techniques, I/O and disk scheduling strategies.

REFERENCE BOOKS:
R1. Silberschatz A., Galvin P. B., Gagne G., Operating System Concepts, Wiley, 10th Edition, 2019.
R2. Tanenbaum A. S., Woodhull A. S., Operating Systems: Design and Implementation, Prentice Hall, 1997.
R3. Arpaci-Dusseau R. H., Arpaci-Dusseau A. C., Operating Systems: Three Easy Pieces, Arpaci-Dusseau Books, 2015.
R4. Dhamdhere D. M., Operating Systems: A Concept-Based Approach, 2nd Edition, Tata McGraw-Hill Education, 2006.
R5. Deitel H. M., Deitel P. J., Choffnes D. R., Operating Systems, Pearson Education, 2004.
UNIT-I

An operating system performs the following functions:

1. Booting
Booting is the process of starting the computer: the operating system starts the computer and makes it ready to work. It checks the hardware and prepares the system for use.

2. Memory Management
Memory management is an important function of the operating system. Memory cannot be managed without an operating system. Different programs and data reside in memory at the same time; without an operating system the programs could interfere with each other and the system would not work properly.

3. Loading and Execution
A program must be loaded into memory before it can be executed. The operating system provides the facility to load programs into memory easily and then execute them.

4. Data Security
Data is an important part of a computer system. The operating system protects the data stored on the computer from illegal use, modification or deletion.

5. Disk Management
The operating system manages the disk space. It keeps the stored files and folders organized in a proper way.

6. Process Management
The CPU can perform one task at a time. If there are many tasks, the operating system decides which task should get the CPU.

7. Device Controlling
The operating system also controls all devices attached to the computer. The hardware devices are controlled with the help of small pieces of software called device drivers.

8. Providing an Interface
The user interface allows the user to interact with the computer. It controls how data and instructions are entered and how information is displayed on the screen. The operating system offers two types of interface to the user:
1. Graphical user interface (GUI): provides a visual environment to communicate with the computer. It uses windows, icons, menus and other graphical objects to issue commands.
2. Command-line interface (CLI): provides an interface to communicate with the computer by typing commands.
Computer System Architecture

A computer system can be divided into four components:
1. Hardware - provides the basic computing resources: CPU, memory, I/O devices.
2. Operating system - controls and coordinates the use of the hardware among the various application programs and users.
3. Application programs - define the ways in which the system resources are used to solve the computing problems of the users: word processors, compilers, web browsers, database systems, video games.
4. Users - people, machines, other computers.

Computer architecture means the construction/design of a computer. A computer system may be organized in different ways. Some computer systems have a single processor and others have multiple processors. Based on the processors used, computer systems are categorized into the following systems:

1. Single-processor systems
2. Multiprocessor systems
3. Clustered systems

1. Single-Processor Systems:

Some computers use only one processor, such as microcomputers (or personal computers, PCs). On a single-processor system there is only one CPU that performs all the activities in the computer system. However, most of these systems have other special-purpose processors, such as I/O processors that move data quickly among the different components of the computer. These processors execute only a limited set of system programs and do not run user programs; sometimes they are managed by the operating system. Similarly, PCs contain a special-purpose microprocessor in the keyboard, which converts the keystrokes into computer codes to be sent to the CPU. The use of special-purpose microprocessors is common in microcomputers, but it does not make such a system a multiprocessor. A system that has only one general-purpose CPU is considered a single-processor system.

2. Multiprocessor Systems:

In a multiprocessor system, two or more processors work together. In this system, multiple programs (more than one program) are executed on different processors at the same time. This type of processing is known as multiprocessing. Some operating systems have multiprocessing features; UNIX is an example of a multiprocessing operating system, and some versions of Microsoft Windows also support multiprocessing.

A multiprocessor system is also known as a parallel system. Usually the processors of a multiprocessor system share the common system bus, clock, memory and peripheral devices. This kind of system is very fast at data processing.

Types of Multiprocessor Systems:

Multiprocessor systems are further divided into two types:
(i) Asymmetric multiprocessing system
(ii) Symmetric multiprocessing system

(i) Asymmetric Multiprocessing System (AMS):

A multiprocessing system in which each processor is assigned a specific task is known as an Asymmetric Multiprocessing System. For example, one processor is dedicated to handling users' requests, one processor is dedicated to running application programs, one processor is dedicated to image processing, and so on. In this system one processor works as the master processor, while the other processors work as slave processors. The master processor controls the operations of the system; it also schedules and distributes tasks among the slave processors. The slave processors perform the predefined tasks.

(ii) Symmetric Multiprocessing System (SMP):

A multiprocessing system in which multiple processors work together on the same task is known as a Symmetric Multiprocessing System. In this system each processor can perform all types of tasks. All processors are treated equally and no master-slave relationship exists between the processors. For example, different processors in the system can communicate with each other. Similarly, an I/O operation can be processed on any processor. However, I/O must be controlled to ensure that the data reaches the appropriate processor: because all the processors share the same memory, the input data given to the processors and their results must be managed separately. Today all modern operating systems, including Windows and Linux, provide support for SMP.

It must be noted that on the same computer hardware, the asymmetric multiprocessing and symmetric multiprocessing techniques can be used through different operating systems.

A Dual-Core Design

3. Clustered Systems:

A clustered system is another form of multiprocessor system. This system also contains multiple processors, but it differs from a multiprocessor system. A clustered system consists of two or more individual systems that are coupled together. In a clustered system the individual systems (or clustered computers) share the same storage and are linked together via a Local Area Network (LAN).

A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the other nodes over the LAN. If the monitored machine fails due to some technical fault (or for any other reason), the monitoring machine can take ownership of its storage. The monitoring machine can also restart the applications that were running on the failed machine. The users of the applications see only a brief interruption of service.

Types of Clustered Systems:

Like multiprocessor systems, clustered systems can also be of two types:
(i) Asymmetric Clustered System
(ii) Symmetric Clustered System

(i) Asymmetric Clustered System:
In an asymmetric clustered system, one machine is in hot-standby mode while the other machine runs the application. The hot-standby host machine does nothing; it only monitors the active server. If the server fails, the hot-standby machine becomes the active server.

(ii) Symmetric Clustered System:
In a symmetric clustered system, multiple hosts (machines) run applications and they also monitor each other. This mode is more efficient than the asymmetric one because it uses all the available hardware. It is used when more than one application is available to run.

Operating System Structure

1) Multiprogramming
Multiprogramming is needed for efficiency:
A single user cannot keep the CPU and I/O devices busy at all times.
Multiprogramming organizes jobs (code and data) so the CPU always has one to execute.
A subset of the total jobs in the system is kept in memory.
2) Multitasking
Multitasking (timesharing) is a logical extension of multiprogramming in which the CPU switches between jobs so frequently that users can interact with each job while it is running.
Operating-System Operations

1) Dual-Mode Operation
In order to ensure the proper execution of the operating system, we must be able to distinguish between the execution of operating-system code and user-defined code. The approach taken by most computer systems is to provide hardware support that allows us to differentiate among various modes of execution.

At the very least we need two separate modes of operation: user mode and kernel mode. A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user. When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system (via a system call), it must transition from user to kernel mode to fulfill the request.

At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode (that is, changes the state of the mode bit to 0). Thus, whenever the operating system gains control of the computer, it is in kernel mode. The system always switches to user mode (by setting the mode bit to 1) before passing control to a user program.

The dual mode of operation provides us with the means for protecting the operating system from errant users, and errant users from one another. We accomplish this protection by designating some of the machine instructions that may cause harm as privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is made to execute a privileged instruction in user mode, the hardware does not execute the instruction but rather treats it as illegal and traps to the operating system. The instruction to switch to kernel mode is an example of a privileged instruction. Some other examples include I/O control, timer management and interrupt management.
Personal-Computer Systems (PCs)
A personal computer (PC) is a small, relatively inexpensive computer designed for an individual user. In price, personal computers range anywhere from a few hundred dollars to thousands of dollars. All are based on microprocessor technology, which enables manufacturers to put an entire CPU on one chip.
At home, the most popular use for personal computers is playing games. Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications.
Special-Purpose Systems

a) Real-Time Embedded Systems
These devices are found everywhere, from car engines and manufacturing robots to DVD players and microwave ovens. They tend to have very specific tasks. They have little or no user interface, preferring to spend their time monitoring and managing hardware devices, such as automobile engines and robotic arms.

b) Multimedia Systems
Most operating systems are designed to handle conventional data such as text files, programs, word-processing documents, and spreadsheets. However, a recent trend in technology is the incorporation of multimedia data into computer systems. Multimedia data consist of audio and video files as well as conventional files. These data differ from conventional data in that multimedia data, such as frames of video, must be delivered (streamed) according to certain time restrictions (for example, 30 frames per second). Multimedia describes a wide range of applications in popular use today. These include audio files such as MP3, DVD movies, video conferencing, and short video clips of movie previews or news stories downloaded over the Internet. Multimedia applications may also include live webcasts (broadcasting over the World Wide Web).

c) Handheld Systems
Handheld systems include personal digital assistants (PDAs) and cellular telephones. Developers of handheld systems and applications face many challenges, most of which are due to the limited size of such devices. For example, a PDA is typically about 5 inches in height and 3 inches in width, and it weighs less than one-half pound. Because of their size, most handheld devices have small amounts of memory, slow processors, and small display screens.
Operating System Services

One set of operating-system services provides functions that are helpful to the user:
Communications – Processes may exchange information, on the same computer or between computers over a network. Communication may be via shared memory or through message passing (packets moved by the OS).
Error detection – The OS needs to be constantly aware of possible errors. Errors may occur in the CPU and memory hardware, in I/O devices, or in the user program. For each type of error, the OS should take the appropriate action to ensure correct and consistent computing. Debugging facilities can greatly enhance the user's and programmer's abilities to use the system efficiently.

Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing:
Resource allocation – When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them. There are many types of resources: some (such as CPU cycles, main memory, and file storage) may have special allocation code, while others (such as I/O devices) may have general request and release code.
Accounting – To keep track of which users use how much and what kinds of computer resources.
Protection and security – The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with each other. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders requires user authentication and extends to defending external I/O devices from invalid access attempts. If a system is to be protected and secure, precautions must be instituted throughout it. A chain is only as strong as its weakest link.
User Operating-System Interface - CLI
A command-line interface (CLI), or command interpreter, allows direct command entry. It is sometimes implemented in the kernel, sometimes by a systems program, and sometimes multiple flavors are implemented, called shells. The interpreter primarily fetches a command from the user and executes it.

User Operating-System Interface - GUI
A GUI is a user-friendly desktop metaphor interface, usually based on a mouse, keyboard, and monitor. Icons represent files, programs, actions, etc. Various mouse buttons over objects in the interface cause various actions (provide information, options, execute a function, open a directory, known as a folder). The GUI was invented at Xerox PARC.
Many systems now include both CLI and GUI interfaces: Microsoft Windows is a GUI with a CLI "command" shell; Apple Mac OS X has the "Aqua" GUI interface with a UNIX kernel underneath and shells available; Solaris is CLI with optional GUI interfaces (Java Desktop, KDE).
System Calls

System calls provide a programming interface to the services provided by the OS. They are typically written in a high-level language (C or C++). They are mostly accessed by programs via a high-level Application Program Interface (API) rather than by direct system-call use. The three most common APIs are the Win32 API for Windows, the POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and the Java API for the Java Virtual Machine (JVM).
Why use APIs rather than system calls? The API hides the details of the underlying system calls from the programmer and improves portability.
Example of System Calls

Example of a Standard API
Consider the ReadFile() function in the Win32 API, a function for reading from a file.

The parameters passed to ReadFile() are:
o HANDLE file - the file to be read
o LPVOID buffer - a buffer where the data will be read into and written from
o DWORD bytesToRead - the number of bytes to be read into the buffer
o LPDWORD bytesRead - the number of bytes read during the last read
o LPOVERLAPPED ovl - indicates if overlapped I/O is being used
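As a concrete illustration, the following is a minimal sketch of calling ReadFile() from a C program; the file name "example.txt" is hypothetical and error handling is kept to a minimum.

#include <windows.h>
#include <stdio.h>

int main(void) {
    /* "example.txt" is a hypothetical file name used only for illustration. */
    HANDLE file = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        printf("could not open file\n");
        return 1;
    }

    char buffer[128];
    DWORD bytesRead = 0;
    /* ReadFile(file, buffer, bytesToRead, &bytesRead, ovl) as described above. */
    if (ReadFile(file, buffer, sizeof(buffer) - 1, &bytesRead, NULL)) {
        buffer[bytesRead] = '\0';
        printf("read %lu bytes: %s\n", (unsigned long)bytesRead, buffer);
    }
    CloseHandle(file);
    return 0;
}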
System Call Implementation
Typically, a number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers. The system-call interface invokes the intended system call in the OS kernel and returns the status of the system call and any return values.
The caller need know nothing about how the system call is implemented; it just needs to obey the API and understand what the OS will do as a result of the call. Most details of the OS interface are hidden from the programmer by the API and managed by the run-time support library (the set of functions built into libraries included with the compiler).

API - System Call - OS Relationship

Standard C Library Example
System Call Parameter Passing
Often, more information is required than simply the identity of the desired system call. The exact type and amount of information vary according to the OS and the call. Three general methods are used to pass parameters to the OS:
1. Simplest: pass the parameters in registers. In some cases there may be more parameters than registers.
2. Parameters stored in a block, or table, in memory, with the address of the block passed as a parameter in a register. This approach is taken by Linux and Solaris.
3. Parameters placed, or pushed, onto the stack by the program and popped off the stack by the operating system.
The block and stack methods do not limit the number or length of parameters being passed.

Parameter Passing via Table
Types of System Calls
1. Process control
2. File management
3. Device management
4. Information maintenance
5. Communications

Process control
A running program needs to be able to halt its execution either normally or abnormally. If a system call is made to terminate the running program, a dump of memory is sometimes taken and an error message generated, which can be diagnosed by a debugger. Typical process-control calls are listed below (a sketch follows the list):
o end, abort
o load, execute
o create process, terminate process
o get process attributes, set process attributes
o wait for time
o wait event, signal event
o allocate and free memory
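A minimal sketch of these process-control calls on a POSIX system (fork to create a process, exec to load and execute a program, wait for termination); the "ls -l" command is only an example.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* create process */
    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: load and execute a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        exit(1);
    } else {                         /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished\n", (int)pid);
    }
    return 0;                        /* end (normal termination) */
}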
File management
The OS provides an API to make these system calls for managing files:
o create file, delete file
o open, close file
o read, write, reposition
o get and set file attributes

Device management
A process requires several resources to execute; if these resources are available, they will be granted and control returned to the user process. Some resources are physical, such as a video card, and others are logical, such as a file. The user program requests the device and releases it when finished.
o request device, release device
o read, write, reposition
o get device attributes, set device attributes
o logically attach or detach devices
Information maintenance
Some system calls exist purely for transferring information between the user program and the OS. They can return information about the system, such as the number of current users, the version number of the operating system, the amount of free memory or disk space, and so on.
o get time or date, set time or date
o get system data, set system data
o get and set process, file, or device attributes

Communications
There are two common models of communication:
Message-passing model: information is exchanged through an interprocess-communication facility provided by the OS.
Shared-memory model: processes use map-memory system calls to gain access to regions of memory owned by other processes.
o create, delete communication connection
o send, receive messages
o transfer status information
o attach and detach remote devices

Examples of Windows and Unix System Calls

MS-DOS execution
(a) At system startup
(b) Running a program

FreeBSD Running Multiple Programs
System Programs
System programs provide a convenient environment for program development and execution. They can be divided into:
File manipulation
Status information
File modification
Programming-language support
Program loading and execution
Communications
Application programs

Most users' view of the operating system is defined by system programs, not the actual system calls. They provide a convenient environment for program development and execution; some of them are simply user interfaces to system calls, while others are considerably more complex.
 File management - create, delete, copy, rename, print, dump, list, and generally manipulate files and directories.
 Status information - some ask the system for information (date, time, amount of available memory, disk space, number of users); others provide detailed performance, logging, and debugging information. Typically, these programs format and print the output to the terminal or other output devices. Some systems implement a registry, used to store and retrieve configuration information.
 File modification - text editors to create and modify files, and special commands to search the contents of files or perform transformations of the text.
 Programming-language support - compilers, assemblers, debuggers and interpreters are sometimes provided.
 Program loading and execution - absolute loaders, relocatable loaders, linkage editors, and overlay loaders; debugging systems for higher-level and machine language.
 Communications - provide the mechanism for creating virtual connections among processes, users, and computer systems. They allow users to send messages to one another's screens, browse web pages, send electronic-mail messages, log in remotely, and transfer files from one machine to another.

Operating System Design and Implementation
Design and implementation of an OS is not "solvable", but some approaches have proven successful. The internal structure of different operating systems can vary widely.
Start by defining goals and specifications. The design is affected by the choice of hardware and the type of system. There are user goals and system goals:
User goals - the operating system should be convenient to use, easy to learn, reliable, safe, and fast.
System goals - the operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient.
An important principle is to separate policy from mechanism:
Policy: What will be done?
Mechanism: How to do it?
Mechanisms determine how to do something; policies decide what will be done. The separation of policy from mechanism is a very important principle: it allows maximum flexibility if policy decisions are to be changed later.

Simple Structure
MS-DOS was written to provide the most functionality in the least space. It is not divided into modules. Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated.

MS-DOS Layer Structure

Layered Approach
The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.
Traditional UNIX System Structure

UNIX
UNIX was limited by hardware functionality; the original UNIX operating system had limited structuring. The UNIX OS consists of two separable parts:
Systems programs
The kernel: consists of everything below the system-call interface and above the physical hardware. It provides the file system, CPU scheduling, memory management, and other operating-system functions - a large number of functions for one level.

Layered Operating System
Microkernel System Structure
The microkernel approach moves as much as possible from the kernel into "user" space. Communication takes place between user modules using message passing.
Benefits:
Easier to extend a microkernel
Easier to port the operating system to new architectures
More reliable (less code is running in kernel mode)
More secure
Detriments:
Performance overhead of user-space to kernel-space communication

Mac OS X Structure

Modules
Most modern operating systems implement kernel modules. This uses an object-oriented approach: each core component is separate, each talks to the others over known interfaces, and each is loadable as needed within the kernel. Overall, it is similar to layers but more flexible.

Solaris Modular Approach
Virtual Machines
A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating-system kernel as though they were all hardware. A virtual machine provides an interface identical to the underlying bare hardware. The host operating system creates the illusion that a process has its own processor and (virtual) memory. Each guest is provided with a (virtual) copy of the underlying computer.

Virtual Machines: History and Benefits
Virtual machines first appeared commercially in IBM mainframes in 1972. Fundamentally, multiple execution environments (different operating systems) can share the same hardware while being protected from each other. Some sharing of files can be permitted and controlled, and the virtual machines can communicate with each other and with other physical systems via networking. Virtual machines are useful for development and testing, and for consolidation of many low-resource-use systems onto fewer, busier systems.
The "Open Virtual Machine Format" is a standard format for virtual machines that allows a VM to run within many different virtual machine (host) platforms.

Para-virtualization
Para-virtualization presents the guest with a system similar but not identical to the real hardware. The guest must be modified to run on the para-virtualized hardware. The guest can be an OS, or, in the case of Solaris 10, applications running in containers.

Solaris 10 with Two Containers

VMware Architecture

The Java Virtual Machine
Operating-System Debugging

Debugging is finding and fixing errors, or bugs. Operating systems generate log files containing error information. Failure of an application can generate a core dump file capturing the memory of the process, and an operating-system failure can generate a crash dump file containing kernel memory. Beyond crashes, performance tuning can optimize system performance.
Kernighan's Law: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
The DTrace tool in Solaris, FreeBSD and Mac OS X allows live instrumentation on production systems. Probes fire when code is executed, capturing state data and sending it to consumers of those probes.
Process
A process is a program at the time of execution.

Differences between Process and Program

Process                                          Program
Process is a dynamic object.                     Program is a static object.
Process is a sequence of instruction execution.  Program is a sequence of instructions.
Process is loaded into main memory.              Program is stored on secondary storage devices.
Time span of a process is limited.               Time span of a program is unlimited.
Process is an active entity.                     Program is a passive entity.
Process States

When a process executes, it changes state; the state of a process is generally determined by the current activity of the process. Each process may be in one of the following states:
1. New        : The process is being created.
2. Running    : The process is being executed.
3. Waiting    : The process is waiting for some event to occur.
4. Ready      : The process is waiting to be assigned to a processor.
5. Terminated : The process has finished execution.
Only one process can be running on any processor at any time, but many processes may be in the ready and waiting states. The ready processes are loaded into a "ready queue".

Diagram of process states
a) New -> Ready: The OS creates the process and prepares it to be executed; the OS then moves the process into the ready queue.
b) Ready -> Running: The OS selects one of the jobs from the ready queue and moves it from ready to running.
c) Running -> Terminated: When the execution of a process has completed, the OS terminates that process from the running state. Sometimes the OS terminates the process for other reasons, including time exceeded, memory unavailable, access violation, protection error, I/O failure, and so on.
d) Running -> Ready: When the time slot of the processor expires, or if the processor receives an interrupt signal, the OS shifts the process from the Running to the Ready state.
e) Running -> Waiting: A process is put into the waiting state if it needs an event to occur or requires an I/O device.
f) Waiting -> Ready: A process in the waiting state is moved to the ready state when the event for which it has been waiting completes.
Process Control Block:

Each process is represented in the operating system by a Process Control Block (PCB). It is also called a Task Control Block. It contains many pieces of information associated with a specific process:

Process State
Program Counter
CPU Registers
CPU Scheduling Information
Memory-Management Information
Accounting Information
I/O Status Information

Process Control Block
1. Process State: the state may be new, ready, running, waiting, terminated, ...
2. Program Counter: indicates the address of the next instruction to be executed.
3. CPU Registers: include accumulators, stack pointers, general-purpose registers, ...
4. CPU-Scheduling Information: includes a process priority, pointers to scheduling queues, and other scheduling parameters.
5. Memory-Management Information: includes page tables, segmentation tables, and the values of the base and limit registers.
6. Accounting Information: includes the amount of CPU used, time limits, job (or) process numbers.
7. I/O Status Information: includes the list of I/O devices allocated to the process and the list of open files.
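A minimal sketch of how such a PCB might be declared in C; the field names are illustrative only and are not taken from any particular kernel.

#include <stdint.h>

/* Illustrative process states, matching the list above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A simplified Process Control Block; real kernels store many more fields. */
struct pcb {
    int             pid;             /* process identifier                */
    enum proc_state state;           /* current process state             */
    uint64_t        program_counter; /* address of the next instruction   */
    uint64_t        registers[16];   /* saved general-purpose registers   */
    int             priority;        /* CPU-scheduling information        */
    void           *page_table;      /* memory-management information     */
    uint64_t        cpu_time_used;   /* accounting information            */
    int             open_files[16];  /* I/O status: open file descriptors */
    struct pcb     *next;            /* link for ready/device queues      */
};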

Threads:

A process can be divided into a number of lightweight processes; each lightweight process is said to be a thread. A thread has a program counter (keeps track of which instruction to execute next), registers (hold its current working variables) and a stack (execution history).

Thread States:
1. Born state : a thread is just created.
2. Ready state: the thread is waiting for the CPU.
3. Running    : the system assigns the processor to the thread.
4. Sleep      : a sleeping thread becomes ready after the designated sleep time expires.
5. Dead       : the execution of the thread has finished.

E.g.: a word processor. Typing, formatting, spell checking and saving are threads.
Differences between Process and Thread

Process                                            Thread
Process takes more time to create.                 Thread takes less time to create.
It takes more time to complete execution           It takes less time to terminate.
and terminate.
Execution is very slow.                            Execution is very fast.
It takes more time to switch between two           It takes less time to switch between two
processes.                                         threads.
Communication between two processes is             Communication between two threads is
difficult.                                         easy.
Processes cannot share the same memory area.       Threads can share the same memory area.
System calls are required to communicate           System calls are not required.
with each other.
Process is loosely coupled.                        Threads are tightly coupled.
It requires more resources to execute.             Requires fewer resources to execute.
Multithreading

A process is divided into a number of smaller tasks; each task is called a thread. When a number of threads within a process execute at a time, it is called multithreading.
If a program is multithreaded, then even when some portion of it is blocked the whole program is not blocked; the rest of the program continues working if multiple CPUs are available. Multithreading gives the best performance; with only a single thread, no performance benefit is achieved no matter how many CPUs are available.
 Process creation is heavy-weight while thread creation is light-weight.
 Multithreading can simplify code and increase efficiency.
 Kernels are generally multithreaded.
Each thread of a process shares the process's code, data and open files, while having its own registers and stack (see the pthread sketch below):
CODE      - contains the instructions
DATA      - holds global variables
FILES     - opening and closing files
REGISTERS - contain information about the CPU state
STACK     - parameters, local variables, functions

Types of Threads
1. User-level threads: managed without kernel support.
 Thread switching does not require kernel-mode privileges.
 User-level threads can run on any operating system.
 Scheduling can be application-specific in user-level threads.
 User-level threads are fast to create and manage.
2. Kernel-level threads: supported and managed directly by the operating system.
PROCESS SCHEDULING:

In multiprogramming the CPU is always kept busy, because the CPU switches from one job to another; in simple computers the CPU sits idle until an I/O request is granted. Scheduling is an important OS function: all resources are scheduled before use (CPU, memory, devices, ...).
Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Scheduling Objectives
Maximize throughput.
Maximize the number of users receiving acceptable response times.
Be predictable.
Balance resource use.
Avoid indefinite postponement.
Enforce priorities.
Give preference to processes holding key resources.

SCHEDULING QUEUES: Just as people live in rooms, processes are present in "rooms" known as queues. There are three types:
1. Job queue: when processes enter the system, they are put into a job queue, which consists of all processes in the system. Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue: if a process is present in main memory and is ready to be allocated to the CPU for execution, it is kept in the ready queue.
3. Device queue: if a process is present in the waiting state, waiting for an I/O event to complete, it is said to be in a device queue. In other words, the processes waiting for a particular I/O device form that device's queue.

Schedulers: There are three schedulers:

1. Long-term scheduler
2. Medium-term scheduler
3. Short-term scheduler

Scheduler duties:

 Maintain the queues.
 Select a process from the queues and assign it to the CPU.

Types of schedulers

1. Long-term scheduler: selects jobs from the job pool and loads these jobs into main memory (the ready queue). The long-term scheduler is also called the job scheduler.
2. Short-term scheduler: selects a process from the ready queue and allocates it to the CPU. If a process requires an I/O device that is not currently available, the process enters a device queue. The short-term scheduler maintains the ready queue and device queues. It is also called the CPU scheduler.
3. Medium-term scheduler: if a process requests an I/O device in the middle of its execution, the process is removed from main memory and placed in the waiting queue. When the I/O operation completes, the job is moved from the waiting queue to the ready queue. These two operations are performed by the medium-term scheduler.

Context Switch: Assume main memory contains more than one process. If the CPU is executing a process and its time expires, or a higher-priority process enters main memory, the scheduler saves information about the current process in its PCB and switches to execute another process. The act of the scheduler moving the CPU from one process to another is known as a context switch.
Non-Preemptive Scheduling: the CPU is assigned to one process and is not released until the completion of that process. The CPU is assigned to some other process only after the previous process has finished.
Preemptive Scheduling: here the CPU can be taken away from a process even in the middle of its execution. Suppose the CPU receives a signal from process P2 while running P1; the OS compares the priorities of P1 and P2. If P1 > P2, the CPU continues the execution of P1; if P1 < P2, the CPU preempts P1 and is assigned to P2.
Dispatcher: the main job of the dispatcher is switching the CPU from one process to another. The dispatcher connects the CPU to the process selected by the short-term scheduler.
Dispatch latency: the time it takes the dispatcher to stop one process and start another is known as dispatch latency. If the dispatch latency increases, the degree of multiprogramming decreases.
SCHEDULING CRITERIA:

1. Throughput: how many jobs are completed by the CPU within a time period.
2. Turnaround time: the time interval between the submission of a process and the time of its completion.
   TAT = waiting time in ready queue + executing time + waiting time in waiting queue for I/O.
3. Waiting time: the time spent by a process waiting in the ready queue for the CPU to be allocated.
4. Response time: the time duration between submission and the first response.
5. CPU utilization: the CPU is a costly device and must be kept as busy as possible. E.g. a CPU efficiency of 90% means it is busy for 90 time units and idle for 10.
CPU SCHEDULING ALGORITHMS:

1. First Come First Served (FCFS) scheduling: The process that requests the CPU first holds the CPU first. When a process requests the CPU it is loaded into the ready queue, and the CPU serves the processes in arrival order.
Consider the following set of processes that arrive at time 0; the length of the CPU burst time is given in milliseconds. (Burst time is the time required by the CPU to execute that job.)

Process   Burst time (milliseconds)
P1        5
P2        24
P3        16
P4        10
P5        3

Average turnaround time:

Turnaround time = waiting time + burst time
Turnaround time for P1 = 0 + 5  = 5
Turnaround time for P2 = 5 + 24 = 29
Turnaround time for P3 = 29 + 16 = 45
Turnaround time for P4 = 45 + 10 = 55
Turnaround time for P5 = 55 + 3 = 58
Average turnaround time = (5 + 29 + 45 + 55 + 58)/5 = 192/5 = 38.4 milliseconds

Average waiting time:

Waiting time = starting time - arrival time
Waiting time for P1 = 0
Waiting time for P2 = 5 - 0 = 5
Waiting time for P3 = 29 - 0 = 29
Waiting time for P4 = 45 - 0 = 45
Waiting time for P5 = 55 - 0 = 55
Average waiting time = (0 + 5 + 29 + 45 + 55)/5 = 134/5 = 26.8 ms

Average response time:

Response time = first response - arrival time
Response time for P1 = 0
Response time for P2 = 5 - 0 = 5
Response time for P3 = 29 - 0 = 29
Response time for P4 = 45 - 0 = 45
Response time for P5 = 55 - 0 = 55
Average response time = (0 + 5 + 29 + 45 + 55)/5 = 134/5 = 26.8 ms
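A minimal sketch in C that reproduces the FCFS arithmetic above for jobs arriving at time 0 (the process data are hard-coded for illustration).

#include <stdio.h>

/* FCFS for jobs that all arrive at time 0: the waiting time of each job is
 * the sum of the burst times of the jobs served before it. */
int main(void) {
    int burst[] = {5, 24, 16, 10, 3};            /* P1..P5, in arrival order */
    int n = 5;
    int wait = 0;                                 /* waiting time of current job */
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];                /* turnaround = waiting + burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat += tat;
        wait += burst[i];                         /* next job starts when this one ends */
    }
    printf("average waiting = %.1f ms, average turnaround = %.1f ms\n",
           total_wait / n, total_tat / n);        /* 26.8 and 38.4 for this data */
    return 0;
}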
1) First Come First Serve (example with different arrival times):

FCFS is a non-preemptive scheduling algorithm.

PROCESS   BURST TIME   ARRIVAL TIME
P1        3            0
P2        6            2
P3        4            4
P4        5            6
P5        2            8

Processes arrive in the order P1, P2, P3, P4, P5:
P1 arrived at 0 ms, P2 at 2 ms, P3 at 4 ms, P4 at 6 ms, P5 at 8 ms.

Average turnaround time
Turnaround time = waiting time + burst time
Turnaround time for P1 = 0 + 3 = 3
Turnaround time for P2 = 1 + 6 = 7
Turnaround time for P3 = 5 + 4 = 9
Turnaround time for P4 = 7 + 5 = 12
Turnaround time for P5 = 10 + 2 = 12
Average turnaround time = (3 + 7 + 9 + 12 + 12)/5 = 43/5 = 8.6 ms

Average response time:
Response time = first response - arrival time
Response time of P1 = 0
Response time of P2 = 3 - 2 = 1
Response time of P3 = 9 - 4 = 5
Response time of P4 = 13 - 6 = 7
Response time of P5 = 18 - 8 = 10
Average response time = (0 + 1 + 5 + 7 + 10)/5 = 23/5 = 4.6 ms

Advantages: easy to implement, simple.
Disadvantage: average waiting time is very high.
2) Shortest Job First Scheduling (SJF):

The process having the smallest CPU burst time gets the CPU first. If two processes have the same CPU burst time, FCFS is used to break the tie.

PROCESS   CPU BURST TIME
P1        5
P2        24
P3        16
P4        10
P5        3

P5 has the least CPU burst time (3 ms), so the CPU is assigned to P5 first. After completion of P5 the short-term scheduler selects the next shortest job (P1), and so on.

Average waiting time:

Waiting time = starting time - arrival time
Waiting time for P1 = 3 - 0 = 3
Waiting time for P2 = 34 - 0 = 34
Waiting time for P3 = 18 - 0 = 18
Waiting time for P4 = 8 - 0 = 8
Waiting time for P5 = 0
Average waiting time = (3 + 34 + 18 + 8 + 0)/5 = 63/5 = 12.6 ms

Average turnaround time:

Turnaround time = waiting time + burst time
Turnaround time for P1 = 3 + 5 = 8
Turnaround time for P2 = 34 + 24 = 58
Turnaround time for P3 = 18 + 16 = 34
Turnaround time for P4 = 8 + 10 = 18
Turnaround time for P5 = 0 + 3 = 3
Average turnaround time = (8 + 58 + 34 + 18 + 3)/5 = 121/5 = 24.2 ms

Average response time:

Response time = first response - arrival time
Response time for P1 = 3 - 0 = 3
Response time for P2 = 34 - 0 = 34
Response time for P3 = 18 - 0 = 18
Response time for P4 = 8 - 0 = 8
Response time for P5 = 0
Average response time = (3 + 34 + 18 + 8 + 0)/5 = 63/5 = 12.6 ms

SJF, as used here, is a non-preemptive scheduling algorithm.
Advantages: least average waiting time, least average turnaround time, least average response time.
Average waiting time (FCFS) = 26.8 ms; average waiting time (SJF) = 12.6 ms - more than half the waiting time is saved by SJF.
Disadvantages:
 Knowing the length of the next CPU burst time is difficult.
 Starvation: big jobs may wait a long time for the CPU (aging is a remedy).
3) Shortest Remaining Time First (SRTF):

This is a preemptive scheduling algorithm.

The short-term scheduler always chooses the process that has the shortest remaining time. When a new process joins the ready queue, the short-term scheduler compares the remaining time of the executing process and the new process. If the new process has the least CPU burst time, the scheduler selects that job and connects it to the CPU; otherwise it continues the old process.

PROCESS   BURST TIME   ARRIVAL TIME
P1        3            0
P2        6            2
P3        4            4
P4        5            6
P5        2            8

P1 arrives at time 0, so P1 executes first. P2 arrives at time 2; compare P1's remaining time (3 - 2 = 1) with 6, so continue P1. After P1, P2 executes. At time 4, P3 arrives; compare P2's remaining time (6 - 1 = 5) with 4; since 4 < 5, execute P3. At time 6, P4 arrives; compare P3's remaining time (4 - 2 = 2) with 5; since 2 < 5, continue P3. After P3, the ready queue contains P2, P4 and P5; P5 is the shortest of the three, so execute P5, then P2, then P4.

Turnaround time = finish time - arrival time
Finish time for P1 = 3, turnaround = 3 - 0 = 3
Finish time for P2 = 15, turnaround = 15 - 2 = 13
Finish time for P3 = 8, turnaround = 8 - 4 = 4
Finish time for P4 = 20, turnaround = 20 - 6 = 14
Finish time for P5 = 10, turnaround = 10 - 8 = 2

Average turnaround time = 36/5 = 7.2 ms.

4) ROUND ROBIN SCHEDULING ALGORITHM:

It is designed especially for time-sharing systems. Here the CPU switches between the processes: when the time quantum expires, the CPU is switched to another job. A small unit of time is called a time quantum or time slice; a time quantum is generally from 10 to 100 ms and depends on the OS. Here the ready queue is a circular queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

Consider the following processes, all arriving at time 0, with a time quantum of 5 ms:

PROCESS   BURST TIME
P1        30
P2        6
P3        8

AVERAGE WAITING TIME:

Waiting time for P1 = 0 + (15 - 5) + (24 - 20) = 0 + 10 + 4 = 14
Waiting time for P2 = 5 + (20 - 10) = 5 + 10 = 15
Waiting time for P3 = 10 + (21 - 15) = 10 + 6 = 16
Average waiting time = (14 + 15 + 16)/3 = 15 ms

AVERAGE TURNAROUND TIME:
Turnaround time = waiting time + burst time
Turnaround time for P1 = 14 + 30 = 44
Turnaround time for P2 = 15 + 6 = 21
Turnaround time for P3 = 16 + 8 = 24
Average turnaround time = (44 + 21 + 24)/3 = 29.66 ms
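A minimal sketch in C of the round-robin simulation above (burst times and the 5 ms quantum hard-coded; all jobs arrive at time 0, so waiting time = completion time - burst time).

#include <stdio.h>

int main(void) {
    int burst[] = {30, 6, 8};                 /* P1, P2, P3 */
    int n = 3, quantum = 5;
    int remaining[3], completion[3] = {0};
    int done = 0, time = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {                        /* cycle through the ready queue */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;                      /* job runs for one slice (or less) */
            remaining[i] -= run;
            if (remaining[i] == 0) {          /* job finished */
                completion[i] = time;
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d: waiting=%d turnaround=%d\n",
               i + 1, completion[i] - burst[i], completion[i]);
    return 0;
}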

5) PRIORITY SCHEDULING:

PROCESS   BURST TIME   PRIORITY
P1        6            2
P2        12           4
P3        1            5
P4        3            1
P5        4            3

(A smaller priority number means a higher priority.) P4 has the highest priority, so the CPU is allocated to P4 first, then P1, P5, P2, P3.

AVERAGE WAITING TIME:

Waiting time for P1 = 3 - 0 = 3
Waiting time for P2 = 13 - 0 = 13
Waiting time for P3 = 25 - 0 = 25
Waiting time for P4 = 0
Waiting time for P5 = 9 - 0 = 9

Average waiting time = (3 + 13 + 25 + 0 + 9)/5 = 10 ms

AVERAGE TURNAROUND TIME:

Turnaround time for P1 = 3 + 6 = 9
Turnaround time for P2 = 13 + 12 = 25
Turnaround time for P3 = 25 + 1 = 26
Turnaround time for P4 = 0 + 3 = 3
Turnaround time for P5 = 9 + 4 = 13

Average turnaround time = (9 + 25 + 26 + 3 + 13)/5 = 15.2 ms

Disadvantage: Starvation

Starvation means that only high-priority processes keep executing, while low-priority processes wait for the CPU for very long periods of time.

Multiple-processor scheduling:
When multiple processors are available, scheduling gets more complicated, because there is more than one CPU which must be kept busy and in effective use at all times. Load sharing revolves around balancing the load between multiple processors. Multiprocessor systems may be heterogeneous (containing different kinds of CPUs) or homogeneous (all the same kind of CPU).

1) Approaches to multiple-processor scheduling
a) Asymmetric multiprocessing: one processor is the master, controlling all activities and running all kernel code, while the others run only user code.
b) Symmetric multiprocessing: each processor schedules its own jobs. Each processor may have its own private queue of ready processes.

2) Processor Affinity
Successive memory accesses by a process are often satisfied in cache memory. What happens if the process migrates to another processor? The contents of cache memory must be invalidated for the first processor, and the cache for the second processor must be repopulated. Most symmetric multiprocessor systems therefore try to avoid migration of processes from one processor to another and keep a process running on the same processor. This is called processor affinity.
a) Soft affinity: occurs when the system attempts to keep processes on the same processor but makes no guarantees.
b) Hard affinity: the process specifies that it is not to be moved between processors.
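As an illustration of hard affinity (an assumption: the notes do not name an API), the Linux-specific sched_setaffinity() call lets a process bind itself to a particular CPU.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Pin the calling process to CPU 0 (hard affinity). */
int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);                 /* start with an empty CPU set   */
    CPU_SET(0, &mask);               /* allow execution on CPU 0 only */

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {  /* 0 = current process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d is now bound to CPU 0\n", (int)getpid());
    return 0;
}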
3) Load balancing:
One processor should not sit idle while another is overloaded. Balancing can be achieved through push migration or pull migration.
Push migration: a separate process runs periodically (e.g. every 200 ms) and moves processes from heavily loaded processors onto less loaded processors.
Pull migration: idle processors take processes from the ready queues of the other processors.

Real-time scheduling:
Real-time scheduling is generally used in the case of multimedia operating systems. Here multiple processes compete for the CPU. How do we schedule processes A, B, C so that each one meets its deadlines? The general tendency is to make them preemptable, so that a process in danger of missing its deadline can preempt another process. When this process sends its frame, the preempted process can continue from where it had left off. Here throughput is not so significant; what is important is that tasks start and end as per their deadlines.
RATE MONOTONIC (RM) SCHEDULING ALGORITHM
Rate monotonic scheduling works on the principle of preemption. Preemption occurs on a given processor when a higher-priority task blocks a lower-priority task from execution. This blocking occurs due to the priority levels of the different tasks in a given task set. Rate monotonic is a preemptive algorithm, which means that if a task with a shorter period arrives during execution it will gain a higher priority and can block or preempt currently running tasks. In RM, priorities are assigned according to the time period: the priority of a task is inversely proportional to its time period. The task with the lowest time period has the highest priority and the task with the highest period has the lowest priority.
For example, we have a task set that consists of three tasks as follows:

Tasks   Execution time (Ci)   Time period (Ti)
T1      0.5                   3
T2      1                     4
T3      2                     6
Table 1. Task set

U = 0.5/3 + 1/4 + 2/6 = 0.167 + 0.25 + 0.333 = 0.75
As the processor utilization is less than 1 (100%), the task set is schedulable, and it also satisfies the rate monotonic schedulability test given below.
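The schedulability equation the notes refer to does not appear in the extracted text; the standard sufficient condition (the Liu and Layland bound) is the likely intent:

U = sum of Ci/Ti over all n tasks <= n(2^(1/n) - 1)

For n = 3 tasks the bound is 3(2^(1/3) - 1) ≈ 0.78, and since U = 0.75 <= 0.78 the task set above passes this test and is schedulable under RM.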

Figure 1. RM scheduling of the task set in Table 1.

The RM schedule of the task set given in Table 1 is shown in Figure 1. The explanation is as follows:
1. According to the RM scheduling algorithm, the task with the shorter period has higher priority, so T1 has the highest priority, T2 has intermediate priority and T3 has the lowest priority. At t=0 all the tasks are released. T1 has the highest priority so it executes first, till t=0.5.
2. At t=0.5 task T2 has higher priority than T3, so it executes first for one time unit till t=1.5. After its completion only one task remains in the system, T3, so it starts its execution and executes till t=3.
3. At t=3 T1 is released; as it has higher priority than T3 it preempts (blocks) T3 and executes till t=3.5. After that the remaining part of T3 executes.
4. At t=4 T2 is released and completes its execution, as there is no other task running in the system at this time.
5. At t=6 both T1 and T3 are released at the same time, but T1 has higher priority due to its shorter period, so it preempts T3 and executes till t=6.5; after that T3 starts running and executes till t=8.
6. At t=8 T2, with higher priority than T3, is released, so it preempts T3 and starts its execution.
7. At t=9 T1 is released again; it preempts T3 and executes first, and at t=9.5 T3 executes its remaining part. Similarly, the execution goes on.

Earliest Deadline First (EDF) Scheduling Algorithm
EDF is a dynamic algorithm: job priorities are re-evaluated at every decision point, and this re-evaluation is based on the relative deadline of a job or task - the closer the deadline, the higher the priority.
EDF has the following advantages:
1. Very flexible (arrival times and deadlines do not need to be known before implementation).
2. Moderate complexity.
3. Able to handle aperiodic jobs.
EDF has the following disadvantages:
1. Optimality requires preemptive jobs.
2. Not optimal on several processors.
3. Difficult to verify.
Example
Consider the following task set in Table 1, where P represents the period, e the execution time and D the deadline. Assume that the job priorities are re-evaluated at the release and deadline of a job.

        P     e     D
T1      2     0.5   2
T2      4     1     4
T3      5     1.5   5
Solution

Mark all deadlines related to all the tasks:
 First mark all deadlines related to the tasks as shown in Fig. 1. T1, T2 and T3 are represented in red, green and blue respectively. The schedule runs from 0 to 20 ms.
 At T = 0, T1 has the closest deadline, so schedule T1.
 At T = 0.5, T1 is completed; its next release time is at 2 ms. T2 is closer to its deadline, so T2 is scheduled next and executes for 1 ms.
 At T = 1.5, the T2 job is completed. T3 is next because it is closest to its deadline while T2 has not been released.
 At T = 2, a new instance of T1 is released; therefore T3 is interrupted, with 1 ms left to complete execution, and T1 executes.
 At T = 2.5, the only ready job is T3, which is scheduled until completion.
 At T = 4, a new instance of T1 is released, which executes for 0.5 ms.
 At T = 4.5, T1 is now completed, so T2 is now the task closest to its deadline and is scheduled.
 At T = 5.5, T3 is scheduled but is preempted at T = 6, so it runs for 0.5 ms.
 At T = 6, a new instance of T1 is released and therefore scheduled.
 At T = 6.5, T3 is closest to its deadline because new T1 and T2 jobs have not been released, so T3 is allowed to complete its execution, which takes 1 ms.
 At T = 8, a new instance of T1 is released and is scheduled.
 At T = 8.5, T2 is the task with the closest deadline and so is scheduled to run for its execution time.
 At T = 10, the next release of T1 is scheduled.
 At T = 10.5, the next job with the closest deadline is T3, because the next T2 job will be released at T = 12; so T3 is scheduled until completion.
 At T = 12, the next release of T1 is scheduled.
 At T = 12.5, T2 is scheduled as it is the job with the closest deadline.
 At T = 14, the next release of T1 is scheduled.
 At T = 15, the next release of T3 is scheduled because it is now the job with the closest deadline (the next releases of T1 and T2 are at 16 ms). T3 runs for 1 ms.
 At T = 16, T3 is preempted because a new release of T1, which has the closest deadline, is now available.
 At T = 16.5, T2 is the job with the closest deadline, so it is scheduled for the duration of its execution time.
 At T = 17.5, since T1 and T2 have completed, T3 resumes execution to complete its task, which ran for only 1 ms the last time. T3 completes execution at T = 18.
 At T = 18, a new instance of T1 is released and scheduled to run for its entire execution time.
 At T = 18.5, no job is released yet because the next releases of T1, T2 and T3 are at 20 ms.
 Fig. 2 shows the EDF schedule from T = 0 to T = 20.
UNIT-II

Inter-Process Communication:

Process synchronization refers to the idea that multiple processes join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions. Coordination of simultaneous processes to complete a task is known as process synchronization.
The critical-section problem
Consider a system consisting of n processes. Each process has a segment of code; this segment of code is said to be its critical section.
E.g.: a railway reservation system. Two persons from different stations want to reserve their tickets; the train number and destination are common, and the two persons try to get the reservation at the same time. Unfortunately, only one berth is available and both are trying for that berth. This illustrates the critical-section problem. The solution is that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: only one process can execute in its critical section at any time.
2. Progress: when no process is executing in a critical section for a piece of data, one of the processes wishing to enter a critical section for that data will be granted entry.
3. Bounded waiting: no process should wait for a resource for an infinite amount of time.

Critical section:
The portion in any program that accesses a shared resource is called a critical section (or critical region).

Peterson's solution:
Peterson's solution is one of the solutions to the critical-section problem involving two processes. This solution states that when one process is executing in its critical section, the other process executes the rest of its code, and vice versa.
Peterson's solution requires two shared data items:
1) turn: indicates whose turn it is to enter the critical section. If turn == i, then process i is allowed into its critical section.
2) flag: indicates when a process wants to enter the critical section. When process i wants to enter its critical section, it sets flag[i] to true.

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;                      /* busy wait */
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);
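A runnable sketch of the same algorithm for two threads, using C11 atomics; the atomics are an assumption added here because the plain-variable version above relies on a sequentially consistent machine, which modern compilers and CPUs do not guarantee for ordinary variables. Compile with -lpthread.

#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>
#include <pthread.h>

/* Shared data used by Peterson's algorithm. */
atomic_bool flag[2];      /* flag[i] == true: process i wants to enter      */
atomic_int  turn;         /* whose turn it is to defer to the other process */
long counter = 0;         /* the shared resource                            */

static void *proc(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);            /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                    /* busy wait */
        counter++;                               /* critical section */
        atomic_store(&flag[i], false);           /* exit section */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, proc, (void *)0L);
    pthread_create(&t1, NULL, proc, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}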

Synchronization hardware
In a uniprocessor multiprogrammed system, mutual exclusion can be obtained by disabling interrupts before the process enters its critical section and enabling them after it has exited the critical section:

    Disable interrupts
    Critical section
    Enable interrupts

Once a process is in its critical section it cannot be interrupted. This solution cannot be used in a multiprocessor environment, since processes run independently on different processors.
In multiprocessor systems, a TestAndSet instruction is provided; it completes execution without interruption. Each process, when entering its critical section, must set a lock to prevent other processes from entering their critical sections simultaneously, and must release the lock when exiting its critical section.

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);

When a process wants to enter the critical section and the value of lock is false, TestAndSet returns false and the value of lock becomes true. Thus, for other processes wanting to enter their critical sections, TestAndSet returns true and those processes busy wait until the first process exits its critical section and sets the value of lock to false.
• Definition:
boolean TestAndSet(boolean &lock)
{
    boolean temp = lock;
    lock = true;
    return temp;
}
Algorithm using TestAndSet
do {
    while (TestAndSet(&lock))
        ;   /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (TRUE);
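A minimal runnable sketch of the same idea using the C11 atomic_flag type, whose test-and-set operation is the portable equivalent of the hardware instruction described above (pthreads are used only to create two competing threads). Compile with -lpthread.

#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* hardware-supported test-and-set flag */
long counter = 0;                      /* shared resource */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&lock))
            ;                          /* spin (busy wait) until the lock is free */
        counter++;                     /* critical section */
        atomic_flag_clear(&lock);      /* release the lock */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}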

The Swap instruction can also be used for mutual exclusion.
Definition:
void Swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}

Algorithm
do {
    key = true;
    while (key == true)
        Swap(lock, key);
    /* critical section */
    lock = false;
    /* remainder section */
} while (1);

lock is a global variable initialized to false; each process has a local variable key. When a process wants to enter the critical section, the value of lock is false and key is true:
    lock = false
    key  = true
After the Swap instruction:
    lock = true
    key  = false
Since key is now false, the process exits the repeat-until loop and enters its critical section. While the process is in its critical section (lock = true), other processes wanting to enter the critical section will have:
    lock = true
    key  = true
Hence they will busy wait in the repeat-until loop until the process exits its critical section and sets the value of lock to false.
Semaphores
A semaphore is an integer variable that is accessed only through two operations:
1) wait: the wait operation decrements the count by 1. If the resulting value is negative, the process executing the wait operation is blocked.
2) signal: the signal operation increments the count by 1. If the value is not positive, then one of the processes blocked in a wait operation is unblocked.

wait(S) {
    while (S <= 0)
        ;        /* no-op (busy wait) */
    S--;
}

signal(S) {
    S++;
}

In a binary semaphore the count can be 0 or 1. The value of the semaphore is initialized to 1.

do {
    wait(mutex);
        // critical section
    signal(mutex);
        // remainder section
} while (TRUE);

The first process that executes the wait operation is immediately granted entry; sem.count becomes 0. If some other process wants the critical section and executes wait(), it is blocked, since the value becomes -1. When the first process exits the critical section it executes signal(); sem.count is incremented by 1, and a blocked process is removed from the semaphore queue and added to the ready queue.
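A minimal sketch of the same mutex pattern with POSIX semaphores is shown below. The shared variable name and the worker function are illustrative; sem_init/sem_wait/sem_post are the standard POSIX calls.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                    /* binary semaphore, initialized to 1 */
long shared_counter = 0;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);       /* wait(mutex): may block */
        shared_counter++;       /* critical section */
        sem_post(&mutex);       /* signal(mutex) */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&mutex, 0, 1);     /* shared between threads, initial value 1 */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared_counter = %ld\n", shared_counter);
    return 0;
}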

Problems:
1) Deadlock
Deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other blocked processes.
2) Starvation
One or more processes get blocked forever and never get a chance to take their turn in the critical section.
3) Priority inversion
If a low priority process is running, medium priority processes are waiting for the low priority process, and high priority processes are waiting for the medium priority processes, this is called priority inversion.
The two most common kinds of semaphores are counting semaphores and binary semaphores. Counting semaphores represent multiple resources, while binary semaphores, as the name implies, represent two possible states (generally 0 or 1; locked or unlocked).
Classic problems of synchronization
1) Bounded-buffer problem
Two processes share a common, fixed-size buffer. The producer puts information into the buffer and the consumer takes it out.
The problem arises when the producer wants to put a new item in the buffer but it is already full. The solution is for the producer to wait until the consumer has consumed at least one item. Similarly, if the consumer wants to remove an item from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts something in the buffer and wakes it up.

The structure of the producer process
do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (TRUE);

The structure of the consumer process
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer into nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (TRUE);
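A self-contained sketch of the bounded buffer with POSIX semaphores follows: "empty" counts free slots, "full" counts filled slots and "mutex" protects the buffer itself. BUFSZ, NITEMS and the circular buffer are illustrative choices, not taken from the notes.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define BUFSZ  5
#define NITEMS 20

int buffer[BUFSZ];
int in = 0, out = 0;            /* circular buffer indices */
sem_t empty, full, mutex;

void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < NITEMS; item++) {
        sem_wait(&empty);               /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;              /* add the item to the buffer */
        in = (in + 1) % BUFSZ;
        sem_post(&mutex);
        sem_post(&full);                /* one more filled slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int n = 0; n < NITEMS; n++) {
        sem_wait(&full);                /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];         /* remove an item from the buffer */
        out = (out + 1) % BUFSZ;
        sem_post(&mutex);
        sem_post(&empty);               /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, BUFSZ);         /* all slots initially free */
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}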

2) The readers-writers problem
A database is to be shared among several concurrent processes. Some processes may want only to read the database, while some may want to update it. If two readers access the shared data simultaneously there is no problem; if a writer and some other process access the database simultaneously, a problem arises. Writers must have exclusive access to the shared database while writing to it. This problem is known as the readers-writers problem.

First readers-writers problem:
No reader should be kept waiting unless a writer has already obtained permission to use the shared resource.
Second readers-writers problem:
Once a writer is ready, that writer performs its write as soon as possible.
A process wishing to modify the shared data must request the lock in write mode. Multiple processes are permitted to concurrently acquire a reader-writer lock in read mode, but only one process may acquire the lock for writing, as exclusive access is required for writers.

o Semaphore mutex initialized to 1
o Semaphore wrt initialized to 1
o Integer readcount initialized to 0

The structure of a writer process
do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);

The structure of a reader process
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);
3) Dining Philosophers problem
Five philosophers are seated on 5 chairs around a table. Each philosopher has a plate full of noodles. Each philosopher needs a pair of forks to eat. There are only 5 forks available all together; there is only one fork between any two plates of noodles.
In order to eat, a philosopher lifts two forks, one to his left and the other to his right. If he is successful in obtaining the two forks, he starts eating; after some time he stops eating and puts both forks down.

What if all the 5 philosophers decide to eat at the same time?

All the 5 philosophers would attempt to pick up two forks at the same time, so none of them succeeds.

One simple solution is to represent each fork with a semaphore. A philosopher tries to grab a fork by executing a wait() operation on that semaphore; he releases his forks by executing the signal() operation. This solution guarantees that no two neighbours are eating simultaneously.
However, suppose all 5 philosophers become hungry simultaneously and each grabs his left fork: each will be delayed forever (deadlock).

The structure of philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
Several remedies:
1) Allow at most 4 philosophers to be sitting simultaneously at the table.
2) Allow a philosopher to pick up his forks only if both forks are available.
3) An odd philosopher picks up first his left fork and then his right fork; an even philosopher picks up his right fork and then his left fork (see the sketch below).
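The following is a short sketch of remedy (3): odd-numbered philosophers pick up the left fork first, even-numbered ones the right fork first, which breaks the circular wait. The fork semaphores and the philosopher() function are illustrative assumptions.

#include <semaphore.h>

#define N 5
sem_t fork_sem[N];          /* each initialized to 1 with sem_init() */

void philosopher(int i)
{
    int left  = i;
    int right = (i + 1) % N;

    for (;;) {
        if (i % 2 == 1) {                    /* odd: left fork first */
            sem_wait(&fork_sem[left]);
            sem_wait(&fork_sem[right]);
        } else {                             /* even: right fork first */
            sem_wait(&fork_sem[right]);
            sem_wait(&fork_sem[left]);
        }
        /* eat */
        sem_post(&fork_sem[left]);
        sem_post(&fork_sem[right]);
        /* think */
    }
}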

MONITORS
The disadvantage of a semaphore is that it is an unstructured construct. Wait and signal operations can be scattered in a program and hence debugging becomes difficult.
A monitor is an object that contains both the data and the procedures needed to perform allocation of a shared resource. To accomplish resource allocation using monitors, a process must call a monitor entry routine. Many processes may want to enter the monitor at the same time, but only one process at a time is allowed to enter. Data inside a monitor may be either global to all routines within the monitor or local to a specific routine. Monitor data is accessible only within the monitor; there is no way for processes outside the monitor to access monitor data. This is a form of information hiding.
If a process calls a monitor entry routine while no other process is executing inside the monitor, the process acquires a lock on the monitor and enters it. While a process is in the monitor, other processes may not enter the monitor to acquire the resource. If a process calls a monitor entry routine while the monitor is locked, the monitor makes the calling process wait outside the monitor until the lock on the monitor is released. The process that has the resource will call a monitor entry routine to release the resource. This routine could free the resource and wait for another requesting process to arrive; the monitor entry routine calls signal to allow one of the waiting processes to enter the monitor and acquire the resource. Monitors give higher priority to waiting processes than to newly arriving ones.

Structure:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    procedure Pn (…) { …… }
    Initialization code (…) { … }
}
Processes can call procedures P1, P2, P3, …; they cannot access the local variables of the monitor.
Schematic view of a Monitor

Monitor with Condition Variables

A monitor provides condition variables along with two operations on them, i.e. wait and signal.

    wait(condition variable)
    signal(condition variable)
Every condition variable has an associated queue. A process calling wait on a particular condition variable is placed into the queue associated with that condition variable. A process calling signal on a particular condition variable causes a process waiting on that condition variable to be removed from the queue associated with it.
Solution to the producer-consumer problem using monitors:

monitor producerconsumer
condition full, empty;
int count;
procedure insert(item)
{
    if (count == MAX)
        wait(full);
    insert_item(item);
    count = count + 1;
    if (count == 1)
        signal(empty);
}
procedure remove()
{
    if (count == 0)
        wait(empty);
    remove_item(item);
    count = count - 1;
    if (count == MAX - 1)
        signal(full);
}
procedure producer()
{
    producerconsumer.insert(item);
}
procedure consumer()
{
    producerconsumer.remove();
}
Solution to the dining philosophers problem using monitors

A philosopher may pick up his forks only if both of them are available. A philosopher can eat only if his two neighbours are not eating. Some other philosopher may delay him when he is hungry.
Diningphilosophers.take_forks(): acquires the forks, which may block the process.
Eat noodles
Diningphilosophers.put_forks(): releases the forks.
Resuming processes within a monitor
If several processes are suspended on condition x and x.signal() is executed by some process, then how do we determine which of the suspended processes should be resumed next? One solution is FCFS (the process that has been waiting the longest is resumed first). In many circumstances such a simple technique is not adequate; an alternative solution is to assign priorities and wake up the process with the highest priority.

Resource allocation using a monitor

boolean inuse = false;
condition available;              // condition variable
monitor entry void getresource()
{
    if (inuse)                    // is the resource in use?
    {
        wait(available);          // wait until available is signaled
    }
    inuse = true;                 // indicate resource is now in use
}
monitor entry void returnresource()
{
    inuse = false;                // indicate resource is not in use
    signal(available);            // signal a waiting process to proceed
}
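A rough pthread sketch of the same resource-allocation monitor is shown below: the mutex plays the role of the monitor lock and the condition variable plays the role of "available". The function names mirror the notes, but the pthread mapping itself is an illustrative assumption.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  available    = PTHREAD_COND_INITIALIZER;
static bool inuse = false;

void getresource(void)
{
    pthread_mutex_lock(&monitor_lock);        /* enter the monitor */
    while (inuse)                             /* is the resource in use? */
        pthread_cond_wait(&available, &monitor_lock);
    inuse = true;                             /* resource is now in use */
    pthread_mutex_unlock(&monitor_lock);      /* leave the monitor */
}

void returnresource(void)
{
    pthread_mutex_lock(&monitor_lock);
    inuse = false;                            /* resource is free again */
    pthread_cond_signal(&available);          /* wake one waiting process */
    pthread_mutex_unlock(&monitor_lock);
}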
UNIT-IV
Memory Management:

Logical and Physical Addresses

An address generated by the CPU is commonly referred to as a Logical Address, whereas the address seen by the memory unit, that is, the one loaded into the memory address register of the memory, is commonly referred to as the Physical Address. Compile-time and load-time address binding generate identical logical and physical addresses. However, the execution-time address binding scheme results in differing logical and physical addresses.

The set of all logical addresses generated by a program is known as the Logical Address Space, whereas the set of all physical addresses corresponding to these logical addresses is the Physical Address Space. The run-time mapping from virtual address to physical address is done by a hardware device known as the Memory Management Unit. In the case of this mapping, the base register is known as the relocation register. The value in the relocation register is added to the address generated by a user process at the time it is sent to memory.
Let's understand this situation with the help of an example: if the base register contains the value 1000, then an attempt by the user to address location 0 is dynamically relocated to location 1000, and an access to location 346 is mapped to location 1346.
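A tiny illustrative sketch of this dynamic relocation, with a limit check, is given below; the numbers reuse the example above (relocation = 1000), while the limit value and the trap handling are assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>

unsigned relocation = 1000;   /* base/relocation register */
unsigned limit      = 500;    /* limit register: legal logical range 0..499 */

unsigned translate(unsigned logical)
{
    if (logical >= limit) {                   /* addressing error */
        fprintf(stderr, "trap: address %u out of range\n", logical);
        exit(1);
    }
    return logical + relocation;              /* physical address */
}

int main(void)
{
    printf("%u -> %u\n", 0u,   translate(0));    /* 0   -> 1000 */
    printf("%u -> %u\n", 346u, translate(346));  /* 346 -> 1346 */
    return 0;
}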
Memory-Management Unit (MMU)
A hardware device that maps virtual to physical addresses.

In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
The user program deals with logical addresses; it never sees the real physical addresses.
The user program never sees the real physical address space; it always deals with logical addresses. We therefore have two different types of addresses: logical addresses in the range (0 to max) and physical addresses in the range (R to R + max), where R is the value of the relocation register. The user generates only logical addresses and thinks that the process runs in locations 0 to max. As is clear from the above, the user program supplies only logical addresses; these logical addresses must be mapped to physical addresses before they are used.
Base and Limit Registers

A pair of base and limit registers define the logical address space.

HARDWARE PROTECTION WITH BASE AND LIMIT

Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can happen at three different stages:

Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: relocatable code must be generated if the memory location is not known at compile time.
Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Hardware support is needed for address maps (e.g., base and limit registers).

Multistep Processing of a User Program

Dynamic Loading
A routine is not loaded until it is called.
Better memory-space utilization; an unused routine is never loaded.
Useful when large amounts of code are needed to handle infrequently occurring cases.
No special support from the operating system is required; it is implemented through program design.

Dynamic Linking
Linking is postponed until execution time.
A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine, and executes the routine.
The operating system is needed to check whether the routine is in the process's memory address space. Dynamic linking is particularly useful for libraries.
This system is also known as shared libraries.
Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. The backing store is a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images. Roll out, roll in is a swapping variant used for priority-based scheduling algorithms: a lower-priority process is swapped out so a higher-priority process can be loaded and executed. The major part of swap time is transfer time; the total transfer time is directly proportional to the amount of memory swapped. Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and Windows).
The system maintains a ready queue of ready-to-run processes which have memory images on disk.

Schematic View of Swapping

Contiguous Allocation
Main memory is usually divided into two partitions:
The resident operating system, usually held in low memory with the interrupt vector.
User processes, then held in high memory. Relocation registers are used to protect user processes from each other, and from changing operating-system code and data.
The base register contains the value of the smallest physical address.
The limit register contains the range of logical addresses; each logical address must be less than the limit register.
The MMU maps the logical address dynamically.
Hardware Support for Relocation and Limit Registers
Multiple-partition allocation
Hole: a block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it.

Contiguous memory allocation is one of the efficient ways of allocating main memory to the processes. The memory is divided into two partitions, one for the Operating System and another for the user processes. The Operating System is placed in low or high memory depending on where the interrupt vector is placed. In contiguous memory allocation each process is contained in a single contiguous section of memory.

Memory protection

Memory protection is required to protect the Operating System from the user processes and user processes from one another. A relocation register contains the value of the smallest physical address, for example 100040. The limit register contains the range of logical addresses, for example 74600. Each logical address must be less than the limit register; if a logical address is greater than the limit register, there is an addressing error and it is trapped. The limit register hence offers memory protection.

The MMU, that is, the Memory Management Unit, maps the logical address dynamically, that is at run time, by adding the logical address to the value in the relocation register. This added value is the physical memory address which is sent to the memory.

The CPU scheduler selects a process for execution and a dispatcher loads the limit and relocation registers with the correct values. The advantage of the relocation register is that it provides an efficient way to allow the Operating System's size to change dynamically.

Memory allocation

There are two methods, namely the multiple (fixed) partition method and a general variable partition method. In the fixed partition method, the memory is divided into several fixed-size partitions. One process occupies each partition. This scheme is rarely used nowadays. The degree of multiprogramming depends on the number of partitions. The degree of multiprogramming is the number of programs that are in the main memory. The CPU is never left idle in multiprogramming. This was used by IBM OS/360 and called MFT. MFT stands for Multiprogramming with a Fixed number of Tasks.

A generalization of the fixed partition scheme is used in MVT. MVT stands for Multiprogramming with a Variable number of Tasks. The Operating System keeps track of which parts of memory are available and which are occupied. This is done with the help of a table that is maintained by the Operating System. Initially the whole of the available memory is treated as one large block of memory called a hole. The programs that enter the system are maintained in an input queue. From the hole, blocks of main memory are allocated to the programs in the input queue. If the hole is large, it is split into two: one part is allocated to the arriving process and the other is returned to the set of holes. As and when memory is allocated, a set of scattered holes forms. If holes are adjacent, they can be merged.
Now there comes a general dynamic storage allocation problem. The following are the solutions to the dynamic storage allocation problem.

 First fit: the first hole that is large enough is allocated. Searching for the holes starts from the beginning of the set of holes or from where the previous first-fit search ended.

 Best fit: the smallest hole that is big enough to accommodate the incoming process is allocated. If the available holes are ordered, then the searching can be reduced.

 Worst fit: the largest of the available holes is allocated.
Example:
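The original figure for this example is not reproduced in these notes; the following small illustration, with hypothetical hole and process sizes, shows how the three strategies differ.

Suppose memory contains holes of 100K, 500K, 200K, 300K and 600K (in that order), and processes of 212K, 417K, 112K and 426K arrive in that order.
 First fit: 212K goes into the 500K hole, 417K into the 600K hole, 112K into the 288K left over from the 500K hole; 426K must wait.
 Best fit: 212K goes into the 300K hole, 417K into the 500K hole, 112K into the 200K hole, and 426K into the 600K hole; every process is placed.
 Worst fit: 212K goes into the 600K hole, 417K into the 500K hole, 112K into the 388K left over from the 600K hole; 426K must wait.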

First fit and best fit are better than worst fit in terms of decreasing time and storage utilization; first fit is generally faster.
Fragmentation
The disadvantage of contiguous memory allocation is fragmentation. There are two types of fragmentation, namely internal fragmentation and external fragmentation.
Internal fragmentation

When memory is free internally, that is inside a process, but it cannot be used, we call that fragment an internal fragment. For example, say a hole of size 18464 bytes is available. Let the size of the process be 18462 bytes. If the hole is allocated to this process, then two bytes are left over which are not used. These two bytes which cannot be used form the internal fragmentation. The worst part of it is that the overhead to maintain these two bytes is more than two bytes.
External fragmentation
All three dynamic storage allocation methods discussed above suffer from external fragmentation. When the total memory space obtained by adding the scattered holes is sufficient to satisfy a request, but it is not available contiguously, this type of fragmentation is called external fragmentation.

The solution to this kind of external fragmentation is compaction. Compaction is a method by which all free memory that is scattered is placed together in one large memory block. It is to be noted that compaction cannot be done if relocation is done at compile time or assembly time. It is possible only if dynamic relocation is done, that is, relocation at execution time.

One more solution to external fragmentation is to allow the logical address space and the physical address space of a process to be non-contiguous. Paging and Segmentation are popular non-contiguous allocation methods.
Example for internal and external fragmentation

Paging
A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM. The paging technique plays an important role in implementing virtual memory.
Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of the process is measured in the number of pages.
Similarly, main memory is divided into small fixed-size blocks of (physical) memory called frames, and the size of a frame is kept the same as that of a page to have optimum utilization of the main memory and to avoid external fragmentation.
Paging Hardware

Address Translation
A page address is called a logical address and is represented by a page number and an offset.
A frame address is called a physical address and is represented by a frame number and an offset.
Logical Address = Page number + page offset
Physical Address = Frame number + page offset
A data structure called the page map table is used to keep track of the relation between a page of a process and a frame in physical memory.
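A minimal sketch of this translation is shown below: split a logical address into page number and offset, look the page up in a small page table, and rebuild the physical address. The 4-byte page size matches the paging example that follows, but the frame numbers in page_table[] are made up for illustration.

#include <stdio.h>

#define PAGE_SIZE 4                       /* 4-byte pages, as in the example below */

int page_table[4] = {5, 6, 1, 2};         /* page -> frame (illustrative values) */

unsigned translate(unsigned logical)
{
    unsigned page   = logical / PAGE_SIZE;    /* page number  */
    unsigned offset = logical % PAGE_SIZE;    /* page offset  */
    unsigned frame  = page_table[page];       /* page-table lookup */
    return frame * PAGE_SIZE + offset;        /* physical address  */
}

int main(void)
{
    for (unsigned a = 0; a < 16; a++)         /* 16-byte logical space */
        printf("logical %2u -> physical %2u\n", a, translate(a));
    return 0;
}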
Paging Model of Logical and Physical Memory
Paging Example

32-byte memory and 4-byte pages

Free Frames

When the system allocates a frame to any page, it translates the logical address into a physical address and creates an entry in the page table to be used throughout the execution of the program.
When a process is to be executed, its corresponding pages are loaded into any available memory frames. Suppose you have a program of 8 KB but your memory can accommodate only 5 KB at a given point in time; then the paging concept comes into the picture. When a computer runs out of RAM, the operating system (OS) will move idle or unwanted pages of memory to secondary memory to free up RAM for other processes, and bring them back when needed by the program.
This process continues during the whole execution of the program: the OS keeps removing idle pages from the main memory, writes them onto the secondary memory, and brings them back when required by the program.
Implementation of Page Table

The page table is kept in main memory.
The page-table base register (PTBR) points to the page table.
The page-table length register (PTLR) indicates the size of the page table.
In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB).
Paging Hardware With TLB

Memory Protection
Memory protection is implemented by associating a protection bit with each frame.
A valid-invalid bit is attached to each entry in the page table:
"valid" indicates that the associated page is in the process's logical address space, and is thus a legal page.
"invalid" indicates that the page is not in the process's logical address space.
Valid (v) or Invalid (i) Bit in a Page Table
Shared Pages
Shared code
One copy of read-only (reentrant) code is shared among processes (i.e., text editors, compilers, window systems).
Shared code must appear in the same location in the logical address space of all processes.
Private code and data
Each process keeps a separate copy of the code and data.

The pages for the private code and data can appear anywhere in the logical address space.
Shared Pages Example
Structure of the Page Table

Hierarchical Paging
Hashed Page Tables
Inverted Page Tables

Hierarchical Page Tables

Break up the logical address space into multiple page tables.

A simple technique is a two-level page table.
Two-Level Page-Table Scheme

Two-Level Paging Example
A logical address (on a 32-bit machine with a 1K page size) is divided into:
a page number consisting of 22 bits
a page offset consisting of 10 bits
Since the page table is paged, the page number is further divided into:
a 12-bit outer page number p1
a 10-bit inner page number p2
Thus, a logical address is as follows:

    page number          page offset
    p1          p2       d
    12 bits     10 bits  10 bits

where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.
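A small sketch of how the 32-bit logical address from this example can be split with shifts and masks is given below (1K pages: 12-bit p1, 10-bit p2, 10-bit offset). The sample address is arbitrary and the outer/inner page-table lookups are omitted.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t logical = 0x12345678;              /* an arbitrary example address */

    uint32_t d  =  logical        & 0x3FF;      /* low 10 bits: offset          */
    uint32_t p2 = (logical >> 10) & 0x3FF;      /* next 10 bits: inner page no. */
    uint32_t p1 = (logical >> 20) & 0xFFF;      /* top 12 bits: outer page no.  */

    printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
    return 0;
}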
Address-Translation Scheme

Three-level Paging Scheme

Hashed Page Tables

Common in address spaces > 32 bits.
The virtual page number is hashed into a page table.
This page table contains a chain of elements hashing to the same location.
Virtual page numbers are compared in this chain, searching for a match.
If a match is found, the corresponding physical frame is extracted.

Hashed Page Table
Inverted Page Table

One entry for each real page (frame) of memory.

Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page.
This decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs.
A hash table can be used to limit the search to one — or at most a few — page-table entries.
Inverted Page Table Architecture

Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging:
 Paging reduces external fragmentation, but still suffers from internal fragmentation.
 Paging is simple to implement and is regarded as an efficient memory management technique.
 Due to the equal size of the pages and frames, swapping becomes very easy.
 The page table requires extra memory space, so it may not be good for a system having a small RAM.

Segmentation
Segmentation is a memory-management scheme that supports the user view of memory. A program is a collection of segments.
 A segment is a logical unit such as:
 main program
 procedure
 function, method
 object
 local variables, global variables
 common block
 stack
 symbol table
 arrays

User's View of a Program

Segmentation Architecture
A logical address consists of a two tuple:
o <segment-number, offset>
Segment table – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table's location in memory.
Segment-table length register (STLR) indicates the number of segments used by a program; a segment number s is legal if s < STLR.
Protection
With each entry in the segment table associate:
validation bit = 0 ⇒ illegal segment
read/write/execute privileges
Protection bits are associated with segments; code sharing occurs at segment level.
Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
A segmentation example is shown in the following diagram.
Segmentation Hardware

Example of Segmentation

Segmentation with paging
Instead of an actual memory location, the segment information includes the address of a page table for the segment. When a program references a memory location, the offset is translated to a memory address using the page table. A segment can be extended simply by allocating another memory page and adding it to the segment's page table.
An implementation of virtual memory on a system using segmentation with paging usually only moves individual pages back and forth between main memory and secondary storage, similar to a paged non-segmented system. Pages of the segment can be located anywhere in main memory and need not be contiguous. This usually results in a reduced amount of input/output between primary and secondary storage and reduced memory fragmentation.

Virtual Memory
Virtual memory is a space where large programs can store themselves in the form of pages during their execution, and only the required pages or portions of processes are loaded into the main memory. This technique is useful because large virtual memory is provided for user programs when only a very small physical memory is available.
In real scenarios, most processes never need all their pages at once, for the following reasons:
 Error handling code is not needed unless that specific error occurs, and some errors are quite rare.
 Arrays are often over-sized for worst-case scenarios, and only a small fraction of the arrays is actually used in practice.
 Certain features of certain programs are rarely used.

Fig. Diagram showing virtual memory that is larger than physical memory.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system. Demand segmentation can also be used to provide virtual memory.

Benefits of having Virtual Memory:
1. Large programs can be written, as the virtual space available is huge compared to physical memory.
2. Less I/O is required, which leads to faster and easier swapping of processes.
3. More physical memory is available, as programs are stored on virtual memory, so they occupy very little space in actual physical memory.

Demand Paging

Demand paging is similar to a paging system with swapping (Fig 5.2). When we want to execute a process, we swap it into memory, but rather than swapping the entire process into memory we load pages on demand.

When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

Hardware support is required to distinguish between those pages that are in memory and those pages that are on the disk, using the valid-invalid bit scheme, where valid and invalid pages can be identified by checking the bit. Marking a page invalid will have no effect if the process never attempts to access that page. While the process executes and accesses pages that are memory resident, execution proceeds normally.
Fig. Transfer of a paged memory to contiguous disk space

Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory.

Initially only those pages are loaded which will be required by the process immediately.
The pages that are not moved into memory are marked as invalid in the page table. For an invalid entry the rest of the table is empty. Pages that are loaded in memory are marked as valid, along with the information about where to find the swapped-out page.
When the process requires any page that is not loaded into memory, a page fault trap is triggered and the following steps are followed:
1. The memory address requested by the process is first checked, to verify the request made by the process.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly from a free-frame list, where the required page will be moved.
4. A new operation is scheduled to move the necessary page from disk to the specified memory location. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to valid.

Fig. Steps in handling a page fault

6. The instruction that caused the page fault must now be restarted from the beginning.
There are cases when no pages are loaded into memory initially; pages are only loaded when demanded by the process by generating page faults. This is called Pure Demand Paging.
The only major issue with Demand Paging is that after a new page is loaded, the process must restart execution from the instruction that caused the fault. This is not a big issue for small programs, but for larger programs it affects performance drastically.

What is a dirty bit?
When a page is modified by the CPU and not yet written back to storage, it is marked with a dirty bit. This bit is present in the memory cache or the virtual storage space.
Advantages of Demand Paging:
1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming; there is no limit on the degree of multiprogramming.
Disadvantages of Demand Paging:
1. The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
2. There is a lack of explicit constraints on a job's address space size.

Page Replacement
As studied in Demand Paging, only certain pages of a process are loaded initially into memory. This allows us to get a greater number of processes into memory at the same time. But what happens when a process requests more pages and no free memory is available to bring them in? The following steps can be taken to deal with this problem:
1. Put the process in the wait queue until any other process finishes its execution, thereby freeing frames.
2. Or, remove some other process completely from memory to free frames.
3. Or, find some pages that are not being used right now and move them to the disk to get free frames. This technique is called page replacement and is most commonly used. We have some great algorithms to carry out page replacement efficiently.
Page Replacement Algorithms
Page replacement algorithms are the techniques using which an Operating System decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used for allocation, either because no pages are available or because the number of free pages is lower than the number of required pages.
When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires I/O completion. This process determines the quality of the page replacement algorithm: the lesser the time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to select which pages should be replaced to minimize the total number of page misses, while balancing this with the costs of primary storage and of processor time for the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference.
The latter choice produces a large amount of data, where we note two things.
 For a given page size, we need to consider only the page number, not the entire address.
 If we have a reference to a page p, then any immediately following references to page p will never cause a page fault. Page p will be in memory after the first reference; the immediately following references will not fault.
 For example, consider the following sequence of addresses: 123, 215, 600, 1234, 76, 96.
 If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.
First In First Out (FIFO) algorithm
 The oldest page in main memory is the one which will be selected for replacement.
 Easy to implement: keep a list, replace pages from the tail and add new pages at the head. A small simulation sketch is given below.
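The following compact sketch simulates FIFO replacement on the reference string 1, 2, 6, 12, 0, 0 derived above, with a hypothetical 3 frames; the circular "next victim" index plays the role of the FIFO queue.

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int ref[] = {1, 2, 6, 12, 0, 0};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frames[NFRAMES] = {-1, -1, -1};   /* -1 means the frame is empty */
    int victim = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[victim] = ref[i];       /* replace the oldest page */
            victim = (victim + 1) % NFRAMES;
            faults++;
        }
        printf("ref %2d -> %s\n", ref[i], hit ? "hit" : "page fault");
    }
    printf("total page faults: %d\n", faults);
    return 0;
}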

Optimal Page algorithm
 An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An optimal page-replacement algorithm exists, and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of time; use the time when a page is to be used.
Least Recently Used (LRU) algorithm
 The page which has not been used for the longest time in main memory is the one which will be selected for replacement.
 Easy to implement: keep a list, replace pages by looking back into time.
Second chance page replacement algorithm
 The Second Chance replacement policy is also called the Clock replacement policy.
 In the Second Chance page replacement policy, the candidate pages for removal are considered in a round robin manner, and a page that has been accessed between consecutive considerations will not be replaced.
The page replaced is the one that, considered in a round robin manner, has not been accessed since its last consideration.
 Implementation:
o Add a "second chance" bit to each memory frame.
o Each time a memory frame is referenced, set the "second chance" bit to ONE (1) - this will give the frame a second chance.
o A new page read into a memory frame has the second chance bit set to ZERO (0).
o When you need to find a page for removal, look in a round robin manner in the memory frames:
 If the second chance bit is ONE, reset its second chance bit (to ZERO) and continue.
 If the second chance bit is ZERO, replace the page in that memory frame.
 The following figure shows the behavior of the program in paging using the Second Chance page replacement policy:

o We can see notably that the bad replacement decision made by FIFO is not present in Second Chance.
o There are a total of 9 page read operations to satisfy the total of 18 page requests - just as good as the more computationally expensive LRU method.
NRU (Not Recently Used) Page Replacement Algorithm - This algorithm requires that each page have two additional status bits, 'R' and 'M', called the reference bit and change bit respectively. The reference bit (R) is automatically set to 1 whenever the page is referenced. The change bit (M) is set to 1 whenever the page is modified. These bits are stored in the PMT and are updated on every memory reference. When a page fault occurs, the memory manager inspects all the pages and divides them into 4 classes based on the R and M bits.
 Class 1: (0,0) − neither recently used nor modified - the best page to replace.
 Class 2: (0,1) − not recently used but modified - the page will need to be written out before replacement.
 Class 3: (1,0) − recently used but clean - probably will be used again soon.
 Class 4: (1,1) − recently used and modified - probably will be used again, and a write-out will be needed before replacing it.
This algorithm removes a page at random from the lowest numbered non-empty class.
Unit V

File Management:
File System

File Concept:

Computers can store information on various storage media such as magnetic disks, magnetic tapes, and optical disks. The physical storage is converted into a logical storage unit by the operating system. The logical storage unit is called a FILE. A file is a collection of similar records. A record is a collection of related fields that can be treated as a unit by some application program. A field is some basic element of data; any individual field contains a single value. A database is a collection of related data.

Student name   Marks in sub1   Marks in sub2   Fail/Pass
KUMA           85              86              P
LAKSH          93              92              P

DATA FILE

Student name, Marks in sub1, sub2, Fail/Pass are fields. The collection of fields is called a RECORD.
RECORD:
LAKSH   93   92   P
A collection of these records is called a data file.

FILE ATTRIBUTES:

1. Name: A file is named for the convenience of the user and is referred to by its name. A name is usually a string of characters.
2. Identifier: This unique tag, usually a number, identifies the file within the file system.
3. Type: Files are of many types. The type depends on the extension of the file.

Example:
.exe  Executable file
.obj  Object file
.src  Source file
4. Location: This information is a pointer to a device and to the location of the file on that device.

5. Size: The current size of the file (in bytes, words, blocks).
6. Protection: Access control information determines who can do reading, writing, executing and so on.
7. Time, Date, User identification: This information may be kept for creation, last modification and last use.

FILE OPERATIONS

1. Creating a file: Two steps are needed to create a file. They are:
 Check whether the space is available or not.
 If the space is available, then make an entry for the new file in the directory. The entry includes the name of the file, the path of the file, etc.
2. Writing a file: To write a file, we have to know two things: one is the name of the file and the second is the information or data to be written to the file. The system searches the given location for the file. If the file is found, the system must keep a write pointer to the location in the file where the next write is to take place.
3. Reading a file: To read a file, first of all we search the directories for the file. If the file is found, the system needs to keep a read pointer to the location in the file where the next read is to take place. Once the read has taken place, the read pointer is updated.
4. Repositioning within a file: The directory is searched for the appropriate entry and the current file position pointer is repositioned to a given value. This operation is also called a file seek.
5. Deleting a file: To delete a file, first of all search the directory for the named file, then release the file space and erase the directory entry.
6. Truncating a file: To truncate a file, remove the file contents only; the attributes remain as they are.

FILE TYPES: The name of the file is split into 2 parts. One is the name and the second is the extension. The file type depends on the extension of the file.

File Type         Extension              Purpose
Executable        .exe, .com, .bin       Ready-to-run machine-language program
Source code       .c, .cpp, .asm         Source code in various languages
Object            .obj, .o               Compiled, machine language, not linked
Batch             .bat, .sh              Commands to the command interpreter
Text              .txt, .doc             Textual data, documents
Word processor    .doc, .wp, .rtf        Various word-processor formats
Library           .lib, .dll             Libraries of routines for programmers
Print or View     .pdf, .jpg             Binary file in a format for printing or viewing
Archive           .arc, .zip             Related files grouped into one file, compressed
Multimedia        .mpeg, .mp3, .avi      Binary file containing audio or audio/video information

FILE STRUCTURE

File types can also be used to indicate the internal structure of the file. The operating system requires that an executable file have a specific structure so that it can determine where in memory to load the file and what the location of the first instruction is. If the OS supports multiple file structures, the resulting size of the OS is large: if the OS defines 5 different file structures, it needs to contain the code to support these 5 file structures. Every OS must support at least one structure, that of an executable file, so that the system is able to load and run programs.

INTERNAL FILE STRUCTURE

The UNIX OS defines all files to be simply streams of bytes. Each byte is individually addressable by its offset from the beginning or end of the file. In this case, the logical record size is 1 byte. The file system automatically packs and unpacks bytes into physical disk blocks, say 512 bytes per block.

The logical record size, physical block size, and packing technique determine how many logical records are in each physical block. The packing can be done by the user's application program or by the OS. A file may be considered a sequence of blocks. If each block were 512 bytes, a file of 1949 bytes would be allocated 4 blocks (2048 bytes); the last 99 bytes would be wasted. This is called internal fragmentation. All file systems suffer from internal fragmentation, and the larger the block size, the greater the internal fragmentation.
FILE ACCESS METHODS

Files store information. This information must be accessed and read into computer memory. There are many ways in which the information in a file can be accessed.

1. Sequential file access:

Information in the file is processed in order, i.e. one record after the other. Magnetic tapes support this type of file access.
Eg: In a file consisting of 100 records, the current position of the read/write head is the 45th record; suppose we want to read the 75th record. Then it is accessed sequentially from 45, 46, 47, ..., 74, 75, so the read/write head traverses all the records between 45 and 75.

2. Direct access:

Direct access is also called relative access. Here records can be read/written randomly, without any order. The direct access method is based on a disk model of a file, because disks allow random access to any file block.
Eg: A disk consists of 256 blocks and the position of the read/write head is at the 95th block. The block to be read or written is the 250th block. Then we can access the 250th block directly without any restrictions.

Eg: A CD consists of 10 songs; at present we are listening to song 3. If we want to listen to song 10, we can shift to 10 directly.

3. Indexed Sequential File access

The main disadvantage of the sequential file is that it takes more time to access a record. Here, records are organized in sequence based on a key field.
Eg: Consider a file consisting of 60,000 records. The master index divides the total records into 6 blocks, each block consisting of a pointer to a secondary index. The secondary index divides its 10,000 records into 10 indexes. Each index consists of a pointer to its original location. Each record in the index file consists of 2 fields: a key field and a pointer field.

DIRECTORY STRUCTURE
Sometimes the file system contains millions of files; in that situation it is very hard to manage the files. To manage these files, group them and load one group into one partition.

Each partition is called a directory. A directory structure provides a mechanism for organizing many files in the file system.

OPERATIONS ON THE DIRECTORIES:
1. Search for a file: Search the directory structure for the required file.

2. Create a file: New files need to be created and added to the directory.

3. Delete a file: When a file is no longer needed, we want to remove it from the directory.

4. List a directory: We can obtain the list of files in the directory.

5. Rename a file: Whenever we need to change the name of a file, we can change the name.
6. Traverse the file system: We may need to access every directory and every file within a directory structure, so we can traverse the file system.
The various directory structures

1. Single level directory:

The directory system has only one directory; it contains all the files. Sometimes it is said to be the root directory.

E.g.: Here the directory contains 4 files (A, B, C, D). The advantage of this scheme is its simplicity and the ability to locate files quickly. The problem is that different users may accidentally use the same names for their files.

E.g.: If user 1 creates a file called sample and then later user 2 creates a file called sample, then user 2's file will overwrite user 1's file. That is why it is not used in multiuser systems.

2. Two level directory:

The problem in a single level directory is that different users may accidentally use the same name for their files. To avoid this problem each user needs a private directory, so that names chosen by one user don't interfere with names chosen by a different user.

The root directory is the first level directory; user1, user2, user3 are user-level directories and A, B, C are files.

3. Tree structured directory:

A two level directory eliminates name conflicts among users but it is not satisfactory for users with a large number of files. To avoid this, create sub-directories and load the same type of files into a sub-directory. So, here each user can have as many directories as needed.
There are 2 types of path:

1. Absolute path
2. Relative path
Absolute path: Begins with the root and follows a path down to the specified file, giving the directory names on the path.
Relative path: A path from the current directory.

4. Acyclic graph directory:

Multiple users are working on a project; the project files can be stored in a common sub-directory of the multiple users. This type of directory is called an acyclic graph directory. The common directory is declared a shared directory.
The graph contains no cycles. With shared files, changes made by one user are made visible to the other users. A file may now have multiple absolute paths. When a shared directory/file is deleted, all pointers to the directory/file also have to be removed.

5. General graph directory:
When we add links to an existing tree structured directory, the tree structure is destroyed, and the result is a simple graph structure.

Advantages: Traversing is easy. Easy sharing is possible.
File system structure:
Disks provide the bulk of secondary storage on which a file system is maintained. They have 2 characteristics that make them a convenient medium for storing multiple files.
1. A disk can be rewritten in place. It is possible to read a block from the disk, modify the block, and write it back into the same place.
2. A disk can directly access any block of information it contains.

Application Programs

Logical File System

File Organisation Module

Basic File System

I/O Control

Devices

I/O Control: consists of device drivers and interrupt handlers to transfer information between the main memory and the disk system. The device driver writes specific bit patterns to special locations in the I/O controller's memory to tell the controller which device location to act on and what actions to take.
The Basic File System needs only to issue commands to the appropriate device driver to read and write physical blocks on the disk. Each physical block is identified by its numeric disk address (e.g. drive 1, cylinder 73, track 2, sector 10).

The File Organization Module knows about files and their logical blocks and physical blocks. By knowing the type of file allocation used and the location of the file, the file organization module can translate logical block addresses to physical addresses for the basic file system to transfer. Each file's logical blocks are numbered from 0 to n, so the physical blocks containing the data usually do not match the logical numbers; a translation is needed to locate each block.
The Logical File System manages all file system structure except the actual data (contents of the file). It maintains file structure via file control blocks. A file control block (inode in UNIX file systems) contains information about the file: ownership, permissions, and the location of the file contents.

File System Implementation:

Overview:

A Boot Control Block (per volume) can contain information needed by the system to boot an OS from that volume. If the disk does not contain an OS, this block can be empty.

A Volume Control Block (per volume) contains volume (or partition) details, such as the number of blocks in the partition, the size of the blocks, a free block count and free block pointers, and a free FCB count and FCB pointers.
A Typical File Control Block

A Directory Structure (per file system) is used to organize the files. A per-file FCB contains many details about the file.
Once a file has been created, it can be used for I/O. First, it must be opened. The open() call passes a file name to the logical file system. The open() system call first searches the system-wide open-file table to see if the file is already in use by another process. If it is, a per-process open-file table entry is created pointing to the existing system-wide open-file table. If the file is not already open, the directory structure is searched for the given file name. Once the file is found, the FCB is copied into a system-wide open-file table in memory. This table not only stores the FCB but also tracks the number of processes that have the file open.
Next, an entry is made in the per-process open-file table, with a pointer to the entry in the system-wide open-file table and some other fields. These other fields include a pointer to the current location in the file (for the next read/write operation) and the access mode in which the file is open. The open() call returns a pointer to the appropriate entry in the per-process file-system table. All file operations are performed via this pointer. When a process closes the file, the per-process table entry is removed and the system-wide entry's open count is decremented. When all users that have opened the file close it, any updated metadata is copied back to the disk-based directory structure and the system-wide open-file table entry is removed.
The system-wide open-file table contains a copy of the FCB of each open file, among other information. The per-process open-file table contains a pointer to the appropriate entry in the system-wide open-file table, among other information.
Allocation Methods – Contiguous
An allocation method refers to how disk blocks are allocated for files:
Contiguous allocation – each file occupies a set of contiguous blocks
o Best performance in most cases
o Simple – only the starting location (block #) and length (number of blocks) are required
o Problems include finding space for the file, knowing the file size, external fragmentation, and the need for compaction off-line (downtime) or on-line

Linked
Linked allocation – each file is a linked list of blocks
o The file ends at the nil pointer
o No external fragmentation
o Each block contains a pointer to the next block
o No compaction, no external fragmentation
o The free-space management system is called when a new block is needed
o Efficiency can be improved by clustering blocks into groups, but this increases internal fragmentation
o Reliability can be a problem
o Locating a block can take many I/Os and disk seeks
FAT (File Allocation Table) variation
o Beginning of the volume has a table, indexed by block number
o Much like a linked list, but faster on disk and cacheable
File-Allocation Table

Indexed allocation
o Each file has its own index block(s) of pointers to its data blocks
Free-Space Management
The file system maintains a free-space list to track available blocks/clusters.
Linked list (free list)
o Cannot get contiguous space easily
o No waste of space
o No need to traverse the entire list

1. Bitmap or Bit vector –

A Bitmap or Bit Vector is a series or collection of bits where each bit corresponds to a disk block. The bit can take two values, 0 and 1: 0 indicates that the block is allocated and 1 indicates a free block. The given instance of disk blocks on the disk in Figure 1 (where green blocks are allocated) can be represented by a bitmap of 16 bits as: 0000111000000110.
Advantages –
 Simple to understand.
 Finding the first free block is efficient. It requires scanning the words (a group of 8 bits) in a bitmap for a non-zero word. (A 0-valued word has all bits 0.) The first free block is then found by scanning for the first 1 bit in the non-zero word. A small sketch of this scan is given below.
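The following illustrative sketch implements the scan just described: skip whole zero-valued bytes, then find the first 1 bit in the first non-zero byte. The bit convention follows the notes (1 = free block); the 16-bit example map is the one given above.

#include <stdio.h>
#include <stdint.h>

int first_free_block(const uint8_t *bitmap, int nbytes)
{
    for (int i = 0; i < nbytes; i++) {
        if (bitmap[i] == 0)                 /* all 8 blocks allocated */
            continue;
        for (int b = 0; b < 8; b++)         /* scan bits, MSB first */
            if (bitmap[i] & (0x80 >> b))
                return i * 8 + b;           /* block number of the first free block */
    }
    return -1;                              /* no free block */
}

int main(void)
{
    /* the 16-bit map 0000111000000110 from the example above */
    uint8_t bitmap[2] = {0x0E, 0x06};
    printf("first free block = %d\n", first_free_block(bitmap, 2));  /* prints 4 */
    return 0;
}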

Linked Free Space List on Disk

In this approach, the free disk blocks are linked together, i.e. a free block contains a pointer to the next free block. The block number of the very first free disk block is stored at a separate location on disk and is also cached in memory.
Grouping
Modify the linked list to store the addresses of the next n-1 free blocks in the first free block, plus a pointer to the next block that contains free-block pointers (like this one).
An advantage of this approach is that the addresses of a group of free disk blocks can be found easily.
Counting
Because space is frequently contiguously used and freed (with contiguous allocation, extents, or clustering), keep the address of the first free block and a count of the following free blocks. The free-space list then has entries containing addresses and counts.

Directory Implementation
1. Linear List
In this algorithm, all the files in a directory are maintained as a singly linked list. Each file contains the pointers to the data blocks which are assigned to it and to the next file in the directory.
Characteristics
1. When a new file is created, the entire list is checked to see whether the new file name matches an existing file name or not. If it doesn't exist, the file can be created at the beginning or at the end. Therefore, searching for a unique name is a big concern because traversing the whole list takes time.
2. The list needs to be traversed for every operation (creation, deletion, updating, etc.) on the files, therefore the system becomes inefficient.

2. Hash Table
To overcome the drawbacks of the singly linked list implementation of directories, there is an alternative approach, the hash table. This approach suggests using a hash table along with the linked lists.
A key-value pair for each file in the directory is generated and stored in the hash table. The key can be determined by applying the hash function on the file name, while the value points to the corresponding file stored in the directory.
Now searching becomes efficient, because the entire list no longer needs to be searched on every operation. Only the hash table entries are checked using the key, and if an entry is found, then the corresponding file is fetched using the value.
Efficiency and Performance

Efficiency depends on:
● Disk allocation and directory algorithms
● Types of data kept in the file's directory entry
Performance
● Disk cache – a separate section of main memory for frequently used blocks
● free-behind and read-ahead – techniques to optimize sequential access
● improve PC performance by dedicating a section of memory as a virtual disk, or RAM disk

I/O Hardware: I/O devices
Input/output devices are the devices that are responsible for the input/output operations in a computer system.
Basically there are the following two types of input/output devices:
 Block devices
 Character devices
Block Devices
A block device stores information in blocks with a fixed size, each with its own address.
It is possible to read/write each and every block independently in the case of a block device.
In the case of a disk, it is always possible to seek to another cylinder and then wait for the required block to rotate under the head, no matter where the arm currently is. Therefore, a disk is a block addressable device.
Character Devices
A character device accepts/delivers a stream of characters without regard to any block structure.
A character device isn't addressable.
A character device doesn't have any seek operation.
There are many character devices present in a computer system, such as printers, mice and network interfaces; these are common character devices.
Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular device. The Operating System takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component, where the electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate with the Operating System. A device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit stream to a block of bytes and perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is connected to a device controller. Following is a model for connecting the CPU, memory, controllers, and I/O devices where the CPU and device controllers all use a common bus for communication.

Synchronous vs asynchronous I/O
 Synchronous I/O − In this scheme CPU execution waits while I/O proceeds.
 Asynchronous I/O − I/O proceeds concurrently with CPU execution.
Communication to I/O Devices
The CPU must have a way to pass information to and from an I/O device. There are three approaches available for communication between the CPU and a device:
 Special Instruction I/O
 Memory-mapped I/O
 Direct memory access (DMA)
Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices. These instructions typically allow data to be sent to an I/O device or read from an I/O device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The device is connected directly to certain main memory locations so that the I/O device can transfer blocks of data to/from memory without going through the CPU.
While using memory-mapped I/O, the OS allocates a buffer in memory and informs the I/O device to use that buffer to send data to the CPU. The I/O device operates asynchronously with the CPU, and interrupts the CPU when finished.
The advantage of this method is that every instruction which can access memory can be used to manipulate an I/O device. Memory-mapped I/O is used for most high-speed I/O devices like disks and communication interfaces.

Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte is transferred. If a fast device such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this overhead.
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without CPU involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is only involved at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.
Direct Memory Access needs special hardware called a DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus. The controllers are programmed with source and destination pointers (where to read/write the data), counters to track the number of transferred bytes, and settings, which include I/O and memory types, interrupts and states for the CPU cycles.
The operating system uses the DMA hardware as follows −
Step    Description

1       The device driver is instructed to transfer disk data to a buffer at address X.

2       The device driver then instructs the disk controller to transfer the data to the buffer.

3       The disk controller starts the DMA transfer.

4       The disk controller sends each byte to the DMA controller.

5       The DMA controller transfers the bytes to the buffer, increases the memory address, and decreases the counter C until C becomes zero.

6       When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.
I/O software is often organized in the following layers −
 User Level Libraries − These provide a simple interface to the user program to perform input and output. For example, stdio is a library provided by the C and C++ programming languages.
 Kernel Level Modules − These provide the device drivers to interact with the device controllers and the device-independent I/O modules used by the device drivers.
 Hardware − This layer includes the actual hardware and the hardware controllers which interact with the device drivers and make the hardware alive.
A key concept in the design of I/O software is that it should be device independent: it should be possible to write programs that can access any I/O device without having to specify the device in advance. For example, a program that reads a file as input should be able to read a file on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify the program for each different device.
Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a particular device. The operating system takes help from device drivers to handle all I/O devices. Device drivers encapsulate device-dependent code and implement a standard interface in such a way that the code contains device-specific register reads/writes. A device driver is generally written by the device's manufacturer and delivered along with the device on a CD-ROM.
A device driver performs the following jobs −
 Accept requests from the device-independent software above it.
 Interact with the device controller to perform I/O and carry out the required error handling.
 Make sure that the request is executed successfully.
How a device driver handles a request is as follows: Suppose a request comes to read block N. If the driver is idle at the time the request arrives, it starts carrying out the request immediately. Otherwise, if the driver is already busy with some other request, it places the new request in the queue of pending requests.
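A minimal C sketch of that policy, assuming a hypothetical hw_start_read() routine that actually starts the device, might look like this:

/* Sketch of the request-handling policy described above: start a read
 * immediately if the driver is idle, otherwise queue it. The request struct
 * and hw_start_read() are hypothetical placeholders. */
#include <stdbool.h>
#include <stddef.h>

struct request {
    long block;                  /* block number N to read                   */
    struct request *next;        /* singly linked queue of pending requests  */
};

static struct request *pending_head, *pending_tail;
static bool driver_busy;

static void hw_start_read(long block) { (void)block; /* assumed: kicks off device I/O */ }

void driver_read_block(struct request *req)
{
    if (!driver_busy) {          /* idle: carry out the request immediately  */
        driver_busy = true;
        hw_start_read(req->block);
    } else {                     /* busy: append to the pending queue        */
        req->next = NULL;
        if (pending_tail)
            pending_tail->next = req;
        else
            pending_head = req;
        pending_tail = req;
    }
}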

Interrupt handlers
An interrupt handler, also known as an interrupt service routine or ISR, is a piece of software, or more specifically a callback function, in an operating system or more specifically in a device driver, whose execution is triggered by the reception of an interrupt.
When the interrupt happens, the interrupt procedure does whatever it has to in order to handle the interrupt, updates data structures and wakes up the process that was waiting for an interrupt to happen.
The interrupt mechanism accepts an address ─ a number that selects a specific interrupt handling routine/function from a small set. In most architectures, this address is an offset stored in a table called the interrupt vector table. This vector contains the memory addresses of specialized interrupt handlers.
Device-Independent I/O Software
The basic function of the device-independent software is to perform the I/O functions that are common to all devices and to provide a uniform interface to the user-level software. Though it is difficult to write completely device-independent software, we can write some modules which are common among all the devices. Following is a list of functions of device-independent I/O software −
 Uniform interfacing for device drivers
 Device naming − mnemonic names mapped to major and minor device numbers
 Device protection
 Providing a device-independent block size
 Buffering, because data coming off a device cannot be stored in its final destination
 Storage allocation on block devices
 Allocating and releasing dedicated devices
 Error reporting
User-Space I/O Software
These are the libraries which provide a richer and simplified interface to access the functionality of the kernel, or ultimately interact with the device drivers. Most of the user-level I/O software consists of library procedures, with some exceptions such as the spooling system, which is a way of dealing with dedicated I/O devices in a multiprogramming system.
I/O libraries (e.g., stdio) are in user space and provide an interface to the OS-resident device-independent I/O software. For example, putchar(), getchar(), printf() and scanf() are examples of the user-level I/O library stdio available in C programming.
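For example, the following complete C program uses only the user-level stdio routines mentioned above to copy its input to its output; the library, not the program, decides when to issue the underlying system calls.

/* Copy standard input to standard output one character at a time using
 * the user-level stdio routines getchar() and putchar(). */
#include <stdio.h>

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)   /* the library buffers the underlying reads */
        putchar(c);
    return 0;
}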
Kernel I/O Subsystem
The Kernel I/O Subsystem is responsible for providing many services related to I/O. Following are some of the services provided.
 Scheduling − The kernel schedules a set of I/O requests to determine a good order in which to execute them. When an application issues a blocking I/O system call, the request is placed on the queue for that device. The kernel I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the average response time experienced by the applications.
 Buffering − The Kernel I/O Subsystem maintains a memory area known as a buffer that stores data while it is transferred between two devices or between a device and an application. Buffering is done to cope with a speed mismatch between the producer and consumer of a data stream or to adapt between devices that have different data-transfer sizes.
 Caching − The kernel maintains cache memory, which is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original.
 Spooling and Device Reservation − A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. The spooling system copies the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process. In other operating systems, it is handled by an in-kernel thread.
 Error Handling − An operating system that uses protected memory can guard against many kinds of hardware and application errors.
UNIT –III
Threads:

1) User Threads: Thread creation, scheduling and management happen in user space, by a thread library. User threads are faster to create and manage. However, if a user thread performs a blocking system call, all the other threads in that process are also blocked; the whole process is blocked.

Disadvantages

 In a typical operating system, most system calls are blocking.
 A multithreaded application cannot take advantage of multiprocessing.

2) Kernel Threads: The kernel creates, schedules and manages these threads. These threads are slower to create and manage. If one thread in a process is blocked, the overall process need not be blocked.

Advantages
 The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
 If one thread in a process is blocked, the kernel can schedule another thread of the same process.
 Kernel routines themselves can be multithreaded.

Disadvantages

 Kernel threads are generally slower to create and manage than user threads.
 Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
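As a concrete illustration (not tied to any one model in these notes), the POSIX threads library can be used to create threads; under a one-to-one threading model each of them maps to a kernel thread, so the kernel may run them on different CPUs.

/* Small POSIX-threads example: two worker threads are created and joined. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both workers to finish */
    pthread_join(t2, NULL);
    return 0;
}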

Multithreading Models

Some operating systems provide a combined user-level thread and kernel-level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types:
 Many-to-many relationship.
 Many-to-one relationship.
 One-to-one relationship.

Many-to-Many Model

In this model, many user-level threads multiplex onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine. The following diagram shows the many-to-many model. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Many-to-One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process will be blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If the user-level thread libraries are implemented in the operating system in such a way that the system does not support them, then the kernel threads use the many-to-one relationship model.

One-to-One Model

There is a one-to-one relationship of a user-level thread to a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread.

DEADLOCKS
System model:
A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each consisting of some number of identical instances. Memory space, CPU cycles, files, and I/O devices are examples of resource types. If a system has 2 CPUs, then the resource type CPU has 2 instances.
A process must request a resource before using it and must release the resource after using it. A process may request as many resources as it requires to carry out its task. The number of resources requested may not exceed the total number of resources available in the system. A process cannot request 3 printers if the system has only two.
A process may utilize a resource in the following sequence:
(I) REQUEST: The process requests the resource. If the request cannot be granted immediately (if the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
(II) USE: The process can operate on the resource. If the resource is a printer, the process can print on the printer.
(III) RELEASE: The process releases the resource.
For each use of a kernel-managed resource by a process, the operating system checks that the process has requested and has been allocated the resource. A system table records whether each resource is free or allocated. For each resource that is allocated, the table also records the process to which it is allocated. If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.
To illustrate a deadlocked state, consider a system with 3 CD-RW drives. Each of 3 processes holds one of these CD-RW drives. If each process now requests another drive, the 3 processes will be in a deadlocked state. Each is waiting for the event "CD-RW is released", which can be caused only by one of the other waiting processes. This example illustrates a deadlock involving the same resource type.
Deadlocks may also involve different resource types. Consider a system with one printer and one DVD drive. Process Pi is holding the DVD and process Pj is holding the printer. If Pi requests the printer and Pj requests the DVD drive, a deadlock occurs.
DEADLOCK CHARACTERIZATION:
In a deadlock, processes never finish executing, and system resources are tied up, preventing other jobs from starting.

NECESSARY CONDITIONS:
A deadlock situation can arise if the following 4 conditions hold simultaneously in a system:
1. MUTUAL EXCLUSION: Only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. HOLD AND WAIT: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. NO PREEMPTION: Resources cannot be preempted. A resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. CIRCULAR WAIT: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
RESOURCE ALLOCATION GRAPH
Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into 2 different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi -> Rj. It signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj -> Pi. It signifies that an instance of resource type Rj has been allocated to process Pi.
A directed edge Pi -> Rj is called a request edge. A directed edge Rj -> Pi is called an assignment edge.
We represent each process Pi as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle. A request edge points only to the rectangle Rj. An assignment edge must also designate one of the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed to an assignment edge. When the process no longer needs access to the resource, it releases the resource; as a result, the assignment edge is deleted.
The sets P, R, E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}

One instance of resource type R1
Two instances of resource type R2
One instance of resource type R3
Three instances of resource type R4
PROCESS STATES:
Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
Process P3 is holding an instance of R3.
If the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.
Suppose that process P3 requests an instance of resource type R2. Since no resource instance is currently available, a request edge P3 -> R2 is added to the graph.
2 cycles:
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P2
Processes P1, P2, P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3 is waiting for either process P1 or P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.

Consider now a different resource-allocation graph that also has a cycle: P1 -> R1 -> P3 -> R2 -> P1.

However, there is no deadlock. Process P4 may release its instance of resource type R2. That resource can then be allocated to P3, breaking the cycle.
DEADLOCK PREVENTION
For a deadlock to occur, each of the 4 necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources.
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.
o Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when the process has none.
o Low resource utilization; starvation possible.
No Preemption –
o If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.
o Preempted resources are added to the list of resources for which the process is waiting.
o The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.

Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
Deadlock Avoidance
Requires that the system has some additional a priori information available.
 The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
 The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
 The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
Safe State
 When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
The system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
That is:
o If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
o When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate.
o When Pi terminates, Pi+1 can obtain its needed resources, and so on.
If a system is in a safe state – no deadlocks.
If a system is in an unsafe state – possibility of deadlock.
Avoidance – ensure that a system will never enter an unsafe state.
Avoidance algorithms
Single instance of a resource type
o Use a resource-allocation graph
Multiple instances of a resource type
o Use the banker's algorithm
Resource-Allocation Graph Scheme
A claim edge Pi -> Rj indicates that process Pi may request resource Rj; it is represented by a dashed line.
A claim edge converts to a request edge when the process requests the resource.
A request edge is converted to an assignment edge when the resource is allocated to the process.
When a resource is released by a process, the assignment edge reconverts to a claim edge.
Resources must be claimed a priori in the system.

Unsafe State in a Resource-Allocation Graph

Banker's Algorithm
Multiple instances.
Each process must a priori claim its maximum use.
When a process requests a resource it may have to wait.
When a process gets all its resources it must return them in a finite amount of time.
Let n = number of processes, and m = number of resource types.
Available: Vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] – Allocation[i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, ..., n-1
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state.
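A C sketch of this safety algorithm is shown below. It uses the 5-process, 3-resource snapshot from the example that follows (Need is computed as Max − Allocation) and prints a safe sequence if one exists.

/* Safety algorithm sketch for the 5-process / 3-resource snapshot used in
 * the Banker's-algorithm example below. Prints a safe sequence if found. */
#include <stdbool.h>
#include <stdio.h>

#define N 5   /* processes                    */
#define M 3   /* resource types A, B, C       */

int Allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
int Need[N][M]       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
int Available[M]     = {3,3,2};

int main(void)
{
    int work[M], order[N], count = 0;
    bool finish[N] = {false};

    for (int j = 0; j < M; j++) work[j] = Available[j];   /* Work = Available */

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;                               /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (Need[i][j] > work[j]) { ok = false; break; }
            if (ok) {
                for (int j = 0; j < M; j++) work[j] += Allocation[i][j];
                finish[i] = true;
                order[count++] = i;
                progress = true;
            }
        }
    }

    if (count == N) {
        printf("safe sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", order[i]);
        printf("\n");                /* this scan order yields P1 P3 P4 P0 P2 */
    } else {
        printf("system is not in a safe state\n");
    }
    return 0;
}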
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
o If safe – the resources are allocated to Pi.
o If unsafe – Pi must wait, and the old resource-allocation state is restored.

Example of Banker's Algorithm (REFER CLASS NOTES)
Consider 5 processes P0 through P4 and 3 resource types: A (10 instances), B (5 instances), and C (7 instances).

Snapshot at time T0:
          Allocation   Max      Available
          A B C        A B C    A B C
P0        0 1 0        7 5 3    3 3 2
P1        2 0 0        3 2 2
P2        3 0 2        9 0 2
P3        2 1 1        2 2 2
P4        0 0 2        4 3 3

The content of the matrix Need is defined to be Max – Allocation:
          Need
          A B C
P0        7 4 3
P1        1 2 2
P2        6 0 0
P3        0 1 1
P4        4 3 1

The system is in a safe state, since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.

P1 requests (1, 0, 2):
Check that Request ≤ Available, that is, (1, 0, 2) ≤ (3, 3, 2) – true.

          Allocation   Need     Available
          A B C        A B C    A B C
P0        0 1 0        7 4 3    2 3 0
P1        3 0 2        0 2 0
P2        3 0 2        6 0 0
P3        2 1 1        0 1 1
P4        0 0 2        4 3 1

Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement.
Deadlock Detection
Allow the system to enter a deadlock state.
Detection algorithm.
Recovery scheme.
Single Instance of Each Resource Type
Maintain a wait-for graph.
Nodes are processes.
Pi -> Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
An algorithm to detect a cycle in a graph requires an order of n² operations, where n is the number of vertices in the graph.
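A sketch of such a check is shown below: a depth-first search over an adjacency matrix that represents a small hypothetical wait-for graph.

/* Cycle detection in a wait-for graph by depth-first search.
 * waits_for[i][j] = 1 means Pi is waiting for Pj. The graph below is a
 * hypothetical 4-process example containing the cycle P0 -> P1 -> P2 -> P0. */
#include <stdbool.h>
#include <stdio.h>

#define NPROC 4

int waits_for[NPROC][NPROC] = {
    {0,1,0,0},    /* P0 waits for P1 */
    {0,0,1,0},    /* P1 waits for P2 */
    {1,0,0,0},    /* P2 waits for P0 */
    {0,0,0,0},
};

static bool dfs(int v, bool on_stack[], bool visited[])
{
    visited[v] = on_stack[v] = true;
    for (int w = 0; w < NPROC; w++) {
        if (!waits_for[v][w]) continue;
        if (on_stack[w]) return true;            /* back edge: cycle found */
        if (!visited[w] && dfs(w, on_stack, visited)) return true;
    }
    on_stack[v] = false;
    return false;
}

int main(void)
{
    bool visited[NPROC] = {false}, on_stack[NPROC] = {false};
    for (int v = 0; v < NPROC; v++)
        if (!visited[v] && dfs(v, on_stack, visited)) {
            printf("deadlock: cycle detected in wait-for graph\n");
            return 0;
        }
    printf("no cycle, no deadlock\n");
    return 0;
}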
Resource-Allocation Graph and Corresponding Wait-for Graph (figure)

Several Instances of a Resource Type
Available: A vector of length m indicates the number of available resources of each type.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process.
Request: An n x m matrix indicates the current request of each process. If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, ..., n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked.
Recovery from Deadlock:
Process Termination
Abort all deadlocked processes.
Abort one process at a time until the deadlock cycle is eliminated.
In which order should we choose to abort?
o Priority of the process
o How long the process has computed, and how much longer until completion
o Resources the process has used
o Resources the process needs to complete
o How many processes will need to be terminated
o Is the process interactive or batch?
Resource Preemption
Selecting a victim – minimize cost.
Rollback – return to some safe state, restart the process from that state.
Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor.

Secondary storage structure:
Overview of mass storage structure

Magnetic disks: Magnetic disks provide the bulk of secondary storage for modern computer systems. Each disk platter has a flat circular shape, like a CD. Common platter diameters range from 1.8 to 5.25 inches. The two surfaces of a platter are covered with a magnetic material. We store information by recording it magnetically on the platters.

Moving-head disk mechanism

A read/write head flies just above each surface of every platter. The heads are attached to a disk arm that moves all the heads as a unit. The surface of a platter is logically divided into circular tracks, which are subdivided into sectors. The set of tracks that are at one arm position makes up a cylinder. There may be thousands of concentric cylinders in a disk drive, and each track may contain hundreds of sectors.
When the disk is in use, a drive motor spins it at high speed. Most drives rotate 60 to 200 times per second. Disk speed has 2 parts. The transfer rate is the rate at which data flow between the drive and the computer. To read/write, the head must be positioned at the desired track and at the beginning of the desired sector on the track. The time it takes to position the head at the desired track is called the seek time. Once the track is selected, the disk controller waits until the desired sector reaches the read/write head. The time it takes to reach the desired sector is called the latency time or rotational delay. When the desired sector reaches the read/write head, the real data transfer starts.

A disk can be removable. Removable magnetic disks consist of one platter, held in a plastic case to prevent damage while not in the disk drive. Floppy disks are inexpensive removable magnetic disks that have a soft plastic case containing a flexible platter. The storage capacity of a floppy disk is 1.44 MB.

A disk drive is attached to a computer by a set of wires called an I/O bus. The data transfers on a bus are carried out by special processors called controllers. The host controller is the controller at the computer end of the bus. A disk controller is built into each disk drive. To perform an I/O operation, the host controller operates the disk drive hardware to carry out the command. Disk controllers have a built-in cache; data transfer at the disk drive happens between the cache and the disk surface, and data transfer to the host occurs between the cache and the host controller.

Magnetic Tapes: Magnetic tape was used as an early secondary-storage medium. It is permanent and can hold large amounts of data. Its access time is slow compared to main memory and magnetic disks. Tapes are mainly used for backup and for storage of infrequently used information. Typically they store 20 GB to 200 GB.

Disk Structure: Most disk drives are addressed as large one-dimensional arrays of logical blocks. The one-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially. Sector 0 is the first sector of the first track on the outermost cylinder. The mapping proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost. As we move from outer zones to inner zones, the number of sectors per track decreases. Tracks in the outermost zone hold 40% more sectors than in the innermost zone. The number of sectors per track has been increasing as disk technology improves, and the outer zone of a disk usually has several hundred sectors per track. Similarly, the number of cylinders per disk has been increasing; large disks have tens of thousands of cylinders.
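Ignoring zoned recording for a moment, the mapping from a logical block address to a (cylinder, head, sector) triple can be illustrated with a fixed geometry; the geometry values below are hypothetical.

/* Illustration of mapping a logical block address onto (cylinder, head,
 * sector), assuming -- unlike the zoned real disks described above -- a
 * constant number of sectors per track. Geometry values are hypothetical. */
#include <stdio.h>

#define HEADS             16      /* tracks per cylinder     */
#define SECTORS_PER_TRACK 63

int main(void)
{
    long lba = 100000;            /* logical block address   */
    long cylinder = lba / (HEADS * SECTORS_PER_TRACK);
    long head     = (lba / SECTORS_PER_TRACK) % HEADS;
    long sector   = lba % SECTORS_PER_TRACK;

    printf("LBA %ld -> cylinder %ld, head %ld, sector %ld\n",
           lba, cylinder, head, sector);
    return 0;
}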

Disk attachment
Computers access disk storage in 2 ways:
1. Via I/O ports (host-attached storage)
2. Via a remote host in a distributed file system (network-attached storage)

1. Host-attached storage: Host-attached storage is accessed via local I/O ports. The desktop PC uses an I/O bus architecture called IDE. This architecture supports a maximum of 2 drives per I/O bus. High-end workstations and servers use SCSI and FC.

SCSI is a bus architecture with a large number of conductors in a ribbon cable (50 or 68). The SCSI protocol supports a maximum of 16 devices on a bus. The host consists of a controller card (the SCSI initiator) and up to 15 storage devices called SCSI targets.

FC (fibre channel) is a high-speed serial architecture. It operates mostly over optical fiber or over a 4-conductor copper cable. It has 2 variants. One is a large switched fabric having a 24-bit address space. The other is an arbitrated loop (FC-AL) that can address 126 devices.

A wide variety of storage devices are suitable for use as host-attached storage (hard disks, CD, DVD, and tape devices).

2. Network-attached storage: Network-attached storage (NAS) is accessed remotely over a data network. Clients access network-attached storage via remote procedure calls. The RPCs are carried via TCP or UDP over an IP network – usually the same LAN that carries all data traffic to the clients.

(figure: NAS units and clients connected over a LAN/WAN)

NAS provides a convenient way for all the computers on a LAN to share a pool of storage with the same ease of naming and access enjoyed with local host-attached storage, but it tends to be less efficient and have lower performance than direct-attached storage.

3. Storage area network: The drawback of network-attached storage (NAS) is that storage I/O operations consume bandwidth on the data network. The communication between servers and clients competes for bandwidth with the communication among servers and storage devices.

A storage area network (SAN) is a private network using storage protocols to connect servers and storage units. The power of a SAN is its flexibility. Multiple hosts and multiple storage arrays can attach to the same SAN, and storage can be dynamically allocated to hosts. SANs make it possible for clusters of servers to share the same storage.
Disk Scheduling Algorithms

Disk scheduling algorithms are used to allocate the services to the I/O requests on the disk. Since seeking disk requests is time consuming, disk scheduling algorithms try to minimize this latency. If the desired disk drive or controller is available, the request is served immediately. If it is busy, the new request for service will be placed in the queue of pending requests. When one request is completed, the operating system has to choose which pending request to service next. The OS relies on the chosen algorithm when deciding what particular disk request is to be processed next. The objective of using these algorithms is to keep head movement as small as possible: the less the head has to move, the faster the seek time will be. To see how this works, the different disk scheduling algorithms are discussed below and examples are provided for better understanding of these algorithms.

1. First Come First Serve (FCFS)

It is the simplest form of disk scheduling algorithm. The I/O requests are served or processed according to their arrival. The request that arrives first will be accessed and served first. Since it follows the order of arrival, it causes wild swings from the innermost to the outermost tracks of the disk and vice versa. The farther the location of the request being serviced by the read/write head from its current location, the higher the seek time will be.

Example: Given the following track requests in the disk queue, compute the Total Head Movement (THM) of the read/write head:

95, 180, 34, 119, 11, 123, 62, 64

Consider that the read/write head is positioned at location 50. Prior to this, track location 199 was serviced. Show the total head movement for a 200-track disk (0-199).
Solution:
Total Head Movement Computation (THM) =

(180 - 50) + (180 - 34) + (119 - 34) + (119 - 11) + (123 - 11) + (123 - 62) + (64 - 62)
= 130 + 146 + 85 + 108 + 112 + 61 + 2
THM = 644 tracks

Assuming a seek rate of 5 milliseconds, we compute the seek time using the formula:
Seek Time = THM * Seek rate
= 644 * 5 ms
Seek Time = 3,220 ms.
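The same computation can be expressed as a short C program that walks the queue in arrival order:

/* FCFS seek computation for the worked example above: head at 50, queue
 * 95, 180, 34, 119, 11, 123, 62, 64, seek rate 5 ms per track. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = {95, 180, 34, 119, 11, 123, 62, 64};
    int n = sizeof(queue) / sizeof(queue[0]);
    int head = 50, thm = 0;

    for (int i = 0; i < n; i++) {          /* serve strictly in arrival order */
        thm += abs(queue[i] - head);
        head = queue[i];
    }
    printf("THM = %d tracks, seek time = %d ms\n", thm, thm * 5);
    /* prints: THM = 644 tracks, seek time = 3220 ms */
    return 0;
}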

2. Shortest Seek Time First (SSTF):

This algorithm is based on the idea that the R/W head should proceed to the track that is closest to its current position. The process continues until all the track requests are taken care of. Using the same set of examples as in FCFS, the solution is as follows:
Solution:

THM = (64 - 50) + (64 - 11) + (180 - 11)
= 14 + 53 + 169
THM = 236 tracks

Seek Time = THM * Seek rate
= 236 * 5 ms
Seek Time = 1,180 ms
In this algorithm, a request is serviced according to the next shortest distance. Starting at 50, the next track visited would be 62 instead of 34, since the head is only 12 tracks away from 62 and 16 tracks away from 34. The process continues up to the last track request. There is a total of 236 tracks and a seek time of 1,180 ms, which is a better service compared with FCFS. However, with SSTF there is a chance that starvation would take place: if there were lots of requests close to each other, the other requests would never be handled, since their distance will always be greater.

3. SCAN Scheduling Algorithm

This algorithm is performed by moving the R/W head back and forth between the innermost and outermost tracks. As it scans the tracks from end to end, it processes all the requests found in the direction it is headed. This ensures that all track requests, whether in the outermost, middle or innermost location, will be traversed by the access arm, thereby finding all the requests. This is also known as the Elevator algorithm. Using the same set of examples as in FCFS, the solution is as follows:

Solution:

This algorithm works like an elevator does. In this example, it scans down towards the nearest end, and when it reaches the bottom it scans up, servicing the requests that it did not get going down. If a request comes in after its track has been scanned, it will not be serviced until the head comes back down or moves back up. This process moved a total of 230 tracks with a seek time of 1,150 ms. This is better than the previous algorithm.
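A C sketch of the SCAN sweep for this example (the requests are split by hand into those below and above the starting head position) confirms the 230-track figure:

/* SCAN (elevator) simulation for the same queue. The head starts at 50 and
 * is moving toward track 0 (track 199 was serviced just before), sweeps to
 * the innermost track, then reverses and serves the remaining requests. */
#include <stdio.h>
#include <stdlib.h>

static int descending(const void *a, const void *b) { return *(const int *)b - *(const int *)a; }
static int ascending(const void *a, const void *b)  { return *(const int *)a - *(const int *)b; }

int main(void)
{
    int down[] = {34, 11};                    /* requests below the head */
    int up[]   = {62, 64, 95, 119, 123, 180}; /* requests above the head */
    int head = 50, thm = 0;

    qsort(down, 2, sizeof(int), descending);
    qsort(up, 6, sizeof(int), ascending);

    for (int i = 0; i < 2; i++) { thm += abs(down[i] - head); head = down[i]; }
    thm += head;  head = 0;                   /* continue to the innermost track */
    for (int i = 0; i < 6; i++) { thm += abs(up[i] - head); head = up[i]; }

    printf("THM = %d tracks, seek time = %d ms\n", thm, thm * 5);
    /* prints: THM = 230 tracks, seek time = 1150 ms */
    return 0;
}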

4. Circular SCAN (C-SCAN) Algorithm

This algorithm is a modified version of the SCAN algorithm. C-SCAN sweeps the disk from end to end, but as soon as it reaches one of the end tracks it moves to the other end track without servicing any requesting location. As soon as it reaches the other end track it starts servicing again and grants the requests headed in its direction. This algorithm improves the unfair situation of the end tracks against the middle tracks. Using the same set of examples as in FCFS, the solution is as follows:

Notice that in this example an alpha symbol (α) is used to represent the return sweep (the dashed line). This return sweep is sometimes given a numerical value which is included in the computation of the THM. As an analogy, this can be compared with the carriage-return lever of a typewriter. Once it is pulled to the rightmost direction, it resets the typing point to the leftmost margin of the paper. A typist is not supposed to type during the movement of the carriage-return lever because the line spacing is being adjusted. The frequent use of this lever consumes time, just as time is consumed when the R/W head is reset to its starting position.

Assume that in this example α has a value of 20. The computation would be as follows:
THM = (50 - 0) + (199 - 62) + α
= 50 + 137 + 20
THM = 207 tracks

Seek Time = THM * Seek rate
= 187 * 5 ms
Seek Time = 935 ms.

The computation of the seek time excluded the alpha value because it is not an actual seek or search for a disk request but a reset of the access arm to the starting position.
Disk management

Disk formatting: A new magnetic disk is a blank slate. It is just a platter of a magnetic recording material. Before a disk can store data, it must be divided into sectors that the disk controller can read and write. This process is called low-level formatting (or physical formatting). Low-level formatting fills the disk with a special data structure for each sector. The data structure for a sector typically consists of a header, a data area, and a trailer. The header and trailer contain information used by the disk controller, such as a sector number and an error-correcting code (ECC). When the controller writes a sector of data during normal I/O, the ECC is updated with a value calculated from all the bytes in the data area. When the sector is read, the ECC is recalculated and compared with the stored value. If the stored and calculated numbers are different, this mismatch indicates that the data area of the sector has become corrupted and that the disk sector may be bad. The ECC contains enough information, if only a few bits of data have been corrupted, to enable the controller to identify which bits have changed and to calculate what their correct values should be. The controller automatically does the ECC processing whenever a sector is read or written. For many hard disks, when the disk controller is instructed to low-level format the disk, it can also be told how many bytes of data space to leave between the header and trailer of all sectors.
Before it can use a disk to hold files, the OS still needs to record its own data structures on the disk. It does so in 2 steps. The first step is to partition the disk into one or more groups of cylinders. The OS can treat each partition as a separate disk. The second step is logical formatting (or creation of the file system). In this step, the OS stores the initial file-system data structures onto the disk. These data structures include maps of free and allocated space and an initial empty directory.

Boot block:-

When a computer is powered up, it must have an initial program to run. This initial bootstrap program initializes all aspects of the system, from CPU registers to device controllers and the contents of main memory, and then starts the OS. To do its job, the bootstrap program finds the OS kernel on disk, loads that kernel into memory, and jumps to an initial address to begin the OS execution. For most computers, the bootstrap is stored in ROM. This location is convenient because ROM needs no initialization and is at a fixed location that the CPU can start executing when powered up. ROM is read only, so it cannot be infected by a computer virus. The problem is that changing this bootstrap code requires changing the ROM hardware chips. For this reason, most systems store a tiny bootstrap loader program in the boot ROM whose job is to bring in a full bootstrap program from disk. The full bootstrap program is stored in the boot blocks at a fixed location on the disk. A disk that has a boot partition is called a boot disk or system disk. The code in the boot ROM instructs the disk controller to read the boot blocks into memory and then starts executing that code.

Bad blocks:-

A block in the disk can be damaged due to a manufacturing defect, a virus, or physical damage. This defective block is called a bad block. The MS-DOS format command scans the disk to find bad blocks. If format finds a bad block, it tells the allocation methods not to use that block. The chkdsk program searches for the bad blocks and locks them away. Data that resided on the bad blocks usually are lost. Suppose the OS tries to read logical block 87. The controller calculates the ECC and finds that the sector is bad. It reports this finding to the OS. The next time the system is rebooted, a special command is run to tell the SCSI controller to replace the bad sector with a spare.
After that, whenever the system requests logical block 87, the request is translated into the replacement sector's address by the controller.

Sector slipping:-

Suppose logical block 17 becomes defective and the first available spare follows sector 202. Then sector slipping remaps all the sectors from 17 to 202: sector 202 is copied into the spare, then sector 201 into 202, 200 into 201, and so on, until sector 18 is copied into sector 19. Slipping the sectors in this way frees up the space of sector 18.

Swap space management:-

A system that implements swapping may use swap space to hold an entire process image, including the code and data segments. Paging systems may simply store pages that have been pushed out of main memory. Note that it may be safer to overestimate than to underestimate the amount of swap space required, because if a system runs out of swap space it may be forced to abort processes. Overestimation wastes disk space that could otherwise be used for files, but it does no other harm. Some systems recommend the amount to be set aside for swap space. Linux has suggested setting swap space to double the amount of physical memory. Some operating systems allow the use of multiple swap spaces. These swap spaces are put on separate disks so that the load placed on the I/O system by paging and swapping can be spread over the system's I/O devices.

Swap space location:-

Swap space can reside in one of two places. It can be carved out of the normal file system, or it can be in a separate disk partition. If the swap space is simply a large file within the file system, normal file-system methods are used to create it, name it, and allocate its space. This is easy to implement but inefficient. External fragmentation can greatly increase swapping times by forcing multiple seeks during reading/writing of a process image. We can improve performance by caching the block-location information in main memory and by using special tools to allocate physically contiguous blocks for the swap file. Alternatively, swap space can be created in a separate raw partition. A separate swap-space storage manager is used to allocate and deallocate the blocks from the raw partition. This manager uses algorithms optimized for speed rather than storage efficiency. Internal fragmentation may increase, but it is acceptable because the life of data in swap space is shorter than that of files. Since swap space is reinitialized at boot time, any fragmentation is short-lived. The raw-partition approach creates a fixed amount of swap space during disk partitioning; adding more swap space requires either repartitioning the disk or adding another swap space elsewhere.
