System Software & Operating System

The document provides an overview of system software and operating systems, detailing various components such as loaders, linkers, compilers, and virtual storage management. It categorizes software into freeware, shareware, open-source, and closed-source, and discusses their characteristics and examples. Additionally, it introduces the Simplified Instructional Computer (SIC) architecture, its memory, registers, instruction formats, and the functions of loaders and linkers.

43A – SYSTEM SOFTWARE AND OPERATING SYSTEM

Unit – 1: INTRODUCTION TO SYSTEM SOFTWARE

Introduction – System Software and machine architecture. Loader and Linkers: Basic
Loader Functions – Machine dependent loader features – Machine independent loader features –
Loader design options.

Unit – 2: MACHINE AND COMPILER

Machine dependent compiler features – Intermediate form of the program –Machine


dependent code optimization – Machine independent compiler features – Compiler design
options – Division Into passes – Interpreters – p-code compilers – Compiler-compilers.

Unit – 3: OPERATING SYSTEM

What is an Operating System? – Process Concepts: Definition of Process – Process States


– Process States Transition – Interrupt Processing – Interrupt Classes – Storage Management:
Real Storage: Real Storage Management Strategies – Contiguous versus Non-contiguous storage
allocation – Single User Contiguous Storage allocation – Fixed partition multi programming –
Variable partition multiprogramming.

Unit – 4: VIRTUAL STORAGE

Virtual Storage: Virtual Storage Management Strategies – Page Replacement Strategies –


Working Sets – Demand Paging – Page Size. Processor Management: Job and Processor
Scheduling: Preemptive vs Non-preemptive scheduling – Priorities – Deadline scheduling.

Unit – 5: DEVICE AND INFORMATION MANAGEMENT

Device and Information Management – Disk Performance Optimization: Operation of
moving-head disk storage – Need for disk scheduling – Seek Optimization.
File and Database Systems: File System – Functions – Organization – Allocating and freeing
space – File descriptor – Access control matrix.
Unit – 1: INTRODUCTION TO SYSTEM SOFTWARE

Software

Software is a set of instructions, data or programs used to operate computers and execute
specific tasks. It is the opposite of hardware, which describes the physical aspects of a computer.
Software is a generic term used to refer to applications, scripts and programs that run on a
device.

Classifications of Software

1. Freeware

Freeware software is available without any cost. Any user can download it from the
internet and use it without paying any fee. However, freeware does not provide any liberty for
modifying the software or charging a fee for its distribution.

Examples are Adobe Reader, Audacity, ImgBurn, Recuva, Skype, Team Viewer and Yahoo
Messenger.

2. Shareware

It is software that is freely distributed to users on a trial basis. It usually comes with a
time limit and when the time limit expires, the user is asked to pay for the continued services.
There are various types of shareware like Adware, Donationware, Nagware, Freemium, and
Demoware (Crippleware and Trialware).
Some examples of shareware are Adobe Acrobat, Getright, PHP Debugger and Winzip.

3. Open Source Software (OSS)

These kinds of software are available to users with the source code which means that a
user can freely distribute and modify the software and add additional features to the software.
Open-Source software can either be free or chargeable.

Some examples of open-source software are Apache Web Server, GNU Compiler
Collection, Moodle, Mozilla Firefox and Thunderbird.

4. Closed Source Software (CSS) or Proprietary Software

These types of applications are usually paid and have intellectual property rights or
patents over the source code. Their use is very restricted, and the source code is usually
preserved and kept secret.
Some examples of closed-source software are Skype, Google Earth, Java, Adobe Flash
Player, VirtualBox, Adobe Reader, Microsoft Office, Microsoft Windows, WinRAR and
macOS.

Types of Software

1. Application Software

Application software focuses on an application or problem to be solved and provides a
solution to that problem. Examples are Word Processor, Spreadsheet, Database, Facebook, VLC
Media Player, Educational Software, Oracle, Dictionaries and Chrome.

Categories include word processing, database, spreadsheet, web browsers, multimedia,
presentation, enterprise, graphics, communication, education and application suites.

2. System Software

System software consists of a variety of programs that support the operation and use of
a computer. Examples of system software are Operating System, Compiler, Assembler, Macro
Processor, Loader or Linker, Debugger, Text Editor and Software Engineering Tools.

Categories include operating systems, device drivers, firmware, programming language
translators and utilities.
Software is a set of instructions or programs written to carry out certain tasks on digital
computers. It is classified into system software and application software. System software
consists of a variety of programs that support the operation of a computer. Application software
focuses on an application or problem to be solved.

Examples of system software are the operating system, compiler, assembler, macro
processor, loader or linker, debugger, text editor, database management systems (some of them)
and software engineering tools. These programs make it possible for the user to focus on an
application or other problem to be solved, without needing to know the details of how the
machine works internally.

SYSTEM SOFTWARE AND MACHINE ARCHITECTURE

One characteristic in which most system software differs from application software is
machine dependency.

System software supports operation and use of computer. Application software provides
solution to a problem. Assembler translates mnemonic instructions into machine code. The
instruction formats, addressing modes etc., are of direct concern in assembler design. Similarly,

Compilers must generate machine language code, taking into account such hardware
characteristics as the number and type of registers and the machine instructions available.
Operating systems are directly concerned with the management of nearly all of the resources of a
computing system.

There are also aspects of system software that do not directly depend upon the type of
computing system: the general design and logic of an assembler, the general design and logic
of a compiler, and code optimization techniques that are independent of the target machine.
Likewise, the process of linking together independently assembled subprograms does not
usually depend on the computer being used.

THE SIMPLIFIED INSTRUCTIONAL COMPUTER (SIC)

This machine has been designed to illustrate the most commonly encountered hardware
features and concepts, while avoiding most of the peculiarities that are often found in real
machines.

SIC comes in two versions:

1. The standard model


2. An XE version (Extra Equipment or Extra Expensive)

The two versions have been designed to be upward compatible – that is, an object
program for the standard SIC machine will also execute properly on a SIC/XE system.
SIC MACHINE ARCHITECTURE
MEMORY
• Memory consists of 8-bit bytes; any 3 consecutive bytes form a word (24 bits).
• All addresses on SIC are byte addresses; words are addressed by the location of their
lowest-numbered byte.
• There are a total of 32,768 (2^15) bytes in the computer memory.
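The byte/word organization above can be sketched in a few lines of Python (an illustration only, not part of the SIC definition; the helper names are invented here):

```python
# Standard SIC memory: 32,768 byte-addressable 8-bit bytes.
MEM_SIZE = 32768  # 2^15

memory = bytearray(MEM_SIZE)

def read_word(addr):
    """A word is 3 consecutive bytes, addressed by its lowest-numbered byte."""
    return (memory[addr] << 16) | (memory[addr + 1] << 8) | memory[addr + 2]

def write_word(addr, value):
    """Store a 24-bit value into the 3 bytes starting at addr."""
    memory[addr]     = (value >> 16) & 0xFF
    memory[addr + 1] = (value >> 8) & 0xFF
    memory[addr + 2] = value & 0xFF
```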

REGISTERS
• There are 5 registers, all of which have special uses.
• Each register is 24 bits in length.

Mnemonic   Number   Special use
A          0        Accumulator; used for arithmetic operations
X          1        Index register; used for addressing
L          2        Linkage register; the Jump to Subroutine (JSUB)
                    instruction stores the return address here
PC         8        Program counter; contains the address of the next
                    instruction to be fetched for execution
SW         9        Status word; contains a variety of information,
                    including a Condition Code (CC)

DATA FORMATS
• Integers are stored as 24-bit binary numbers; 2’s complement representation is used for
negative values.
• Characters are stored using their 8-bit ASCII codes.
• There is no floating-point hardware on the standard version of SIC.
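The 2’s complement convention can be illustrated with a small Python sketch (the function names are invented for illustration):

```python
def from_sic_int(n):
    """Encode a signed integer as a 24-bit 2's-complement word."""
    return n & 0xFFFFFF

def to_sic_int(word):
    """Interpret a 24-bit word as a 2's-complement signed integer:
    if the sign bit (bit 23) is set, the value is word - 2^24."""
    return word - (1 << 24) if word & 0x800000 else word
```

For example, -1 is stored as FFFFFF (hexadecimal).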

INSTRUCTION FORMATS
• All machine instructions on the standard version of SIC have the following 24-bit format:

  opcode (8 bits) | x (1 bit) | address (15 bits)

• The flag bit x is used to indicate indexed-addressing mode.

ADDRESSING MODES

• There are 2 addressing modes available by setting the x value.

Mode      Indication   Target address calculation
Direct    x = 0        TA = address
Indexed   x = 1        TA = address + (X)

1. Direct addressing mode

LDA TEN

Assembled instruction: 0000 0000 | 0 | 001 0000 0000 0000 (hex 001000)
opcode = 00, x = 0, address (TEN) = 1000

Effective address (EA) = 1000

The content of address 1000 is loaded into the accumulator.

2. Indexed addressing mode

STCH BUFFER,X

Assembled instruction: 0101 0100 | 1 | 001 0000 0000 0000 (hex 549000)
opcode = 54, x = 1, address (BUFFER) = 1000

Effective address (EA) = 1000 + (X)
                       = 1000 + content of the index register X
• The character in the rightmost byte of the accumulator is stored at the effective
address.
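The two target-address calculations can be combined into one small decoder sketch (Python used only for illustration; the function name is invented):

```python
def decode(instr, x_reg):
    """Decode a 24-bit standard SIC instruction into (opcode, target address).
    Format: opcode (8 bits) | x flag (1 bit) | address (15 bits)."""
    opcode  = (instr >> 16) & 0xFF
    x_flag  = (instr >> 15) & 0x1
    address = instr & 0x7FFF
    ta = address + x_reg if x_flag else address   # indexed vs. direct
    return opcode, ta
```

With the examples above, decode(0x001000, 0) yields opcode 00 with TA 1000 (hex), and decode(0x549000, x) yields opcode 54 with TA 1000 + (X).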

INSTRUCTION SET
• This includes instructions that load and store registers.
LDA – load accumulator
LDX – load index register
STA – store accumulator
STX – store index register
• It also includes integer arithmetic instructions ADD, SUB, MUL, DIV.
• All arithmetic operations involve register A and a word in memory, with the result being
left in the register.
• It also includes an instruction COMP that compares the value in register A with a word in
memory.
• It also includes jump instructions like,
JLT - less than
JEQ – equal
JGT – greater than.
• Two instructions are provided for subroutine linkage. They are:
1. JSUB – jump to subroutine
2. RSUB – return from subroutine.

INPUT AND OUTPUT


• Input and output are performed by transferring 1 byte at a time to or from the rightmost 8
bits of register A.
• Each device is assigned a unique 8-bit code.
• There are 3 I/O instructions, each of which specifies the device code as an operand.

1. Test Device (TD):
• This instruction tests whether the addressed device is ready to send or receive
a byte of data.
• The condition code is set to indicate the result of this test:
a setting of < means the device is ready to send or receive, and = means the
device is not ready.
2. Read Data (RD)
3. Write Data (WD)
• A program needing to transfer data must wait until the device is ready, then execute
a Read Data or Write Data.
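The test-then-transfer pattern can be sketched in Python (the Device class below is a hypothetical stand-in for real hardware, invented for illustration):

```python
class Device:
    """Hypothetical I/O device that becomes ready on every second test."""
    def __init__(self, data):
        self.buf = list(data)
        self.tests = 0
    def ready(self):          # models the TD instruction
        self.tests += 1
        return self.tests % 2 == 0
    def read(self):           # models the RD instruction
        return self.buf.pop(0)

def read_byte(dev):
    """Busy-wait until the device is ready, then read one byte."""
    while not dev.ready():    # TD loop: test device until ready
        pass
    return dev.read()         # RD: transfer one byte
```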

SIC/XE MACHINE ARCHITECTURE

MEMORY
• The maximum memory available is 1 megabyte (2^20 bytes).
• This increase leads to a change in instruction formats and addressing modes.

REGISTERS

Mnemonic   Number   Special use
B          3        Base register; used for addressing
S          4        General working register
T          5        General working register
F          6        Floating-point accumulator (48 bits)

DATA FORMATS
• The data formats are the same as in the standard SIC version.
• In addition, there is a 48-bit floating-point data type with the format:

  s (1 bit) | exponent (11 bits) | fraction (36 bits)

• The fraction f is a value between 0 and 1.
• The exponent e is an unsigned binary number between 0 and 2047.
• The value represented is f * 2^(e-1024).

Instruction Formats:
• The new set of instruction formats from SIC/XE machine architecture is as follows.
• Format 1 (1 byte): contains only operation code (straight from table).
8
OP
• Format 2 (2 bytes): first eight bits for operation code, next four for register 1 and
following four for register 2.
8 4 4
OP R1 R2

• Format 3 (3 bytes): First 6 bits contain operation code, next 6 bits contain flags, last 12
bits contain displacement for the address of the operand. Operation code uses only 6 bits,
thus the second hex digit will be affected by the values of the first two flags (n and i). The
flags, in order, are: n, i, x, b, p, and e. The last flag e indicates the instruction format.

6 1 1 1 1 1 1 12
OP n i x b p e disp

• Format 4 (4 bytes): same as format 3 with an extra 2 hex digits (8 bits) for addresses
that require more than 12 bits to be represented.
6 1 1 1 1 1 1 20
OP n i x b p e Disp

Addressing mode
• Two new relative addressing modes are available for use with instructions assembled
using format 3.

Mode Indication Target address calculation


Base relative b=1, p=0 TA=(B)+disp
Program-counter relative b=0, p=1 TA=(pc)+disp

Instruction Set
• SIC/XE provides all of the instructions that are available on the standard version.
• In addition, SIC/XE provides: instructions to load and store the new registers (LDB,
STB, etc.); floating-point arithmetic operations (ADDF, SUBF, MULF, DIVF); a register
move instruction (RMO); register-to-register arithmetic operations (ADDR, SUBR,
MULR, DIVR); and a supervisor call instruction (SVC), which generates an interrupt
that can be used for communication with the operating system.

Input and Output:


• There are I/O channels that can be used to perform input and output while the CPU is
executing other instructions.
• Allows overlap of computing and I/O, resulting in more efficient system operation.
• The instructions SIO, TIO, and HIO are used to start, test and halt the operation of I/O
channels.

LOADERS AND LINKERS

• Loading brings the object program into memory for execution.
• Relocation modifies the object program so that it can be loaded at an address different
from the location originally specified.
• Linking combines two or more separate object programs and supplies the
information needed to allow references between them.

• A loader is a system program that performs the loading function.


• Many loaders also support relocation and linking.
• Some systems have a linker or linkage editor to perform the linking operation and a
separate loader to handle relocation and loading.
• Here, we often use the term loader in place of loader and/or linker.
BASIC LOADER FUNCTIONS

• The most fundamental function of a loader is bringing an object program into memory
and starting its execution.
Design of an absolute loader
• This loader does not need to perform functions like linking and program relocation.
• All operations are done in a single pass.

• The Header record is checked to verify that the correct program has been presented for
loading.
• As each Text record is read, the object code from the text record is moved to the
indicated address in memory.
• When the End record is encountered, the loader jumps to the specified address to begin
execution of the loaded program.

• The above figure shows a representation of the program from figure (a) after loading,
• The contents of memory locations for which there is no text record are shown as xxx.

Absolute loader algorithm


Begin
read Header record
verify program name and length
read first Text record
while record type is <> ‘E’ do
begin
{if object code is in character form, convert into internal representation}
move object code to specified location in memory
read next object program record
end
jump to address specified in End record
end

• Although the above process is extremely simple, the following points must be
considered.
1. Our object program is stored in hexadecimal format, i.e., each byte of assembled code
is given using its hexadecimal representation in character form.
Ex: the opcode of an STL instruction is represented by the pair of characters “1” and
“4”. On the input medium these occupy two bytes; during loading they are stored as
a single byte with hexadecimal value 14.
2. This method of representing an object program is inefficient in terms of both
space and execution time.
3. Therefore, most machines store object programs in a binary form.
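Point 1 can be illustrated with a short Python sketch of the character-form conversion (the function name is invented for illustration):

```python
def hex_chars_to_bytes(text):
    """Convert object code given as hexadecimal characters into binary bytes.
    The character pair '1','4' (two bytes on the input medium) becomes
    the single byte 0x14 in memory."""
    return bytes(int(text[i:i + 2], 16) for i in range(0, len(text), 2))
```

For instance, the six characters "17202D" load as the three bytes 17, 20 and 2D.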

A simple bootstrap loader


• When a computer is first turned on or restarted, a special type of absolute loader, called a
bootstrap loader, is executed.
• The bootstrap loads the first program to be run by the computer – usually an operating
system.
• The bootstrap itself begins at address 0 and loads the OS starting at address 80
(hexadecimal).
• There is no header record or control information; the object code is loaded into
consecutive bytes of memory.

• The below source code is divided into 3 sections.
1. Header section.
2. Loop.
3. Subroutine - GETC
• The bootstrap reads object code from device F1 and enters into memory starting at
address 80.
• After all the code from device F1 has been entered into memory, the bootstrap executes a
jump to address 80 to begin execution of the program just loaded.

1. Header section:
• The bootstrap itself begins at address 0 in the memory of the machine.
• It loads the operating system starting at address 80 (Hexadecimal).

2. Loop section:
• The bootstrap reads object code from device F1 and enters it into memory starting
at address 80.
• After all the object code from device F1 has been loaded, the bootstrap executes a
jump to address 80 to begin execution of the program that was loaded.
• Register X contains the address of the next memory location to be loaded.

3. Subroutine – GETC:
• GETC reads one character from device F1 and converts it from its ASCII
character code to the value of the corresponding hexadecimal digit.
• Ex: The ASCII code for the character “0” (hexadecimal 30) is converted to the
numeric value 0.
• Likewise, ASCII codes for “1” through “9” (hexadecimal 31 through 39) are
converted to the numeric values 1 through 9 and the codes for “A” through “F”
(hexadecimal 41 through 46) are converted to the values 10 through 15.
• The subroutine GETC jumps to address 80 when an end-of-file (hexadecimal 04)
is read from device F1.

BOOT    START  0       BOOTSTRAP LOADER FOR SIC/XE
.
.
LOOP    CLEAR  A       CLEAR REGISTER A TO ZERO
        LDX    #128    INITIALIZE REGISTER X TO HEX 80
        JSUB   GETC    READ HEX DIGIT FROM PROGRAM BEING LOADED
        RMO    A,S     SAVE IN REGISTER S
        SHIFTL S,4     MOVE TO HIGH-ORDER 4 BITS OF BYTE
        JSUB   GETC    GET NEXT HEX DIGIT
        ADDR   S,A     COMBINE DIGITS TO FORM ONE BYTE
        STCH   0,X     STORE AT ADDRESS IN REGISTER X
        TIXR   X,X     ADD 1 TO MEMORY ADDRESS BEING LOADED
        J      LOOP    LOOP UNTIL END OF INPUT IS REACHED
GETC    TD     INPUT   TEST INPUT DEVICE
        JEQ    GETC    LOOP UNTIL READY
        RD     INPUT   READ CHARACTER
        COMP   #4      IF CHARACTER IS HEX 04 (END OF FILE),
        JEQ    80      JUMP TO START OF PROGRAM JUST LOADED
        COMP   #48     COMPARE TO HEX 30 (CHARACTER '0')
        JLT    GETC    SKIP CHARACTERS LESS THAN '0'
        SUB    #48     SUBTRACT HEX 30 FROM ASCII CODE
        COMP   #10     IF RESULT IS LESS THAN 10, CONVERSION IS
        JLT    RETURN  COMPLETE. OTHERWISE, SUBTRACT 7 MORE
        SUB    #7      (FOR HEX DIGITS 'A' THROUGH 'F')
RETURN  RSUB           RETURN TO CALLER
INPUT   BYTE   X'F1'   CODE FOR INPUT DEVICE
        END    LOOP
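The ASCII-to-hex-digit conversion performed by GETC can be mirrored in Python (a sketch; the function name is invented):

```python
def getc_value(ch):
    """Convert an ASCII character to the value of the hex digit it represents,
    following the GETC logic: subtract hex 30; if the result is 10 or more
    (letters 'A'-'F'), subtract 7 more. Returns None at end of file."""
    code = ord(ch)
    if code == 0x04:          # end-of-file byte
        return None
    value = code - 0x30       # '0'..'9' (hex 30..39) -> 0..9
    if value >= 10:
        value -= 7            # 'A' (hex 41) -> 10 ... 'F' (hex 46) -> 15
    return value
```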

MACHINE DEPENDENT LOADER FEATURES
• In this section we consider the design and implementation of a more complex loader that
is used on a SIC/XE version.
• This loader provides for program relocation and linking and also for the simple loading
function.
RELOCATION
• The need for program relocation is an indirect consequence of the change to larger and
more powerful computers.
• The way relocation is implemented in a loader is also dependent upon machine
characteristics.
• Loaders that allow for program relocation are called relocating loaders or relative loaders.
• The two methods for specifying relocation are:
1. Relocation by modification records.
2. Relocation by bit mask
Relocation by modification records
• A modification record is used to describe each part of the object that must be changed
when the program is relocated.
Modification record
col 1: M
col 2-7: Starting address of the field to be modified, relative to the beginning of the
control section (hexadecimal).
col 8-9: Length of the field to be modified, in half bytes (hexadecimal)
col 10: Modification flag (+/-)
col 11-17: External symbol whose value is to be added to or subtracted from the indicated
field.
• The following SIC/XE program is used for specifying relocation.
Line loc Source Statement object code
5 0000 COPY START 0
10 0000 FIRST STL RETADR 17202D
. . .
. . .
15 0006 CLOOP +JSUB RDREC 4B101036
. . .
. . .
35 0013 +JSUB WRREC 4B10105D
. . .
. . .
65 0026 +JSUB WRREC 4B10105D
. . .
. . .
115 SUBROUTINE TO READ RECORD INTO BUFFER
. . .
125 1036 RDREC CLEAR X B410
. . .
. . .
200 SUBROUTINE TO WRITE RECORD FROM BUFFER
. . .
. . .
210 105D WRREC CLEAR X B410
• Most of the instruction in the above program use relative or immediate addressing.
• The instruction on lines 15, 35, 65 contains actual addresses (instructions are extended
format) whose values are affected by relocation.
• The following is an object program corresponding to the above source program.

• There is one Modification record for each instruction that must be changed during
relocation (3 Modification records, for the instructions on lines 15, 35 and 65).
• Each Modification record specifies the starting address and length of the field whose
value is to be altered.
• In the above example, all modifications add the value of the symbol COPY, which
represents the starting address of the program.
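Applying one Modification record can be sketched as follows (a simplified illustration; the function name is invented, and the field is assumed to be right-justified in the bytes it spans, as in the record format described above):

```python
def apply_modification(memory, start, half_bytes, sign, sym_value):
    """Apply one Modification record: add or subtract sym_value in the
    field of `half_bytes` hex digits starting at byte address `start`.
    Bits outside the field (e.g. the flag nibble) are preserved."""
    nbytes = (half_bytes + 1) // 2            # bytes spanned by the field
    word = int.from_bytes(memory[start:start + nbytes], 'big')
    mask = (1 << (4 * half_bytes)) - 1
    keep = word & ~mask                       # bits that are not modified
    field = word & mask
    field = field + sym_value if sign == '+' else field - sym_value
    memory[start:start + nbytes] = (keep | (field & mask)).to_bytes(nbytes, 'big')
```

For the +JSUB RDREC instruction (object code 4B101036) with a load address of 4000, modifying the 5-half-byte address field yields 4B105036.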
Drawbacks:

• The standard SIC machine does not use relative addressing; it relies mainly on direct
addressing.
• When such a program is relocated, most of its instructions must be modified.
• This requires a large number of Modification records, making the relocatable object
program larger than it needs to be.
Relocation by bitmask

• The relocation-by-bitmask technique is mainly used on machines that primarily use
direct addressing and have a fixed instruction format.
• The standard SIC program is used for this method.
• The below figure shows the object program with relocation by bitmask.

• Here, there are no Modification records.
• Each Text record contains a relocation bit associated with each word of object code.
• All SIC instructions occupy one word, so there is one relocation bit for each possible
instruction.
• The relocation bits are gathered together into a bit mask.
• In the above figure the mask is represented in character form as three hexadecimal
digits (underlined).
• A bit value of 0 indicates that no modification is necessary.
• A bit value of 1 indicates that the program’s starting address is to be added to the word
when the program is relocated.
• In the above example, the bit mask FFC in the first Text record specifies that all 10
words of object code are to be modified during relocation.
• The mask E00 in the second Text record specifies that the first three words are to be
modified.
FFC – 1111 1111 1100 – first 10 words modified
E00 – 1110 0000 0000 – first 3 words modified
• The other Text records follow the same pattern.
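The bit-mask interpretation described above can be sketched in Python (an illustration; the word values below are hypothetical):

```python
def relocate(words, mask_hex, prog_start):
    """Relocate one Text record's words using a relocation bit mask.
    Bit i of the mask (most significant bit first) covers word i;
    a 1 bit means 'add the program starting address to this word'."""
    mask = int(mask_hex, 16)
    nbits = 4 * len(mask_hex)
    return [(w + prog_start) & 0xFFFFFF if (mask >> (nbits - 1 - i)) & 1 else w
            for i, w in enumerate(words)]
```

With mask "E00", only the first three words of the record have the starting address added.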

PROGRAM LINKING

• In this section we are going to see complex examples of external references between
programs and examine the relationship between relocation and linking

• Consider the following 3 separate program each consists of a single control section.

Loc Source statement Object code
0000 PROGA START 0
EXTDEF LISTA, ENDA
EXTREF LISTB, ENDB, LISTC,ENDC
.
.
0020 REF1 LDA LISTA 03201D
0023 REF2 +LDT LISTB+4 77100004
0027 REF3 LDX #ENDA-LISTA
.
.
0040 LISTA EQU *
.
.
0054 ENDA EQU *
END REF1.

Loc Source statement Object code
0000 PROGB START 0
EXTDEF LISTB, ENDB
EXTREF LISTA, ENDA, LISTC,ENDC
.
.
0020 REF1 +LDA LISTA 03100000
0023 REF2 LDT LISTB+4 772027
0027 REF3 LDX #ENDA-LISTA 05100000
.
.
0060 LISTB EQU *
.
.
0070 ENDB EQU *
END REF2.

Loc Source statement Object code


0000 PROGC START 0
EXTDEF LISTC, ENDC
EXTREF LISTA, ENDA, LISTB,ENDB
.
.
0020 REF1 +LDA LISTA 03100000
0023 REF2 +LDT LISTB+4 77100004
0027 REF3 LDX #ENDA-LISTA 05100000
.
.
0030 LISTC EQU *
.
.
0042 ENDC EQU *
END REF3.

• Each program contains a list of items: LISTA, LISTB and LISTC.

• The ends of the lists are marked by the labels ENDA, ENDB and ENDC.
• REF1, REF2 and REF3 illustrate how references are handled during relocation and
linking.
REFERENCES REF1, REF2 AND REF3:

1. Consider first REF1:

• In the first program (PROGA), REF1 is simply a reference to a label within the
program; no modification for relocation or linking is necessary.
• In PROGB, REF1 refers to an external symbol. Here an extended format
instruction is used (+ sign), and a Modification record for linking is necessary.
• In PROGC, REF1 is handled in the same way as in PROGB.

2. Consider REF2 and REF3:
• In PROGA, REF2 and REF3 refer to external symbols, so modification and
linking are necessary.
• In PROGB, REF2 is a local reference (LISTB is defined within PROGB), so no
modification or linking is needed.
• REF3 is an immediate operand whose value is the difference between ENDA
and LISTA; in PROGB and PROGC both of these symbols are external, so
Modification records are required for each of them.

Algorithm and data structure for a linking loader

• Linking loader usually makes two passes over its input.


Pass 1: Assign addresses to all external symbols
Pass 2: Perform the actual loading, relocation, and linking
Data structure
• ESTAB (External Symbol Table)
• Two Variables
o PROGADDR - Program Load Address
o CSADDR – Control Section Address
ESTAB:
1. Used to store the name and address of each external symbol in the set of control
sections being loaded.
2. The table also indicates in which control section each symbol is defined.

Control section   Symbol name   Address   Length
PROGA                           4000      0063
                  LISTA         4040
                  ENDA          4054
PROGB                           4063      007F
                  LISTB         40C3
                  ENDB          40D3
PROGC                           40E2      0051
                  LISTC         4112
                  ENDC          4124

PROGADDR:
• It is the beginning address in memory where the linked program is to be loaded.
• Its value is supplied to the loader by the operating system.
CSADDR:
• It contains starting address assigned to the control section currently being scanned by the
loader.
• This address is added to all relative addresses within the control section to convert them
to actual addresses.

PASS 1 ALGORITHM:

Begin
    get PROGADDR from operating system
    set CSADDR to PROGADDR {for first control section}
    while not end of input do
        begin
            read next input record {Header record for control section}
            set CSLTH to control section length
            search ESTAB for control section name
            if found then
                set error flag {duplicate external symbol}
            else
                enter control section name into ESTAB with value CSADDR
            while record type != 'E' do
                begin
                    read next input record
                    if record type = 'D' then
                        for each symbol in the record do
                            begin
                                search ESTAB for symbol name
                                if found then
                                    set error flag {duplicate external symbol}
                                else
                                    enter symbol into ESTAB with value
                                        (CSADDR + indicated address)
                            end {for}
                end {while != 'E'}
            add CSLTH to CSADDR {starting address for next control section}
        end {while not EOF}
end {Pass 1}

• During Pass 1 the loader uses only the Header and Define record types in the control
sections.
• The beginning load address (PROGADDR) becomes the starting address (CSADDR)
for the first control section in the input sequence.
• The control section name from the Header record and all external symbols from the
Define records are entered into ESTAB.
• When the End record is read, the control section length CSLTH is added to CSADDR.
• This calculation gives the starting address for the next control section.
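Pass 1 can be sketched in Python using the PROGA/PROGB/PROGC example (a simplified model: each control section is represented here as a name, a length, and its Define-record symbols with relative addresses; this representation is invented for illustration):

```python
def pass1(control_sections, progaddr):
    """Build ESTAB: assign an address to each control section name and
    each external symbol, flagging duplicates as errors."""
    estab = {}
    csaddr = progaddr
    for name, length, defines in control_sections:
        if name in estab:
            raise ValueError("duplicate external symbol: " + name)
        estab[name] = csaddr
        for sym, rel in defines:           # Define-record symbols
            if sym in estab:
                raise ValueError("duplicate external symbol: " + sym)
            estab[sym] = csaddr + rel      # relative -> loaded address
        csaddr += length                   # start of next control section
    return estab
```

With PROGADDR = 4000 and the lengths shown earlier, this reproduces the ESTAB table above (PROGB at 4063, LISTC at 4112, and so on).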

PASS 2 ALGORITHM:

Begin
    set CSADDR to PROGADDR
    set EXECADDR to PROGADDR
    while not end of input do
        begin
            read next input record {Header record}
            set CSLTH to control section length
            while record type != 'E' do
                begin
                    read next input record
                    if record type = 'T' then
                        begin
                            {if object code is in character form, convert
                             into internal representation}
                            move object code from record to location
                                (CSADDR + specified address)
                        end {if 'T'}
                    else if record type = 'M' then
                        begin
                            search ESTAB for modifying symbol name
                            if found then
                                add or subtract symbol value at location
                                    (CSADDR + specified address)
                            else
                                set error flag {undefined external symbol}
                        end {if 'M'}
                end {while != 'E'}
            if an address is specified {in End record} then
                set EXECADDR to (CSADDR + specified address)
            add CSLTH to CSADDR
        end {while not EOF}
    jump to location given by EXECADDR {to start execution of loaded program}
end {Pass 2}

• Pass 2 of the loader performs the actual loading, relocation and linking of the program.
• As each Text record is read, the object code is moved to the specified address.
• When a Modification record is encountered, the symbol whose value is to be used for
modification is looked up in ESTAB.
• This value is then added to or subtracted from the indicated location in memory.
• The last step performed by the loader is the transfer of control to the loaded program
to begin execution.
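A simplified Pass 2 sketch ties these steps together (the record representation below is invented for illustration, and modification fields are assumed to be whole 3-byte words):

```python
def pass2(memory, sections, estab, progaddr):
    """Load Text records at CSADDR + address, apply Modification records
    using ESTAB, and return the execution start address."""
    csaddr = execaddr = progaddr
    for length, texts, mods, end_addr in sections:
        for addr, code in texts:                       # 'T' records
            memory[csaddr + addr:csaddr + addr + len(code)] = code
        for addr, sign, sym in mods:                   # 'M' records
            loc = csaddr + addr
            word = int.from_bytes(memory[loc:loc + 3], 'big')
            delta = estab[sym]      # an undefined symbol would be an error
            word = (word + delta if sign == '+' else word - delta) & 0xFFFFFF
            memory[loc:loc + 3] = word.to_bytes(3, 'big')
        if end_addr is not None:                       # 'E' record address
            execaddr = csaddr + end_addr
        csaddr += length                               # next control section
    return execaddr
```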

MACHINE INDEPENDENT LOADER FEATURES
AUTOMATIC LIBRARY SEARCH
• An automatic library search process is used for handling external references.
• This feature allows a programmer to use standard subroutines without explicitly
including them in the program to be loaded.
• The subroutines are automatically retrieved from a library as they are needed during
linking.
Automatic library call
• The subroutines called by the loaded program are automatically taken from the library
and linked with the main program and loaded.
• The programmer does not need to take any action beyond mentioning the subroutine
names as external references in the source program.
• This feature is referred to as automatic library call.
Handling external references
• Linking loaders that support automatic library search must take care of undefined
external symbols that are referred.
• In the following ways, loaders can handle external references.
1. Enter the symbols from refer record into the symbol table (ESTAB) unless these
symbols are already present.
2. Undefined symbols are marked and when the definition is encountered, these symbols
are filled.
3. At the end of pass1, the symbols in ESTAB that remain undefined indicates
unresolved external references.
4. The loader searches the libraries that contain the definition of these unresolved
symbols and processes the subroutines found by this search.
5. The subroutines taken from a library in this way may themselves contain external
references, so it is necessary to repeat the library search process until all references
are resolved.
6. After the library search the remaining unresolved external references are treated as
errors.
Search of libraries using file structure
• The libraries to be searched by the loader mainly contain assembled or compiled version
of the subroutines (ie object programs).
1. A special file structure is used for the library search.
2. This structure contains a directory.
3. The directory contains the name of each subroutine and a pointer to the subroutine’s
address within the file.
4. Searching a library with this structure involves a search of the directory, followed by
reading the object programs (subroutines) found.
LOADER OPTIONS
• Many loaders have a special command language that is used to specify options.
• The following are some of the loader options that can be selected at the time of loading &
linking.

SPECIFYING ALTERNATIVE SOURCES OF INPUTS
• This loader option allows the selection of alternative sources of input.
Ex: INCLUDE program-name (library-name)
• The above command directs the loader to read the given object program from a library
and treat it as if it were part of the primary loader input.
CHANGING OR DELETING EXTERNAL REFERENCES
• Using options, it is also possible to change external references within the programs
being loaded or linked.
Ex: CHANGE name1, name2
• The above command changes the external symbol name1 to name2.
• Some options allow the user to delete external symbols or entire control sections.
Ex: DELETE CS-name
• The above command deletes the control section CS-name from the loaded program.
Ex: INCLUDE READ (UTLIB)
INCLUDE WRITE (UTLIB)
DELETE RDREC, WRREC
CHANGE RDREC, READ
CHANGE WRREC, WRITE
1. The above commands direct the loader to include the control sections READ and WRITE
from the library UTLIB.
2. They also delete the control sections RDREC and WRREC from the load.
3. The first CHANGE command causes all external references to the symbol RDREC
to be changed to refer to the symbol READ.
4. Similarly, references to WRREC are changed to refer to WRITE.
CONTROLLING AUTOMATIC PROCESSING OF EXTERNAL REFERENCES
• A common loader option controls the automatic inclusion of library subroutines to
resolve external references.
• Most loaders allow the user to specify alternative libraries to be searched.
Ex: LIBRARY MYLIB
• The above mentioned user-defined libraries are normally searched before the standard
system libraries.
• Symbols that are to be left unresolved by the library search can be specified in the
following way.
Ex: NOCALL STDDEV, CORREL
• The above command instructs the loader that these external references are to remain
unresolved.
LOADER DESIGN OPTIONS
• Two alternative design options for linking loaders are:
1. Linkage editors
2. Dynamic linking
LINKAGE EDITORS
• It is found on many computing systems, instead of or in addition to the linking loader.
• It performs linking prior to load time.
• A linkage editor produces a linked version of the program which is written to a file or
library instead of being immediately loaded into memory.
• A linked program is also called a load module or an executable image.
• When the user is ready to run the linked program, a simple relocating loader can be used,
to load the program into memory.
DIFFERENCE BETWEEN LINKAGE EDITOR & A LINKING LOADER
Linkage Editor:
1. Performs linking prior to load time.
2. The linked program is written to a file or library instead of being immediately
loaded into memory.
3. Loading is accomplished in one pass, with no external symbol table required.
4. Resolving external references and library searching are performed only once, even
if the program is executed many times without being reassembled.

Linking Loader:
1. Performs all linking and relocation at load time.
2. Loads the linked program directly into memory for execution.
3. An external symbol table is required.
4. Searches libraries and resolves external references every time the program is
executed.
LOADED PROGRAM
• The linked program produced by the linkage editor is processed by a relocating loader.
• All external references are resolved and relocation is indicated by some methods such as
modification records or bit mask.
• Even though all linking has been performed, information about external references is
often retained in the linked program.
• This allows relinking of the program to replace control sections, modify external
references etc.,
FUNCTIONS OF LINKAGE EDITORS
• Linkage editor can perform many useful functions using editor commands. They are:
• Assume that a program (PLANNER) uses many subroutines.
• One of the subroutines (PROJECT) has to be changed to a new version.
• After the new version of PROJECT is assembled or compiled, the linkage editor is used
to replace this subroutine in the program (PLANNER).
• The following linkage editor commands are used to perform the above work.
INCLUDE PLANNER (PROGLIB)
DELETE PROJECT {Delete from existing planner}
INCLUDE PROJECT (NEWLIB) {Include new version}
REPLACE PLANNER (PROGLIB)
• Linkage editor is also used to build packages of subroutines or other control sections that
are generally used.
• It combines the related subroutines into a package using editor commands.
Ex:
INCLUDE BLOCK (FTNLIB)
INCLUDE DEBLOCK (FTNLIB)
INCLUDE ENCODE (FTNLIB)
INCLUDE DECODE (FTNLIB)
.
.
SAVE FTN10 (SUBLIB)
• In the above command sequence, all the subroutines are linked into a module named
FTN10.
• This module is available in the directory SUBLIB.
• A search of SUBLIB will then find FTN10 instead of the separate routines.
• This method saves search time.
DYNAMIC LINKING
• Sometimes the loading and linking of a subroutine does not occur until the subroutine is
first called.
• Here, the linking function is postponed until execution time.
• This technique is called dynamic linking, dynamic loading, or load on call.
• Loading & linking of a subroutine using dynamic linking
• The following figure shows a method in which subroutines that are dynamically loaded
must be called through an operating system service request.
• In the above figure, the user program makes a load-and-call service request to the
operating system.
• Then the OS checks its internal tables to determine whether the subroutine is loaded or
not.
• If the subroutine is not loaded, then it is loaded from the system libraries as shown in the
below figure.
• Control is then passed from the dynamic loader to the subroutine being called.

• When the called subroutine completes its processing, it returns to its caller.
• The OS then returns control to the user program that made the request.

• After the subroutine is completed, the memory that was allocated for it may be retained
for later use, as long as the storage space is not needed for other processing.
• If the subroutine is still in memory, a second call to it does not require another load
operation.

• Here control is simply passed from the dynamic loader to the called routine.

USES OF DYNAMIC LINKING


1. Dynamic linking is used to allow several executing programs to share one copy of a
subroutine or library.
2. In an object-oriented system, dynamic linking is often used for references to software
objects.
3. Dynamic linking is used to load the subroutines only when they are needed.
• This avoids the unnecessary loading of some subroutines.
• So, dynamic linking saves time and also memory space.
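The load-on-call behaviour described above can be sketched in a few lines. This is a hypothetical illustration: LOADED stands in for the OS's internal table of loaded subroutines, and _load_from_library for reading the routine from a system library.

```python
LOADED = {}             # the OS's internal table of loaded subroutines
LOAD_COUNT = {"n": 0}   # counts how many times a library load was needed

def _load_from_library(name):
    # stand-in for reading the object program from a system library
    LOAD_COUNT["n"] += 1
    table = {"STDDEV": lambda xs:
             (sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)) ** 0.5}
    return table[name]

def load_and_call(name, *args):
    if name not in LOADED:            # not yet in memory: load it now
        LOADED[name] = _load_from_library(name)
    return LOADED[name](*args)        # pass control to the subroutine

r1 = load_and_call("STDDEV", [2, 4, 4, 4, 5, 5, 7, 9])
r2 = load_and_call("STDDEV", [2, 4, 4, 4, 5, 5, 7, 9])  # no second load
print(r1, LOAD_COUNT["n"])   # the routine was loaded exactly once
```

The second call finds STDDEV already in the table, so control passes straight to the routine, mirroring the "second call request" case above.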
BOOTSTRAP LOADER

• How is the loader itself loaded into memory? When the computer is started, with no
program in memory, a program present in ROM (at an absolute address) can be executed.
This may be the OS itself, or a bootstrap loader, which in turn loads the OS and prepares
it for execution.
• The first record (or records) is generally referred to as a bootstrap loader, which causes
the OS to be loaded.
• Such a loader is added to the beginning of all object programs that are to be loaded into
an empty and idle system.
• On some computers, an absolute program is permanently resident in a read-only memory.
• When some hardware signal occurs, the machine begins to execute this ROM program.
• On some computers, the program is executed directly in ROM; on others, the program is
copied from ROM to main memory and executed there.
• Some machines do not have such read-only storage.
• If the loading process requires more instructions than can be read in a single record, this
first record causes the reading of others, and these in turn can cause the reading of still
more records-hence the term bootstrap.
• The first record is generally referred to as a bootstrap loader.

UNIT II
INTRODUCTION TO COMPILERS

TRANSLATOR
• Translator is a program that takes as input a program in one programming language &
produces as output a program in another language
Need for Translator
• Machine language is too complex to be learned easily, so a translator becomes vital.
COMPILER
• It is a translator, which takes the program in high level language wholly & converts it to
machine language.
Steps involved in executing a program written in a high level programming language
1. Source program is compiled
2. Translated into object program
3. Resulting object program is loaded into memory & executed.
INTERPRETER
• It is a translator program, which converts the high level language program to its machine
equivalent line by line.
• The execution of the interpreted program is very slow.
DIFFERENT PHASES OF A COMPILER

• The compilation process is a sequence of various phases.


• Each phase takes input from its previous stage, has its own representation of source
program, and feeds its output to the next phase of the compiler.
LEXICAL ANALYSIS
• In lexical analysis, the lexical analyzer or scanner reads the source program and separates
it into tokens.
• It is the first phase of the compiler; the lexical analyzer is also called a scanner.
• The usual tokens are:
1. Keyword: such as DO or IF.
2. Identifiers: such as x or num.
3. Operator symbols: such as <, =, or +.
4. Punctuation symbols: such as parentheses or commas.
• The output of the lexical analysis is a stream of tokens, which is passed to the next
phase, the syntax analyzer or parser.
• The parser asks the lexical analyzer for the next token whenever it needs one.
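A toy scanner in the spirit of the token classes above. The token names and regular expressions are illustrative, not a full language definition.

```python
import re

# one named group per token class; SKIP swallows whitespace
TOKEN_SPEC = [
    ("KEYWORD", r"\b(?:IF|DO)\b"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("NUMBER",  r"\d+"),
    ("OP",      r"[<>=+\-*/]"),
    ("PUNCT",   r"[(),;]"),
    ("SKIP",    r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokens(source):
    """Yield (kind, text) pairs: the token stream handed to the parser."""
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokens("IF x < 10")))
```

A real parser would pull tokens from this generator one at a time, exactly as described above.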
Syntax analysis
• In syntax analysis, the syntax analyzer or parser groups tokens into syntactic structure
• For example, three tokens A, + and B can be grouped to A+B to get a syntactic structure.
• Expression might further be combined to form statements.
• If the token is an identifier the type of the identifier is entered into the symbol table by
the syntax analyzer.
• It checks if the token occur in patterns that are permitted by the specification of the
source language.
• On seeing invalid syntax, the parser detects an error situation.
• For Eg: if the program has an expression
A+/B
• On seeing the “/” the syntax analyzer will detect an error situation.
Output of syntax analyzer is a parse tree
• For Ex: The expression
A:=B+C
• can be represented using the following parse tree:

assignment statement
|-- identifier (A)
|-- :=
`-- expression
    |-- expression -- identifier (B)
    |-- +
    `-- expression -- identifier (C)
Intermediate code generation
• The intermediate code generator transform the parse tree into an intermediate language
representation of source program.
• The preceding parse tree can be converted into the three-address code which follows:
T1=B+C
A=T1
Where T1 is a temporary variable.
Code optimization
• It is designed to improve the intermediate code, so that the final object program runs
faster and takes less space.
• Output of code optimizer is another intermediate code which does the same job as the
previous intermediate code, but with much efficiency.
• Thus the code optimizer would optimize the preceding three-address code to A=B+C.
1. Optimization compiler
• An object program that is frequently executed should be fast and small. An
optimizing compiler attempts to produce a better target program than would
be produced with no optimization.
2. Local optimization
• Local transformation can be applied to a program
• Ex: The statements
If A>B goto L2
Goto L3
L2: ...
can be replaced by
If A<=B goto L3
3. Elimination of common sub-expression
• Common sub-expression may be eliminated from the program
• For Ex: Consider the following sequence of statements:
A=B+C+D
E=B+C+F
which can be evaluated as
T1=B+C
A=T1+D
E=T1+F
4. Loop optimization
• Speed-ups of loops should be considered: computations whose values do not vary
each time the loop is entered can be moved out of the loop to increase the
speed of execution.
• Eg:
for(i=1; i<10; i++)
{
m=1;
.
.
}
can be replaced by
m=1;
for(i=1; i<10; i++)
{
.
.
}
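The elimination of common sub-expressions described in item 3 above can be sketched over simple three-address statements. Everything here (the tuple shape, the function name) is a hypothetical illustration.

```python
def eliminate_cse(code):
    """code: list of (target, left, op, right) statements; returns the
    optimized list with redundant computations removed."""
    value_of = {}   # (left, op, right) -> name already holding that value
    alias = {}      # redundant temporary -> the name it duplicates
    out = []
    for tgt, l, op, r in code:
        l, r = alias.get(l, l), alias.get(r, r)   # rewrite operands first
        key = (l, op, r)
        if key in value_of:
            alias[tgt] = value_of[key]   # drop the duplicate statement
        else:
            value_of[key] = tgt
            out.append((tgt, l, op, r))
    return out

# T1=B+C; A=T1+D; T2=B+C; E=T2+F  (the example above, decomposed)
code = [("t1", "B", "+", "C"), ("A", "t1", "+", "D"),
        ("t2", "B", "+", "C"), ("E", "t2", "+", "F")]
for quad in eliminate_cse(code):
    print(quad)
```

The second B+C is recognized as already computed, so t2 disappears and E is rewritten to use t1, matching the hand-optimized version above.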
Code generation
• Code generator generates the object code.
Function of code generator
1. Selects code
2. Selects registers
Main responsibility of the code generator
• Code generation phase converts the intermediate code into a sequence of target code.
Semantic analysis
• It can be done during the syntax analysis or intermediate code generation or final code
generation phase.
• It analyses whether the statements are meaningful.
Function
• It is used to determine if the type of intermediate result is legal.

COMPILER DESIGN OPTIONS


1. Division into passes
2. Interpreters
3. P-code compilers
4. Compiler-compilers
Division into passes
• Translation of languages may be performed by a one-pass compiler or a multi-pass compiler.
• Ex: In some languages, the declaration of an identifier (variable) may appear after it has
been used in the program.
• Here forward references occur; a forward reference to a data item causes a serious
problem.
Ex: x:=y+z
• In the above example, if the variables are a mixture of REAL & INTEGER types, one or
more conversion operations will be needed.
• Here, the compiler cannot decide what machine instruction to generate for this statement,
unless information about the identifier is available.
• Thus, a language that allows forward references to data items cannot be compiled in one
pass, so need Multi-pass compiler.
Factors used in deciding between one pass & Multi-pass compiler design
1. If the speed of the compilation is important, then one-pass design is performed.
Ex: Some jobs require to spend large amount of time for compilations & the programs are
executed once or twice only.
In such environment, improvement in the speed of compilation can improve system
performance.
2. If programs are executed many times for each compilation, or if they process large
amounts of data, then the speed of execution becomes more important than the speed of
compilation. In such a case, we might prefer a multi-pass compiler design.
3. Multi-pass compilers are also used when the amount of memory, or other system
resources is severely limited.
• The requirements of each pass can be kept smaller if the work of compilation is divided
into several passes.
• If a compiler is divided into several passes, each pass becomes simpler and therefore
easier to understand, write and test.
• Different passes can be assigned to different programmers and can be written and tested
in parallel, which shortens the overall time required for compiler construction.
Interpreters
• It is a translator program which converts the high level language program to its machine
equivalent line by line.
• An interpreter process a source program written in a high-level language, just as a
compiler does.
• The main difference is that interpreters execute a version of the source program directly,
instead of translating it into machine code.
Working of interpreters
1. An interpreter usually performs lexical and syntactic analysis functions like those of a
compiler, and then translates the source program into an internal form.
2. After translating the source program into an internal form, the interpreter executes the
operations specified by the program.
3. During this pass, an interpreter can be viewed as a set of subroutines.
4. The execution of these subroutines is driven by the internal form of the program.
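The two steps above (translate to an internal form, then drive a set of subroutines from it) can be sketched with a deliberately tiny postfix expression language. All names are invented for illustration.

```python
def translate(source):
    """Internal form: a list of (operation, operand) pairs, postfix style."""
    internal = []
    for word in source.split():
        if word.isdigit():
            internal.append(("push", int(word)))
        else:
            internal.append(("apply", word))
    return internal

def execute(internal):
    """Drive the 'subroutines' from the internal form of the program."""
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    stack = []
    for op, arg in internal:
        if op == "push":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[arg](a, b))
    return stack.pop()

print(execute(translate("2 3 + 4 *")))   # (2 + 3) * 4  → 20
```

Note that translate does no machine-code generation at all; execution happens by interpreting the internal form directly, which is exactly the compiler/interpreter distinction above.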

Factors to decide between interpreter and compiler


1. The process of translating a source program into some internal form is simpler & faster
than compiling it into machine code.
2. However, execution of the translated program by an interpreter is much slower than
execution of the machine code produced by a compiler.
3. Thus, an interpreter would not be used if speed of execution is important.
4. If speed of translation is of primary concern and execution of the translated program will
be short, then an interpreter may be preferred.
Advantages of interpreter
1. It provide good debugging facilities
2. Some languages are particularly well suited to use of an interpreter
Ex:
• In languages such as SNOBOL and APL, a large part of the compiled program
would consist of calls to library routines.
• In such cases, an interpreter might be preferred because of its speed of translation.
3. It would be very difficult to compile some languages that use dynamic scoping instead of
usual static scoping. However, dynamic scoping can be easily handled by an interpreter.
p-code compilers
• P-code compilers (also called byte-code compilers) are very similar in concept to
interpreters.
• With a p-code compiler, the program is analyzed and converted into an intermediate form,
which is then executed interpretively.
• With a p-code compiler, however, this intermediate form is the machine language for a
hypothetical computer, often called a pseudo-machine or p-machine.
Translation & execution using p-code compiler
1. The source program is compiled, with the resulting object program being in p-code.
2. This p-code program is then read & executed under the control of a p-code interpreter.
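A hedged sketch of the idea: a "compiler" that emits instructions for a hypothetical stack machine, and a separate p-code interpreter that executes them. The opcodes here are invented for illustration.

```python
PUSH, ADD, MUL, HALT = range(4)   # opcodes of the hypothetical p-machine

def compile_to_pcode():
    # object program for (6 + 4) * 5, as the p-code compiler might emit it
    return [(PUSH, 6), (PUSH, 4), (ADD, None),
            (PUSH, 5), (MUL, None), (HALT, None)]

def p_machine(pcode):
    """The p-code interpreter: one copy of this is written per real machine,
    and the same p-code object program runs unchanged on all of them."""
    stack = []
    for op, arg in pcode:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            stack.append(stack.pop() + stack.pop())
        elif op == MUL:
            stack.append(stack.pop() * stack.pop())
        elif op == HALT:
            return stack.pop()

print(p_machine(compile_to_pcode()))   # → 50
```

The portability advantage discussed next follows directly: only p_machine is machine-specific, while the compiled p-code is not.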
Advantages
1. The main advantage of this approach is portability of software. It is not necessary for the
compiler to generate different code for different computers, because the p-code object
programs can be executed on any machine that has a p-code interpreter.
2. A p-code compiler can be used without modification on a wide variety of systems if a p-
code interpreter is written for each different machine.
3. The p-code object program is often much smaller than a corresponding machine code
program would be. This is particularly useful on machines with limited memory size.
Problem
• The execution of a p-code program may be much slower than the execution of the
equivalent machine code.
• Depending upon the environment however, this may not be a problem.

Solution
1. Many p-code compilers are designed for a single user running on a microcomputer
system. In that case, speed of execution may be relatively insignificant.
2. If execution speed is important, some p-code compilers support the use of machine
language subroutines.
• By rewriting a small number of commonly used routines in machine language, it is often
possible to achieve some improvements in performance.

Compiler-compilers
• A compiler-compilers is a software tool that can be used to help in the task of compiler
construction.
• Such tools are often called compiler generators or translator writing system.
Automated compiler construction using a compiler-compiler
1. The user (ie, the compiler writer ) provides description of the language to be translated.
2. This description may consists of a set of lexical rules for defining tokens & a grammar
for the source language.
3. Some compiler-compilers use this information to generate a scanner & a parser directly.
4. In addition to the description of the source language, the user provides a set of semantic
or code-generation routines.
5. These routines are called by the parser each time it recognizes a language construct
described by the associated rule.
6. But some compiler-compilers can parse larger section of the program before calling
semantic routine.
7. In that case, an internal form of the statements that have been analyzed such as a portion
of the parse tree may be passed to the semantic routine.
8. This latter approach is often used when code optimization is to be performed.
• Compiler-compilers frequently provide special languages, notations, data structure and
other similar facilities that can be used in the writing of semantic routines.
Advantage
1. The main advantage of using a compiler-compiler is the ease of compiler construction
and testing.
2. The object code generated by the compiler may actually be better when a compiler-
compiler is used.
• Because of the automatic construction of scanners and parsers and the special tools
provided for writing semantic routines, the compiler writer is freed from many of the
mechanical details of compiler construction.
• The writer can therefore focus more attention on good code generation & optimization.
MACHINE INDEPENDENT COMPILER FEATURES
The four machine-independent compiler features are,
1. Structured variables
2. Storage allocation
3. Block-structured languages
4. Machine independent code optimization
Structured variables
• Compiled programs often use structured variables such as arrays, records, strings, and
sets.
• We are primarily concerned with the allocation of storage for such variables and with the
generation of code to reference them.
Storage allocation for variables
Single dimensional array declaration
Ex: A: ARRAY[1……10] of INTEGER // Pascal array declaration
• If each INTEGER variable occupies one word of memory then we must clearly allocate
ten words to store the above array.
• If an array is declared as,
B: ARRAY [l….u] of INTEGER
• Then we must allocate u-l+1 words of storage for the array
Multi dimensional array declaration
• Allocation for a multi-dimensional array is not much more difficult
Ex: B: ARRAY [0..3,1..6] OF INTEGER //4 rows , 6 columns
• Here the first subscript can take four different values (0-3) and the second subscript can
take six different values (1-6).
• We need to allocate a total of 4*6 = 24 words to store the array.
• If the declaration is,
ARRAY[l1…..u1, l2…..u2] of INTEGER
• Then the number of words to be allocated is given by,
(u1-l1+1)*(u2-l2+1)
• For an array with n dimensions, the number of words required is the product of n such terms.
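The word-count formula above can be written directly as code. This is a small illustrative helper, not taken from any compiler.

```python
from functools import reduce

def words_needed(bounds):
    """bounds: list of (lower, upper) pairs, one per dimension.
    Returns the product of (u - l + 1) over all dimensions."""
    return reduce(lambda n, b: n * (b[1] - b[0] + 1), bounds, 1)

print(words_needed([(1, 10)]))         # A: ARRAY[1..10]       → 10
print(words_needed([(0, 3), (1, 6)]))  # B: ARRAY[0..3, 1..6]  → 24
```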
Methods for storing arrays
• Two methods for storing arrays are,
1. Row-major order
All array elements that have the same value of the 1st subscript are stored in
contiguous locations, this is called row-major order.
Storage of B: ARRAY[0…3, 1….6] IN ROW MAJOR ORDER
0,1 0,2 0,3 0,4 0,5 0,6 1,1 1,2 1,3 1,4 1,5 1,6 2,1 2,2 2,3 2,4 2,5 2,6 3,1 3,2 3,3 3,4 3,5 3,6

2. Column major order


All elements that have the same value of the second subscript are stored together, this
is called column-major order
Storage of C: ARRAY[0…2, 1….6] IN COLUMN MAJOR ORDER
0,1 1,1 2,1 0,2 1,2 2,2, 0,3 1,3 2,3 0,4 1,4 2,4 0,5 1,5 2,5 0,6 1,6 2,6

In row-major order, the rightmost subscript varies most rapidly; in column-major
order, the leftmost subscript varies most rapidly.
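The two layouts give different element offsets. The following sketch computes the offset (in words) of element B[i,j] under each order, for B: ARRAY[l1..u1, l2..u2]; the function names are invented for illustration.

```python
def row_major_offset(i, j, l1, u1, l2, u2):
    # rightmost subscript varies fastest: whole rows come first
    cols = u2 - l2 + 1
    return (i - l1) * cols + (j - l2)

def col_major_offset(i, j, l1, u1, l2, u2):
    # leftmost subscript varies fastest: whole columns come first
    rows = u1 - l1 + 1
    return (j - l2) * rows + (i - l1)

# B: ARRAY[0..3, 1..6] -- element B[1,1] begins the second row
print(row_major_offset(1, 1, 0, 3, 1, 6))   # → 6
print(col_major_offset(1, 1, 0, 3, 1, 6))   # → 1
```

This matches the storage pictures above: in row-major order B[1,1] follows the six elements of row 0, while in column-major order it immediately follows B[0,1].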
Referring array element
• To refer to an array element, we must calculate the address of the referenced element
relative to the base address of the array.
Ex: One-dimensional array
A: ARRAY [1…10] OF INTEGER
1. Suppose a statement refers to array element A[6]
2. There are five array elements preceding A[6]
3. On a SIC machine, each such element would occupy 3 bytes
4. Thus the address of A[6] relative to the starting address of the array is given by 5*3=15
Code generation for array references
1. If an array reference involves only constant subscripts Ex: A[6], the relative address
calculation can be performed during compilation
2. If the subscripts involve variables Ex: A[i], however the compiler must generate object
code to perform this calculation during execution
Ex: A: ARRAY [l…u] OF INTEGER //array declaration
1. Suppose each array element occupies w bytes of storage.
2. If the value of the subscript is s, then the relative address of the referenced array element
A[s] is given by
w*(s-l)
3. The generation of code to perform such a calculation is illustrated in following figure
Code generation for Array references
A: ARRAY [1….10] OF INTEGERS
.
.
A[J]:=5

1) -   J, #1, i1
2) *   i1, #3, i2
3) :=  #5, A[i2]
4. The notation A[i2] in quadruple 3 specifies that the generated machine code should refer
to A using indexed addressing, after having placed the value of i2 in the index register
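The quadruples above implement the formula w*(s-l). A sketch in ordinary code; the memory dictionary and function name are illustrative stand-ins for the generated machine code.

```python
def store(memory, base, j, value, w=3, l=1):
    """Mirror the quadruples for A[J] := 5, with A: ARRAY[1..10]
    and w = 3 bytes per element (as on SIC)."""
    i1 = j - l                 # quadruple 1: subtract the lower bound
    i2 = i1 * w                # quadruple 2: scale by the element width
    memory[base + i2] = value  # quadruple 3: indexed store through i2
    return i2

mem = {}
offset = store(mem, base=0x100, j=6, value=5)
print(offset, mem[0x100 + offset])   # → 15 5
```

For J = 6 the computed offset is 5*3 = 15, agreeing with the constant-subscript calculation for A[6] shown earlier.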
Storage allocation
• There are two types of storage allocation
1. Static allocation
2. Dynamic allocation
Static allocation
• Static allocation of memory is carried out during compile time.
• It is often used for languages that do not allow the recursive use of procedures or
subroutines and do not provide for the dynamic allocation of storage during execution.
Problem
• If procedures may be called recursively, static allocation cannot be used.
Ex:
1. In the following figure, the program MAIN has been called by the OS or the loader
(invocation 1).

2. The first action taken by MAIN is to store the return address from register at a fixed
location RETADR within MAIN
1. In the above figure, MAIN has called the procedure SUB(invocation 2)
2. The return address for this call has been stored at a fixed location within SUB.

Recursive invocation of a procedure using static storage allocation


1. In the above figure, SUB calls itself recursively
2. Here a problem occurs
3. SUB stores the return address for invocation 3 into RETADR from register 1.
4. This destroys the return address for invocation 2.
5. As a result, there is no possibility of a correct return to MAIN.
Solution for problem
• For recursive call, use dynamic storage allocation technique (ie., automatic storage
allocation)
Dynamic storage allocation
• When a recursive call is made, it is necessary to preserve the previous values of any
variables used by SUB, including parameters, temporaries, return addresses, register save
areas etc.,
• This is accomplished with a dynamic storage allocation technique.
1. Automatic storage allocation
• This is one type of dynamic storage allocation that automatically allocates storage
for variables. It is not controlled by the programmer. It is used when procedures
are called recursively.
1. In this method, each procedure call creates an Activation record.
2. Activation record contains storage for all the variables used by the procedure
3. If the procedure is called recursively, another activation record is created.
4. Each activation record is not deleted until a return has been made from the
corresponding invocation.
5. The starting address for the current activation record is usually contained in a
base register (Ex: B) which is used by the procedure to address its variable.
6. Activation record are typically allocated on a stack, with the current record at
the top of the stack.
Ex: Invocation of a procedure using automatic storage allocation

1. In the above figure, procedure MAIN has been called, & its activation record appears on
the stack.
2. The base register B has been set to indicate the starting address of this current activation
record.
3. The first word in an activation record contains a pointer PREV, which points to the
previous record on the stack.
4. Here this record is the first, so the pointer value is null.
5. The second word of the activation record contains a pointer NEXT, which will be the
starting address for the next activation record created.
6. The third word contains the return address for this invocation of the procedure, and the
remaining words contain the values of variable used by the programmer.
Invocation of a procedure using automatic storage allocation

1. In the above figure, MAIN has called the procedure SUB.


2. A new activation records has been created on the top of the stack, with register B set to
indicate the new current record.

Recursive invocation of a procedure using automatic storage allocation

1. In the above figure, SUB has called itself recursively.


2. Another activation record has been created for this current invocation of SUB.
3. Note that the return addresses and variable value for the two invocations of SUB are kept
separately by this process.
What happens when procedure returns to its caller?
1. When a procedure returns to its caller, the current activation record ( which corresponds
to the most recent invocation) is deleted.
2. The pointer PREV in the deleted record is used to reestablish the previous activation
record as the current one and execution continues.
Ex: SUB returns from a recursive call

1. The above figure shows the stack as it appears after SUB returns from the recursive
call.
2. Register B has been reset to point to the activation record for the previous invocation of
SUB.
Rules for automatic storage allocation
1. When automatic allocation is used, the compiler must generate code for references to
variables using some sort of relative addressing.
2. The compiler must also generate additional code to manage the activation records
themselves.
• At the beginning of each procedure there must be code to create a new activation record
linking it to the previous one and setting the appropriate pointer. This code is often called
a prologue for the procedure.
• At the end of the procedure, there must be code to delete the current activation record,
resetting pointers as needed. This code is often called an epilogue.
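The prologue/epilogue behaviour can be sketched as follows: each call pushes an activation record with a PREV link and a return address, each return pops it, so recursive invocations keep separate copies of their variables. All structure and function names here are invented for illustration.

```python
stack = []   # activation records, current record on top

def prologue(retaddr):
    """Create a new activation record linked to the previous one."""
    record = {"PREV": stack[-1] if stack else None,
              "RETADR": retaddr,
              "vars": {}}
    stack.append(record)
    return record

def epilogue():
    """Delete the current activation record and recover the return address."""
    record = stack.pop()
    return record["RETADR"]

def sub(n):                       # a recursive procedure
    rec = prologue(retaddr="caller")
    rec["vars"]["n"] = n          # this invocation's own copy of n
    result = n if n == 0 else n + sub(n - 1)
    epilogue()
    return result

print(sub(3), len(stack))   # → 6 0  (stack empty again after all returns)
```

Because each invocation writes its return information into its own record rather than a fixed location, the failure shown earlier for static allocation cannot occur.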
Other types of dynamic storage allocation
1. In FORTRAN 90, the statement
ALLOCATE (MATRIX (ROWS, COLUMNS))
Allocates storage for a dynamic array, MATRIX with the specified dimensions. The
statement,
DEALLOCATE (MATRIX)
Releases the storage assigned to matrix by previous ALLOCATE
2. In PASCAL, the statement
NEW (P)
Allocates storage for a variable and sets the pointer P to indicate the variables just
created. The statement
DISPOSE (P)
Releases the storage that was previously assigned to the variable pointed to by P.
3. In C, the function call
malloc(size)
allocates a storage block of the specified size and returns a pointer to it. The function call
free(p)
frees the storage indicated by the pointer p, which was returned by a previous malloc.
Block-structured languages
• In some languages, a program can be divided into units called blocks.
• A block is a portion of a program that has the ability to declare its own identifiers.

1. The above figure shows the outline of a block-structured program in a Pascal-like
language.
2. Each procedure forms a block.
3. In a block-structured program, blocks may be nested within other blocks. In the above
example, procedures B & D are nested within procedure A, and procedure C is nested
within procedure B.
4. Each block may contain a declaration of variables.
5. An inner block may also refer to variables that are defined in any outer block, provided
the same names are not redefined in the inner block.
Compiling & execution of block-structured programs
1. In compiling a program written in a block-structured language, it is convenient to number
the blocks as shown in the above figure.
2. The compiler constructs a table that describes the block structure, as shown below.
3. The table contains the details of block name, block number, block level and surrounding
block.
4. The block-level entry gives the nesting depth for each block.
5. The outermost block has a level number of 1, and each other block has a level number
that is one greater than that of the surrounding block.
Searching of identifiers in symbol table
• Same name can be declared more than once in a program in different blocks.
• So there can be several symbol-table entries for the same name.
• The entries that represent declarations of the same name by different blocks can be
linked together in the symbol table with a chain of pointers.
• When a reference to an identifier appears in the source program the compiler must first
check the symbol table for a definition of that identifier by the current block.
• If no such definition is found, the compiler looks for a definition by the block that
surrounds the current block, then by the block that surrounds that one, and so on.
• If the outermost block is reached without finding a definition of the identifier, then the
reference is an error.
• The search process just described can easily be implemented within a symbol table that
uses hashed addressing.
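The search just described can be sketched with a per-name chain of entries and a table of surrounding blocks. This is a hypothetical illustration; the class and field names are invented.

```python
class SymbolTable:
    def __init__(self):
        self.entries = {}             # name -> list of (block_no, info)
        self.surrounding = {1: None}  # block_no -> enclosing block_no

    def declare(self, name, block, info):
        # same name declared by different blocks: entries are chained
        self.entries.setdefault(name, []).append((block, info))

    def lookup(self, name, block):
        chain = dict(self.entries.get(name, []))
        while block is not None:
            if block in chain:        # defined by this block?
                return chain[block]
            block = self.surrounding[block]   # try the enclosing block
        raise NameError(f"{name}: undeclared identifier")

st = SymbolTable()
st.surrounding.update({2: 1, 3: 2})   # block 3 nested in 2, 2 in 1
st.declare("X", 1, "integer")
st.declare("X", 3, "real")
print(st.lookup("X", 3), st.lookup("X", 2))   # → real integer
```

Block 3 finds its own declaration of X; block 2 has none, so the search moves outward and finds the declaration in block 1. Reaching the outermost block without a match raises the error described above.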
Access to variables in surrounding block
• One common method for providing access to variables in surrounding block uses a data
structure called a display.
• The display contains pointers to the most recent activation records for the current block
and for all blocks that surround the current one in the source program.
• When a block refers to a variable that is declared in some surrounding block, the
generated object code uses the display to find the activation record that contains this
variable.
• Ex: The use of a display is illustrated in the following figure, where the display data
structure is used for the Pascal procedures discussed previously.
1. Assume that procedure A has been invoked by the system, A has then called
procedure B, and B has called procedure C. The resulting situation is shown in
following figure.

• The stack contains activation records for the invocations of A, B, C.


• The display contains pointers to the activation records for C & for the
surrounding blocks (A & B)
2. Let us assume procedure C calls itself recursively.

• Another activation record for C is created on the stack as a result of this call.
• The display pointer for C is changed accordingly.
• Variables that correspond to the previous invocation of C are not accessible
through this record.
3. Suppose now that procedure C calls D. The resulting stack & display are shown
below

• An activation record for D has been created in the usual way and added to the
stack.
• Note, however, that the display now contains only two pointers : one each to
the activation records for D & A.
• This is because procedure D cannot refer to variables in B (or) C.
• Procedure D can refer only to the variables that are declared by D (or) by
some block that contains D in the source program (in this case, procedure A)
4. In the above figure, procedure D now calls B.
• Procedure B is allowed to refer only to variables declared by either B (or) A.
• The compiler for a block-structured language must include code at the
beginning of a block to initialize the display for that block. At the end of the
block, it must include code to restore the previous display contents.
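The display mechanism can be sketched in C. This is an illustrative simulation (the array size, record layout and function names are assumptions, not actual compiler output): each static nesting level owns one display slot, and a reference to a variable declared at level j simply indexes display[j].

```c
#include <assert.h>
#include <stddef.h>

#define MAX_LEVELS 3

/* One activation record; a single integer variable keeps the sketch small. */
typedef struct { int var; } ActRec;

/* display[k] points to the most recent activation record at nesting level k. */
static ActRec *display[MAX_LEVELS];

/* Code generated for block entry: save the old display entry, install the new record. */
static void enter_block(int lev, ActRec *rec, ActRec **saved) {
    *saved = display[lev];
    display[lev] = rec;
}

/* Code generated for block exit: restore the previous display contents. */
static void leave_block(int lev, ActRec *saved) {
    display[lev] = saved;
}

/* A reference to a variable declared at level declLev indexes the display
   directly, no matter how deep the dynamic call chain is. */
static int load_var(int declLev) {
    return display[declLev]->var;
}
```

Note how entry and exit code bracket each block, exactly as the bullet above requires.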
Machine independent code optimization
• Code optimization involves complex analysis & various transformations of the
intermediate code without changing the logic of the program.
• Machine independent optimization is performed independent of the target machine. The
code optimization techniques are:
1. Loop optimization
2. Elimination of induction variables
3. Reduction in strength
4. Elimination of local common sub expression
5. Loop unrolling
6. Loop jamming
Loop optimization
• It is an important machine independent optimization.
• It involves elimination of loop invariant computation
Elimination of loop invariant computation
• A loop invariant computation computes the same value every time a loop gets executed.
• Therefore moving such a computation outside the loop leads the reduction in the
execution time.
Eg: for(i=1;i<=10;i++)
{
m=1;
.
.
}
And can be replaced by
m=1;
for(i=1;i<=10;i++)
{
.
.
}
Elimination of induction variables
Eg: Intermediate Code Representation
1 PROD:=0
2 I:=1
3 T1:=4*I
.
.
10 I:=I+1
11 if I<=20 goto (3)
• The purpose of I is to count from 1 to 20.
• As T1:=4*I and “I” takes the values 1…20, T1 takes the values 4…80 (“T1” progresses
as “I” progresses).
• T1 and I form arithmetic progressions; such identifiers are called induction variables.
• Induction variables should be eliminated as far as possible.
• So “I” may be represented in terms of “T1”.
• Eg:
T1:=4*I can be written as
T1:=T1+4
And “if I<=20 goto 3” can be represented as
“if T1<=76 goto 3”
Reduction in strength
• Replacement of an expensive operation by a cheaper one is reduction in strength.
• Eg: Multiplication step T1:=4*I is replaced by T1=T1+4.
• This will speed up the object code as addition takes less time than multiplication.
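The equivalence can be checked directly. A minimal C sketch (illustrative; the loop bound 20 comes from the example above) compares the multiplication form with the strength-reduced additive form:

```c
#include <assert.h>

/* Multiplication form: T1 is recomputed as 4*I on every iteration. */
static int last_t1_mul(void) {
    int t1 = 0;
    for (int i = 1; i <= 20; i++)
        t1 = 4 * i;
    return t1;
}

/* Strength-reduced form: the multiplication is replaced by an addition. */
static int last_t1_add(void) {
    int t1 = 0;
    for (int i = 1; i <= 20; i++)
        t1 = t1 + 4;
    return t1;
}
```

Both forms leave T1 = 80 after the final iteration, so the cheaper addition can replace the multiplication.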
Elimination of local common sub expression
• The common sub expression in a program can be automatically detected if we construct a
DAG (Directed Acyclic Graph)
• If an interior node in the DAG has more than one label, it represents a common sub
expression, which can be detected and eliminated.
• Ex: Let us now see how to construct a DAG .
• Consider the following intermediate code representation:
1. S1:=4*I
2. S2:=I/J
3. S3:=S1+S2
4. S4:=4*I
5. S5:=S4+B
• The DAG for the above quadruples is (interior nodes listed with their labels):
n1 (*): operands 4, I; labels S1, S4
n2 (/): operands I, J; label S2
n3 (+): operands n1, n2; label S3
n4 (+): operands n1, B; label S5
• S1 & S4 are common sub expressions: they label the same DAG node. Quadruple 4 can
be eliminated and uses of S4 replaced by S1:
1. S1:=4*I
2. S2:=I/J
3. S3:=S1+S2
4. S5:=S1+B
• In the reduced DAG the * node carries the single label S1, and the + node for S5 refers
to it directly.
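Detection of such duplicates can be sketched as a table of previously seen (operation, operand, operand) triples. This is a hypothetical helper, not the full DAG construction:

```c
#include <assert.h>
#include <string.h>

#define MAX_EXPRS 16

typedef struct {
    char op;                 /* '*', '/', '+', ... */
    char op1[8], op2[8];     /* operand names */
} Expr;

static Expr seen[MAX_EXPRS];
static int nseen = 0;

/* Returns the index of an earlier identical expression (the common
   subexpression), or -1 after recording the expression as new. */
static int lookup_or_add(char op, const char *op1, const char *op2) {
    for (int i = 0; i < nseen; i++)
        if (seen[i].op == op && strcmp(seen[i].op1, op1) == 0 &&
            strcmp(seen[i].op2, op2) == 0)
            return i;
    seen[nseen].op = op;
    strcpy(seen[nseen].op1, op1);
    strcpy(seen[nseen].op2, op2);
    nseen++;
    return -1;
}
```

Feeding it the quadruples above flags the second 4*I as a duplicate of the first.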
Loop unrolling
• This deals with reducing the number of loop tests carried out when the number of
iterations is constant.
i=1;
while (i<=100)
{
x[i]=0;
i++;
}
• The test “i<=100” is performed 100 times.
• This sequence can be replaced by the following set of statements
i=1;
while (i<=100)
{
x[i]=0;
i++;
x[i]=0;
i++;
}
• Replicating the body reduces the number of condition checks by up to 50%.
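The transformation in compilable C (array size 100 as in the example; what matters is that both versions produce identical results):

```c
#include <assert.h>
#include <string.h>

#define N 100

/* Original loop: the test i <= N is evaluated once per element. */
static void zero_plain(int x[N + 1]) {
    int i = 1;
    while (i <= N) {
        x[i] = 0;
        i++;
    }
}

/* Unrolled by a factor of two: the test is evaluated only N/2 times. */
static void zero_unrolled(int x[N + 1]) {
    int i = 1;
    while (i <= N) {
        x[i] = 0;
        i++;
        x[i] = 0;
        i++;
    }
}
```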
Loop jamming
• This is a technique of merging the bodies of two loops if they have the same number of
iterations.
for(i=0;i<=10;i++)
x[i]=0;
for(i=0;i<=10;i++)
y[i]=1;
• The bodies of two ‘for’ loops having the variable i over the same range can be
concatenated.
Result will be
for(i=0;i<=10;i++)
{
x[i]=0;
y[i]=1;
}
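The same transformation in compilable C (function names are illustrative); the jammed version executes one set of loop-control instructions instead of two:

```c
#include <assert.h>

#define LEN 11

/* Separate loops over the same iteration range. */
static void fill_separate(int x[LEN], int y[LEN]) {
    for (int i = 0; i <= 10; i++)
        x[i] = 0;
    for (int i = 0; i <= 10; i++)
        y[i] = 1;
}

/* Jammed version: one loop body does both assignments. */
static void fill_jammed(int x[LEN], int y[LEN]) {
    for (int i = 0; i <= 10; i++) {
        x[i] = 0;
        y[i] = 1;
    }
}
```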
Advantages gained by code optimization
1. Codes can be made to run faster.
2. Codes may be made to take less space.
3. Execution efficiency of the object code is achieved.
MACHINE-DEPENDENT COMPILER FEATURES
Intermediate form of the program
• Intermediate code is a stream of simple instruction.
• It is similar to the assembly language instruction except that the register need not be
specified
• Examples of intermediate code are three-address code, quadruples, triples, etc.
Quadruples
• Quadruples is of the form:
Operation, op1, op2, result
• Where operation is some function to be performed by the object code.
• Op1 & op2 are the operands for this operation and result designates the resulting value is
to be placed.
• Example1: The source program statement
SUM:=SUM+VALUE
could be represented with quadruples
+, SUM, VALUE, i
:=, i, , SUM
• Example2: The statement
VARIANCE:=SUMS DIV 100 – MEAN * MEAN
could be represented with quadruples
DIV, SUMS, #100, i1
*, MEAN, MEAN, i2
-, i1, i2, i3
:=, i3, , VARIANCE
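A C sketch of how such quadruples might be represented and interpreted. The struct layout and the sample values SUMS = 2000 and MEAN = 4 are assumptions for illustration only:

```c
#include <assert.h>

/* A quadruple: operation, two operands, result. Operands index a small
   "symbol table" of integer values; -1 marks an unused operand. */
typedef struct { char op; int op1, op2, result; } Quad;

enum { SUMS, MEAN, VARIANCE, I1, I2, I3, C100, NSYMS };

/* Execute the quadruples in order, exactly as object code would. */
static int eval(const Quad *q, int n, int *syms) {
    for (int i = 0; i < n; i++) {
        switch (q[i].op) {
        case '/': syms[q[i].result] = syms[q[i].op1] / syms[q[i].op2]; break;
        case '*': syms[q[i].result] = syms[q[i].op1] * syms[q[i].op2]; break;
        case '-': syms[q[i].result] = syms[q[i].op1] - syms[q[i].op2]; break;
        case '=': syms[q[i].result] = syms[q[i].op1];                  break;
        }
    }
    return syms[VARIANCE];
}

/* The quadruples for VARIANCE := SUMS DIV 100 - MEAN * MEAN */
static const Quad variance_quads[] = {
    {'/', SUMS, C100, I1},
    {'*', MEAN, MEAN, I2},
    {'-', I1,   I2,   I3},
    {'=', I3,   -1,   VARIANCE},
};
```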
Code optimization on quadruples
• Many types of analysis and manipulation can be performed on the quadruples for code-
optimization purpose.
• The quadruples can be rearranged to eliminate redundant load and store operations
• And the intermediate results ij can be assigned to registers or to temporary variables to
make their use as efficient as possible.
• After optimization has been performed the modified quadruples are translated into
machine code.
Advantage
• The quadruples appear in the order which the corresponding object code instruction is to
be executed.
• This greatly simplifies the task of analyzing the code for purposes of optimization.
• It also means that the translation into machine instructions will be relatively easy.
Machine dependent code optimization
• Different possibilities for performing machine-dependent code optimization:
1. Machine instructions that use registers as operands are usually faster than the
corresponding instructions that refer to locations in memory.
Therefore, we would prefer to keep in registers all variables and intermediate results
that will be used later in the program.
Each time a value is fetched from memory, or calculated as an intermediate result, it
can be assigned to some register.
The value will then be available for later use without requiring a memory reference.
This approach also avoids unnecessary movement of values between memory and
registers, which takes time.
2. We can replace the register value when it is necessary to assign a register for some
other purpose.
Such register assignments can also be used to eliminate the need of temporary
variable.
3. In making and using register assignments, a compiler must also consider the control
flow of the program.
The existence of jump instructions creates difficulty in keeping track of register contents.
Solution

1. One way to deal with this problem is to divide the program into basic blocks.
A basic block is a sequence of quadruples with one entry point, which is at the
beginning of the block, one exit point, which is at the end of the block, and no jumps
within the block.
When control passes from one basic block to another, all values currently held in
registers are saved in temporary variables.
An arrow from block x to block y indicates that control can pass directly from the last
quadruple of x to the first quadruple of y. This kind of representation is called a flow
graph.

2. Another possibility involves rearranging quadruples before machine code is


generated.
3.

The value of the intermediate result i1 is calculated first and stored in temporary
variable t1.
Then the value of i2 is calculated.
The third quadruple in this series calls for subtracting the value of i2 from i1.
Since i2 has just been computed, its value is available in register A.
It is necessary to store the value of i2 in another temporary variable t2, and then load
the value of i1 from t1 into register A before performing the subtraction.
With a little analysis, an optimizing compiler could recognize this situation and
rearrange the quadruples so the second operand of the subtraction is computed first.
The first two quadruples in the sequence have been interchanged.
The resulting machine code requires two fewer instructions and uses only one
temporary variable instead of two.
4. Other possibilities involve taking advantage of specific characteristics and
instructions of the target machine.
For example, there may be special addressing modes that can be used to create more
efficient object code.
On some computers there are high-level machine instructions that can perform
complicated functions such as calling procedures and manipulating data structures in
a single operation.
OPERATING SYSTEM
UNIT III
Introduction
Operating System - It is software that controls hardware. It comprises the system software,
or the fundamental files a computer needs to boot up and function.
An Operating System (OS) is an interface between computer user and computer hardware. An
operating system is software which performs all the basic tasks like file management, memory
management, process management, handling input and output, and controlling peripheral devices.
Functions of Operating System
 Implementing user interface
 Sharing hardware among users
 Allowing users to share data among themselves
 Preventing users from taking other users’ data
 Scheduling resources among users
 Facilitating input and output
 Recovering from errors
 Accounting for resource usage
 Facilitating parallel operations
 Handling network communications
Basically, Operating Systems are classified as:
 Text oriented Operating System
o IBM PC DOS
o Microsoft MS DOS
o UNIX
 Graphical oriented Operating System
o Windows
o LINUX
o MAC OS
Definition of Process
 The word process was first used by the designers of the Multics system in the 1960’s
 Since that time process, used somewhat interchangeably with task, has been given many
definitions
o a program in execution
o an asynchronous activity (not continuous)
o the animated spirit of a procedure
o the locus of control of a procedure in execution
o the dispatchable unit
Process States
 A process goes through a series of discrete process states
 Various events can cause a process to change states
 A process is said to be
o RUNNING - if it currently has the CPU
o READY - if it could use a CPU if one were available
o BLOCKED - if it is waiting for some events to happen (such as input/output
completion event) before it can proceed
 Only one process may be running at a time, but several processes may be ready, and several
may be blocked.
Process State Transitions
 Dispatch (process name) Ready -> Running
 Timerrunout (process name) Running -> Ready
 Block (process name) Running -> Blocked
 Wakeup (process name) Blocked -> Ready
1. Ready - a job is created and inserted at the back of the ready list.
2. Running - after a process is assigned the CPU, it is said to make a state transition from the
ready state to the running state.
 Dispatch - the assignment of the CPU to the first process on the ready list is called
dispatching, and it is performed by a system entity called the dispatcher.
 Timerrunout - if the process does not release the CPU before the time interval expires, the
interrupting clock generates an interrupt, causing the operating system to regain control.
 We indicate this transition as follows:
o Dispatch (process name) Ready -> Running
o Timerrunout (process name) Running -> Ready and
o Dispatch (process name) Ready -> Running.
 Then, the next process in the ready list comes to the CPU, this process goes to the end of the
ready list
3. Blocked - if a running process initiates an input/output operation, it is moved to the blocked
list until the input/output operation completes.
 Wake up - after completion of an input/output operation the process makes the transition from
blocked to ready state.
 Block (process name) Running -> Blocked
 This transition is initiated by the running process itself.
 Wakeup (process name) Blocked -> Ready
 Except for the block transition, the other three transitions are initiated by entities external to the
process.
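The four transitions above can be sketched as a small C state machine (illustrative; here an invalid transition simply leaves the state unchanged):

```c
#include <assert.h>

typedef enum { READY, RUNNING, BLOCKED } State;

/* Each transition is legal only from one specific state. */
static State dispatch(State s)    { return s == READY   ? RUNNING : s; }
static State timerrunout(State s) { return s == RUNNING ? READY   : s; }
static State block(State s)       { return s == RUNNING ? BLOCKED : s; }
static State wakeup(State s)      { return s == BLOCKED ? READY   : s; }
```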
Interrupt Processing
1. An interrupt is an event that alters the sequence in which a processor executes instructions.
2. It is generated by the hardware of the computer system.
3. when an interrupt occurs:
 The hardware passes control to the OPERATING SYSTEM.
 The OPERATING SYSTEM saves the state of the interrupted process.
 Then it services that interrupt through an interrupt handler routine.
 The state of the interrupted process is restored.
 Then the interrupted process or some other process is executed.
4. An interrupt initiated by a running process is known as a trap and is said to be Synchronous with
the operation of the process.
5. An interrupt caused by some other event, which may be unrelated to the running process, is said
to be Asynchronous with the operation of the process.
Interrupt Classes
There are six interrupt classes:
1. Supervisor call Interrupts (SVC)
 These are initiated by a running process that executes the SVC instruction
 An SVC is a user-generated request to perform input/output, obtain more storage, or
communicate with the system operator.
 It helps keep the operating system secure from user processes.
2. Input / Output Interrupts
 These are initiated by the input/output hardware.
 (e.g.) printer out of paper, input/output is completed, printer is not ready to print the result.
3. External Interrupts
These are caused by
 Timer runout indication from the interrupting clock.
 Pressing of the interrupt key by the operator
 Acknowledgment signal from another processor in multi-processor system.
4. Restart Interrupt
 The operator presses the restart button or a restart SIGP (signal processor) occurs from
another processor on a multi-processor system.
5. Program Check Interrupts
These are caused by a wide range of problems that may occur as a program’s machine language
instructions are executed.
 Division by zero
 Arithmetic overflow or underflow
 Data is in the wrong format
 To execute an invalid operation code
 Memory overflow
 Try to access a restricted resource
6. Machine Check Interrupts
 These are caused by malfunctioning hardware.
Storage Management - Real Storage
 Storage Organization – the manner in which main storage is viewed.
 Partition – dividing main storage into portions, each called a partition.
 Storage Management – determines how a particular storage organization performs under
various policies.
Real Storage Management Strategies
These are used to obtain the best possible use of the main storage resource. There are three types:
1. Fetch Strategies
 To read and transfer data from secondary storage into main storage.
o Demand Fetch Strategies - data is fetched only when the running program requests it.
o Anticipatory Fetch Strategies - the next instruction or data is fetched in advance to
improve system performance.
2. Placement Strategies
 Locating the incoming program in main memory based on various allocation techniques such
as first fit, best fit, and worst fit.
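The three placement techniques can be sketched in C over a list of hole sizes (hypothetical helper functions, not an actual allocator):

```c
#include <assert.h>

/* Given the sizes of the free holes, return the index chosen by each
   policy for a request of `need` units, or -1 if no hole fits. */
static int first_fit(const int holes[], int n, int need) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= need) return i;     /* first hole large enough */
    return -1;
}

static int best_fit(const int holes[], int n, int need) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= need && (best < 0 || holes[i] < holes[best]))
            best = i;                        /* tightest fit */
    return best;
}

static int worst_fit(const int holes[], int n, int need) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= need && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                       /* largest hole */
    return worst;
}
```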
3. Replacement Strategies
 Decides which program or data is to be displaced to make room for the incoming program.
Contiguous versus Non Contiguous Storage Allocation
1. Contiguous
 Each program occupies a single contiguous block of storage locations.
 Sharing is not possible.
 Very fast access time.
2. Non Contiguous
 A process is divided into several blocks or segments that may be placed in main memory at
non-adjacent locations.
 It is more difficult for an operating system to control
 It is useful when operating system has many small holes instead of a single large hole.
Single User Contiguous Storage Allocation
 In this scheme, the physical memory is divided into two contiguous areas.
 One of them is permanently allocated to the resident portion of the operating system, and the
other is the user process area.
 The operating system can be loaded at the lower addresses (or) at the higher addresses,
depending on where the interrupt service routines are placed by the hardware design.
The Working Principle is,
 All the ready processes are held on the disk in order of priority.
 At any time only one process runs in the main memory.
 When this process is blocked, it is swapped out from the main memory to disk.
 The next highest priority process is swapped in the main memory and starts its execution.
 The problems of relocation and address translation reduce to knowing the starting physical
address of the program.
Overlay Structure
 Programs are limited in size to the amount of main storage, but it is possible to run programs
larger than main storage by using overlays.
 Manual overlay requires careful and time-consuming planning.
 A program with a sophisticated overlay structure can be difficult to modify.
Protection in Single User System
 Protection is implemented by the use of a single boundary register built into the CPU.
 The boundary register contains the highest address used by the operating system.
 Each storage address generated by the user program is checked against the boundary register,
so that the operating system cannot be destroyed.
 If the user tries to enter the operating system, the system terminates the job and produces an
error message.
 Sharing is not possible.
Single Stream Batch Processing
 Setup Time - the time required to prepare a job before it runs, while the operating system is
loaded and tapes and disk packs are mounted.
 Teardown Time - when the job completes, its tapes and disk packs are removed, forms are
taken away, and time cards are punched out.
 Stream Batch Processing - jobs are grouped in batches by loading consecutive jobs onto tape
or disk.
 Housekeeping Chores - functions performed automatically that were previously performed
manually.
 Job Stream Processor - reads the job control language statements and facilitates the setup of
the next job and the housekeeping chores.
Fixed Partition Multi Programming
 Even with a batch processing OS, single user systems still waste a considerable amount of the
computing resource.
 When an Input / Output request is issued, the job cannot continue until the requested data is
either sent or received.
 Input / Output speeds are slow compared with CPU speed.
 This wastage of CPU utilization is overcome by multiprogramming.
 Multiprogramming requires more storage than a single user system.
 In multiprogramming, several users simultaneously access the system resources.
 This increases the CPU utilization and system Throughput.
 Throughput - no. of processes completed per unit time.
 Main storage was divided into a number of fixed size partitions.
 Each partition holds a single job; the partitions are allocated to several jobs. If one job waits
for Input / Output operations, another job can use the CPU.
Translation and Loading
 Jobs were translated with absolute assembler and compilers to run only in a specific partition.
 If a job was ready to run and its partition was occupied, then the job had to wait, even if other
partitions were available.
 This resulted in waste of the storage resource.
 For example, jobs waiting for partition 3 would fit into the other partitions.
 But with absolute translation and loading, these jobs can run only in partition 3.
 So the other two partitions remain empty.
Relocatable Translation and Loading
 Relocating compilers, assemblers and loaders are used to produce relocatable programs that
can run in any available partition that is large enough to hold them.
Protection
 Protection is implemented with several boundary registers: two registers indicate the low &
high boundaries of a user partition.
 While the user in partition 2 is running, all storage addresses developed by the running
program are checked to be sure they fall between b & c.
 A supervisor call instruction allows the user to cross the boundary of the Operating System
and request its services.
Fragmentation
It may occur in 2 ways
 User jobs do not completely fill their designated partitions.
 A partition remains unused, if it is too small to hold a waiting job.
Variable Partition Multi Programming
 There are no fixed boundaries; jobs are given as much main storage as they require.
 There is no wastage of memory inside the partition because each partition is allocated
according to the size.
Initial Partition Allocation
Storage Holes
 When a job completes and releases its storage partition, the freed area is called a hole in the
main storage area.
 These holes can be used for other jobs; assigning a job to a hole may still waste a small
amount of memory when the job size is smaller than the hole size.
Coalescing Holes
 The process of merging adjacent holes to form a single larger hole is called coalescing holes.
 So that we can reclaim the largest possible contiguous blocks of storage.
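A C sketch of coalescing over a list of holes sorted by start address (the struct and names are illustrative):

```c
#include <assert.h>

typedef struct { int start, size; } Hole;

/* Merge adjacent holes in a list sorted by start address.
   Returns the new number of holes. */
static int coalesce(Hole h[], int n) {
    int out = 0;
    for (int i = 0; i < n; i++) {
        if (out > 0 && h[out - 1].start + h[out - 1].size == h[i].start)
            h[out - 1].size += h[i].size;   /* adjacent: grow previous hole */
        else
            h[out++] = h[i];                /* gap: keep as a separate hole */
    }
    return out;
}
```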
Storage Compaction
 Moving all occupied areas of storage to one end or the other of main storage.
 This leaves the single largest storage hole instead of the numerous small holes.
 This is also referred to as burping the storage or garbage collection.
Drawbacks
 Consume system resources.
 System must stop while it performs compaction which results in erratic response time.
 Relocating jobs often requires changing the addresses used by those jobs.
UNIT IV
Virtual Storage Management
 Physical memory is limited, so there is a problem executing an entire process in main
memory.
 This is overcome by virtual memory system.
 In this scheme we keep only a part of the process image in memory and the other part on
disk, and can still execute the process.
Virtual Storage Management Strategies
1. Fetch Strategies
 To read pages or segments from secondary storage into main storage.
o Demand Fetch - a page is brought in only when the running program references it.
o Anticipatory Fetch - the next page or segment is brought in ahead of need to improve
system performance.
2. Placement Strategies
 Locating the incoming page in main memory based on various allocation techniques
such as First fit, Best fit and Worst fit.
3. Replacement Strategies
 Decides which page or segment to displace to make room for the incoming page.
Page Replacement Strategies
 If the page to be replaced must be chosen from the faulting process itself, it is called a ‘Local
Replacement Policy’.
 If it can be chosen from any process, it is called a ‘Global Replacement Policy’.
The Page Replacement Strategies are
 The Principle of Optimality (OPT)
 First In First Out (FIFO)
 Second Chance (SC)
 Least Recently Used (LRU)
 LRU Approximation
o Not Used Recently (NUR)
o Least Frequently Used (LFU)
 Random Page Replacement (RAND)
 Clock
 Working Set
 Page Fault Frequency Page Replacement (PFF)
1. The Principle of Optimality (OPT)
 OPT removes the page that will not be used for the longest time, i.e., whose next use lies
furthest in the future.
 Let us assume that there are only three page frames (0, 1 and 2) and that the reference
string is:
Frame / Page   8    1    2    3    1    4    1    5
Page Frame 0   8    8    8    3    3    3    3    3
Page Frame 1        1    1    1    1    1    1    5
Page Frame 2             2    2    2    4    4    4
Hit or Miss    Miss Miss Miss Miss Hit  Miss Hit  Miss

 This table shows the state of the three page frames after each page reference.
 Each column shows the pages the frames contain after that reference.
 The fourth page reference is for page 3.
 This page is not in memory, and no page frame is free.
 So the operating system has to choose which frame (0, 1 or 2) is to have its page removed.
 According to the reference string, pages 8 & 2 are not used again, while page 1 will be used
again.
 So the operating system replaces page 8 with page 3.
 OPT is also known as Belady’s optimal algorithm for page replacement.
2. First In First Out (FIFO)
 The page that is removed from the memory is the one that entered it first.
Frame / Page   8    1    2    3    1    4    1    5
Page Frame 0   8    8    8    3    3    3    3    5
Page Frame 1        1    1    1    1    4    4    4
Page Frame 2             2    2    2    2    1    1
Hit or Miss    Miss Miss Miss Miss Hit  Miss Miss Miss
 Consider the page reference string: the fourth reference (page 3) replaces page 8 because
page 8 entered first; the next reference (page 1) does not cause a page fault.
 The sixth page reference (page 4) replaces page 1 because page 1 entered earliest.
 This algorithm can be implemented using ‘FIFO’ with a pointer chain, where the header is the
page that came in first, and the end of the chain is the page that comes in last.
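A C sketch of FIFO replacement with three frames; running it on the reference string above reproduces the table’s 7 misses and 1 hit:

```c
#include <assert.h>

#define FRAMES 3

/* Simulate FIFO page replacement; returns the number of page faults. */
static int fifo_faults(const int refs[], int n) {
    int frames[FRAMES], used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < FRAMES) {
            frames[used++] = refs[i];       /* a free frame is available */
        } else {
            frames[next] = refs[i];         /* evict the oldest page */
            next = (next + 1) % FRAMES;
        }
    }
    return faults;
}
```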
3. Second Chance (SC)
 It tries to replace pages which have not been referenced recently.
 The system maintains a set of reference bits, one for each page frame.
 Each reference bit is initially 0, and is set to 1 as soon as the corresponding page frame is
referenced.
 Reference bit = 0 means that the page has not been referenced and it can be replaced.
 Reference bit = 1 means the page gets a second chance: its reference bit is reset to 0 and the
page is treated as a new arrival in the queue.
 This algorithm gives a page one more chance in the FIFO queue before replacement.
Frame / Page   8    1    2    3    1    4    1    5
Page Frame 0   8    8    8    3    3    3    3    5
Page Frame 1        1    1    1    1    1    1    1
Page Frame 2             2    2    2    4    4    4
Hit or Miss    Miss Miss Miss Miss Hit  Miss Hit  Miss
FIFO Queue (Page number with reference bit value):
8(1) -> 8(1), 1(1) -> 8(1), 1(1), 2(1) -> 1(1), 2(1), 8(0) -> 2(1), 8(0), 1(0) -> 8(0), 1(0), 2(0) ->
1(0), 2(0), 3(1) -> 1(1), 2(0), 3(1) -> 2(0), 3(1), 1(0) -> 3(1), 1(0), 4(1) -> 3(1), 1(1), 4(1) ->
1(1), 4(1), 3(0) -> 4(1), 3(0), 1(0) -> 3(0), 1(0), 4(0) -> 1(0), 4(0), 5(1)
4. Least Recently Used (LRU)
 It comes close to the optimal algorithm: a page heavily used in the last few instructions will
probably be required again soon.
 When a page fault occurs, LRU throws out the page that has been unused for the longest
time; hence the name “Least Recently Used”.
Frame / Page   8    1    2    3    1    4    1    5
Page Frame 0   8    8    8    3    3    3    3    5
Page Frame 1        1    1    1    1    1    1    1
Page Frame 2             2    2    2    4    4    4
Hit or Miss    Miss Miss Miss Miss Hit  Miss Hit  Miss
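A C sketch of LRU with three frames, using a last-use timestamp per frame; on the reference string above it reproduces the table’s 6 misses and 2 hits:

```c
#include <assert.h>

#define NFRAMES 3

/* Simulate LRU page replacement; returns the number of page faults.
   stamp[j] records when frame j was last referenced. */
static int lru_faults(const int refs[], int n) {
    int frames[NFRAMES], stamp[NFRAMES], used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int j, hit = -1;
        for (j = 0; j < used; j++)
            if (frames[j] == refs[t]) { hit = j; break; }
        if (hit >= 0) { stamp[hit] = t; continue; }   /* hit: refresh stamp */
        faults++;
        if (used < NFRAMES) {
            j = used++;                               /* free frame */
        } else {
            j = 0;                                    /* evict least recent */
            for (int k = 1; k < NFRAMES; k++)
                if (stamp[k] < stamp[j]) j = k;
        }
        frames[j] = refs[t];
        stamp[j] = t;
    }
    return faults;
}
```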
5. LRU Approximation
 Each page is associated with a bit called the Reference bit, initially 0.
 When the page is referenced, the Reference bit is set to 1.
 Replace the one which is 0 (if one exists). We do not know the order.
 There are two methods in it
o Not Used Recently (NUR)
o Least Frequently Used (LFU)
6. Not Used Recently (NUR)
 This considers whether the page has been modified, which Second Chance does not.
 Two bits are used
o Reference bit (R)
o Modified bit (M)
 These two bits had four possible combination called “Classes”
Classes   Reference bit   Modified bit   Description
0         0               0              Not Referenced & Not Modified
1         0               1              Not Referenced but Modified
2         1               0              Referenced but Not Modified
3         1               1              Both Referenced & Modified

 For each page frame, 2 bits are maintained. When a page is referenced, the hardware sets the
R-bit for that page frame to 1; when it is modified, the M-bit is set to 1.
 At periodic clock intervals, the R-bits of all pages are reset to 0, to differentiate the latest
references from earlier ones.
 This algorithm removes a page at random from the lowest numbered non empty class.
7. Least Frequently Used (LFU)
 It requires a Counter (CTR) associated with each page frame.
 CTRs are initially set to 0 for all the page frames allocated to the process.
 When a page fault occurs, the page replaced is the one whose frame has the lowest CTR
value.
Frame / Page   8    1    2    3    1    4    1    5
Page Frame 0   8    8    8    3    3    3    3    5
Page Frame 1        1    1    1    1    1    1    1
Page Frame 2             2    2    2    4    4    4
Hit or Miss    Miss Miss Miss Miss Hit  Miss Hit  Miss
Page number with Counter value:
8(1) -> 8(1), 1(1) -> 8(1), 1(1), 2(1) -> 1(1), 2(1), 3(1) -> 2(1), 3(1), 1(2) -> 3(1), 1(2), 4(1) ->
3(1), 4(1), 1(3) -> 4(1), 1(3), 5(1)
8. Random Page Replacement (RAND)
 Choose any page to replace at random.
 Assumes the next page to be referenced is random.
Frame / Page   8    1    2    3    1    4    1    5
Page Frame 0   8    8    8    8    1    1    1    1
Page Frame 1        1    1    3    3    4    4    4
Page Frame 2             2    2    2    2    2    5
Hit or Miss    Miss Miss Miss Miss Miss Miss Hit  Miss
9. Clock
 Maintain a circular list of pages resident in memory.
 Use a clock (or used / referenced) bit to track how often a page is accessed.
 The bit is set whenever a page is referenced.
 Clock hand sweeps over pages looking for one with used bit = 0.
 If found then replaces the current page.
 Else reset the used bit and advance the clock pointer.
10. Working Set Model
 Thrashing
o A process is thrashing if it spends more time paging in / out, due to frequent page
faults, than executing.
o If a page fault occurs every few instructions, the system is said to be thrashing.
 Working Sets
o The working set is the collection of pages the process is actively referencing; the working
set of pages must be maintained in primary storage.
 Working set storage management policy
o To maintain the working sets of active programs in primary storage.
o The decision to add the new process to the active set of processes is based on whether
the sufficient space is available in primary storage.
 Drawback
o It is not possible to know in advance how large a given process’s working set will be.
 The value t corresponds to the current process time. The value w is the process’s working set
window size.
 The process’s working set of pages W (t, w) is defined as the set of pages referenced by the
process during time interval t - w to t.
 Mathematical definition of working set
o The real working set of a process is the set of pages that must be in primary storage for
the process to execute efficiently.
 Primary Storage Allocation under the Working Set Storage Management Policy
o Working sets are transient.
o A process’s next working set may differ from its current working set.
11. Page Fault Frequency Page Replacement (PFF)
 Explicitly attempts to minimize the page faults
 When Page Fault Frequency is high (i.e., time between page faults is small) then increase
working set.
o Current page fault time - Last page fault time <= working set size, then add the
faulting page to working set.
 When Page Fault Frequency is low (i.e., time between page faults is large) then decrease
working set.
o If current page fault time - last page fault time > working set size, remove all pages not
referenced in the interval (last page fault time, current page fault time).
Demand Paging
A process’s pages should be loaded on demand (i.e.) no page should be brought from secondary
to primary storage until it is explicitly referenced by the running process.
 Reason for demand paging
o Computability results.
o Halting problem.
 Demand paging guarantees that the only pages brought to main storage are those actually
needed by processes.
 Space Time Product - reflects the amount of storage a process uses and how long it is used.
 Problems in Demanding
o The process must wait while the new page is transferred from secondary to primary
storage.
o It increases the cost.
Page Size
 Smaller page sizes cause larger page tables; the waste of storage due to excessively large
tables is called table fragmentation.
 INPUT/OUTPUT transfers are more efficient with large pages.
 With larger page sizes, data that may never be referenced is paged into primary storage.
 So to minimize the number of INPUT/OUTPUT transfers, we need large page sizes.
 But the smaller page size leads to less internal fragmentation.
Paging
 When virtual storage is divided into blocks of the same fixed size, the scheme is known as
Paging.
 When a process is running, its current page must be present in primary storage.
 The block of primary storage into which a page is transferred from secondary storage is
called a Page Frame.
 A running process references a virtual storage address as V (p, d), where p indicates the page
number and d indicates the displacement value.
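Splitting a virtual address into V (p, d) is just integer division and remainder by the page size; a C sketch (the 1024-byte page size is an assumption for illustration):

```c
#include <assert.h>

#define PAGE_SIZE 1024   /* assumed page size for this sketch */

/* Split a virtual address into (page number, displacement). */
static unsigned page_of(unsigned v) { return v / PAGE_SIZE; }
static unsigned disp_of(unsigned v) { return v % PAGE_SIZE; }

/* The physical address is the page frame's base plus the displacement. */
static unsigned physical(unsigned frame_base, unsigned v) {
    return frame_base + disp_of(v);
}
```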

Segmentation
 Segmentation and Paging schemes share a lot of common principles of operation, except that
pages are physical divisions of fixed size in memory, whereas segments are logical (or)
virtual divisions of a program, of variable size.
 Each program is divided into segments. Each program has the following major segments:
o Code
o Data
o Stack
 Each of these can be further divided into segments.
 Each segment is compiled with respect to 0 as the starting address for the segment.
 The user program is compiled and the compiler automatically constructs the segments
referenced by the input program.
 A running process references a virtual storage address as V = (s, d), where s indicates the
segment number and d indicates the displacement within the segment.
Comparison between Segmentation and Paging
1. In segmentation, the given memory is divided into a number of segments of different sizes;
in paging, it is divided into a number of pages of equal size.
2. In segmentation, only the needed segments are loaded into memory; in paging, all the pages
of a process are kept in memory.
3. In segmentation, sharing is possible; in simple paging, sharing is not possible.
4. Variable-size segments are difficult to manage in secondary memory; equal-size pages are
not so difficult to manage.
5. The size of a segment can be changed dynamically; the size of a page cannot be changed.
Processor Management
Job and Processor Scheduling
 The problem of determining when processors should be assigned, and to which processes, is
called processor scheduling.
 Scheduling Levels
1. High Level
 It is also called job scheduling or long-term scheduling.
 It determines which jobs shall be allowed to utilize the resources of the system.
 It is also referred to as admission scheduling.
2. Intermediate Level
 Determines which processes shall be allowed to compete for the CPU, responding to short-term
fluctuations by temporarily suspending and activating processes.
 It acts as a buffer between the admission of jobs to the system and the assigning of the CPU to
those jobs.
3. Low Level
 It is also known as the dispatcher; it assigns (dispatches) the CPU to a ready process.
 The dispatcher operates many times per second.
 It often assigns a priority to each process.
Scheduling Objectives
 Be Fair - all processes are treated the same, and no process is subjected to indefinite
postponement.
 Maximize Throughput - service the largest possible number of processes per unit time.
 Maximize Users - have the largest possible number of interactive users receiving acceptable
response times.
 Be Predictable - a given job should run in about the same time and at about the same cost
regardless of system load.
 Minimize Overhead
o This will improve overall system performance.
o Achieve a balance between response and utilization.
o Enforce priorities.
o Avoid indefinite postponement.
o Degrade gracefully under heavy loads.
Preemptive
 Once the CPU has been given to a process, it can be taken away from that process.
 This is useful in systems in which high-priority processes require rapid attention.
 In interactive timesharing systems, preemption helps guarantee acceptable response times.
 Preemption is not without cost, because many processes must be kept in main storage.
Non Preemptive
 Once the CPU has been given to a process, it cannot be taken away from that process.
 All the processes have equal priority.
 Short jobs are made to wait by longer jobs.
 Response times are more predictable.
Priorities
 Priorities may be assigned automatically by system or they may be assigned externally.
 Static
o Static priorities do not change. Such mechanisms are easy to implement and have
relatively low overhead.
o They are not responsive to changes in the environment.
 Dynamic
o Dynamic mechanisms are responsive to change; the initial priority may have only a
short duration, after which the priority is adjusted.
o These are more complex to implement and have greater overhead.
 Earned (or) Bought (or) Purchased Priority
o This is provided when a member of the user community needs special treatment.
o A user with a rush job may be willing to pay a premium, i.e. purchase priority, for a
higher level of service.
o If there were no extra charge, then all users would request the higher level of service.
 Rationally Assigned (or) Arbitrarily Assigned
o Priorities may be assigned rationally, based on meaningful criteria, or arbitrarily,
when the system mechanism merely needs to distinguish between processes.

Comparison between Preemptive and Non Preemptive
1. Preemptive: no continuous processing. Non-preemptive: continuous processing.
2. Preemptive: many jobs in main storage. Non-preemptive: only one job, the currently
running one, in main storage.
3. Preemptive: a process has less chance of completion. Non-preemptive: a process has a high
chance of completion.
4. Preemptive: main storage is wasted storing many jobs. Non-preemptive: main storage is not
wasted, since only the currently running job is stored.
5. Preemptive: consumes more time because many jobs are involved. Non-preemptive: consumes
less time because one job is involved.
Deadline Scheduling
 Jobs are scheduled to be completed by a specific time, or deadline.
 Such jobs may have high value if completed on or before the deadline, and may be worthless
otherwise.
 Deadline scheduling is complex for several reasons:
o The required resource information must be supplied in advance by the user, but it is
rarely available.
o There are unpredictable demands on the system.
o The deadline job should not disturb other users' service.
o Scheduling becomes very complex.
o Operating system designers should take care in handling such complex situations.
Various Scheduling Methods
1. First-Come First-Serve (FCFS)

The simplest scheduling algorithm is First Come First Serve (FCFS). Jobs are scheduled in
the order they are received.

Calculate the turnaround time, waiting time, average turnaround time, average waiting time,
throughput and processor utilization for the given set of processes that arrive at the given arrival times.
Process Arrival Time Processing Time
P1 0 3
P2 2 3
P3 3 1
P4 5 4
P5 8 2
If the processes arrive as per the arrival time,

Time Process Completed Turnaround Time Waiting Time


0 - - -
3 P1 3–0=3 3–3=0
6 P2 6–2=4 4–3=1
7 P3 7–3=4 4–1=3
11 P4 11– 5 = 6 6–4=2
13 P5 13 – 8 = 5 5–2=3

Average turnaround time = (3+4+4+6+5) / 5 = 4.4


Average waiting time = (0+1+3+2+3) / 5 = 1.8
Processor utilization = (13/13)*100 = 100%
Throughput = 5/13 = 0.38
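The FCFS table above can be reproduced with a short Python sketch; the tuple layout and helper name are illustrative:

```python
def fcfs(processes):
    # processes: list of (name, arrival time, processing time)
    t = 0
    stats = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        t = max(t, arrival) + burst                # run each job to completion
        turnaround = t - arrival
        stats[name] = (t, turnaround, turnaround - burst)
    return stats

stats = fcfs([("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 1),
              ("P4", 5, 4), ("P5", 8, 2)])
avg_turnaround = sum(s[1] for s in stats.values()) / len(stats)  # 4.4
avg_waiting = sum(s[2] for s in stats.values()) / len(stats)     # 1.8
```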
2. Shortest-Job First (SJF)

When the CPU is available, this algorithm assigns it to the process that has the smallest next
CPU processing time. In case of a tie, the FCFS scheduling algorithm can be used.

As an example, consider the following set of processes with the following processing time
which arrived at the same time.
Process Processing Time
P1 06
P2 08
P3 07
P4 03

Using SJF scheduling, the process with the shortest processing time gets execution first:

Time Process Completed Turnaround Time Waiting Time


0 - - -
3 P4 3–0=3 3–3=0
9 P1 9–0=9 9–6=3
16 P3 16 – 0 = 16 16 – 7 = 9
24 P2 24 – 0= 24 24 – 8 = 16

Average turnaround time = (3+9+16+24) / 4 = 13


Average waiting time = (0+3+9+16) / 4 = 7
Processor utilization = (24/24)*100 = 100%
Throughput = 4/24 = 0.16
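A minimal sketch of the same calculation, assuming all processes arrive at time 0 as in the example (a tie would fall back to FCFS order because Python's sort is stable):

```python
def sjf(processes):
    # Non-preemptive SJF: sort by processing time, then run each to completion.
    # processes: list of (name, processing time); all arrive at time 0.
    t = 0
    stats = {}
    for name, burst in sorted(processes, key=lambda p: p[1]):
        t += burst
        stats[name] = (t, t, t - burst)  # completion, turnaround, waiting
    return stats

stats = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
```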

3. Round Robin (RR)

Each process is allocated a small time-slice called quantum. No process can run for more
than one quantum while others are waiting in the ready queue. If a process needs more CPU time to
complete after exhausting one quantum, it goes to the end of ready queue to await the next
allocation.

Consider the following set of processes with the processing time given in milliseconds and a
quantum of 4 milliseconds.
Process Processing Time
P1 24
P2 03
P3 03
Process Processing Time Turnaround Time Waiting Time
P1 24 30 – 0 = 30 30 – 24 = 6
P2 03 7–0=7 7–3=4
P3 03 10 – 0 =10 10 – 3 = 7

Average turnaround time = (30+7+10)/3 = 47/3 = 15.66


Average waiting time = (6+4+7)/3 = 17/3 = 5.66
Processor utilization = (30/30) * 100 = 100%
Throughput = 3/30 = 0.1
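The quantum-by-quantum rotation can be sketched with a ready queue; the function name and tuple layout are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    # processes: list of (name, processing time); all arrive at time 0
    # in the given order.
    remaining = dict(processes)
    queue = deque(name for name, _ in processes)
    t = 0
    completion = {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # at most one time-slice
        t += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)               # back to the end of the ready queue
        else:
            completion[name] = t
    return completion

done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
```

With arrival at time 0, turnaround equals completion time, and waiting time is turnaround minus processing time, matching the table above.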

4. Shortest Remaining Time Next (SRTN)

This permits a process that enters the ready list to preempt the running process if the time for
the new process (or for its next burst) is less than the remaining time for the running process (or for
its current burst).

Consider the set of four processes arrived as per timings described in the table:
Process Arrival time Processing time
P1 0 5
P2 1 2
P3 2 5
P4 3 3

Using SRTF scheduling

Process Completed Turnaround Time Waiting Time


P1 10 – 0 = 10 10 – 5 = 5
P2 3–1=2 2–2=0
P3 15- 2 = 13 13 – 5 = 8
P4 6–3=3 3-3=0

The average turnaround time is (10+2+13+3) / 4 = 7


The average waiting time = (5+0+8+0) / 4 = 3.25 milliseconds
Processor utilization = (15/15) * 100 = 100%
Throughput = 4 /15 = 0.26
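A one-time-unit simulation reproduces the SRTN example above; the helper name is illustrative:

```python
def srtn(processes):
    # Preemptive shortest-remaining-time, simulated one time unit at a time.
    # processes: list of (name, arrival time, processing time).
    arrival = {name: arr for name, arr, _ in processes}
    remaining = {name: burst for name, _, burst in processes}
    t = 0
    completion = {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1                          # CPU idle until the next arrival
            continue
        n = min(ready, key=lambda name: remaining[name])
        remaining[n] -= 1                   # run the chosen process one unit
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = t
    return completion

done = srtn([("P1", 0, 5), ("P2", 1, 2), ("P3", 2, 5), ("P4", 3, 3)])
```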
5. Priority Based Scheduling or Event-Driven (ED) Scheduling

Equal priority processes are scheduled FCFS. The level of priority may be determined on the
basis of resource requirements, the process's characteristics and its run-time behaviour.

As an example, consider the following set of five processes, assumed to have arrived at the
same time, with the length of processor time given in milliseconds (a smaller number indicates a higher priority):
Process Processing Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

These processes are scheduled according to the priority scheduling

Time Process Completed Turnaround Time Waiting Time


0 - - -
1 P2 1–0=1 1–1=0
6 P5 6–0=6 6–5=1
16 P1 16 – 0 = 16 16 – 10 = 6
18 P3 18 – 0 = 18 18 – 2 = 16
19 P4 19 – 0 = 19 19 – 1 = 18

Average turnaround time = (1+6+16+18+19) / 5 = 60/5 = 12


Average waiting time = (6+0+16+18+1) / 5 = 8.2
Processor utilization = (19/19) * 100 = 100%
Throughput = 5/19 = 0.26
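Because all processes arrive at time 0, non-preemptive priority scheduling reduces to sorting by priority (a smaller number means a higher priority here). This sketch reproduces the average waiting time of 8.2:

```python
def priority_schedule(processes):
    # Non-preemptive priority scheduling; all processes arrive at time 0.
    # processes: list of (name, processing time, priority), lower = higher.
    t = 0
    stats = {}
    for name, burst, _priority in sorted(processes, key=lambda p: p[2]):
        t += burst
        stats[name] = (t, t, t - burst)  # completion, turnaround, waiting
    return stats

stats = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                           ("P4", 1, 5), ("P5", 5, 2)])
avg_waiting = sum(s[2] for s in stats.values()) / len(stats)  # 8.2
```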

6. Highest Response Ratio Next (HRRN)

Out of all the available processes, the CPU is assigned to the process having the highest response
ratio. In case of a tie, it is broken by FCFS scheduling. HRRN operates only in non-preemptive mode.

Response Ratio = (W + B) / B

where W = waiting time of the process so far and B = burst time or service time of the
process.

Consider the set of 5 processes whose arrival time and burst time are given below
Process Arrival Time Burst Time
P0 0 3
P1 2 6
P2 4 4
P3 6 5
P4 8 2
At t = 0, only the process P0 is available in the ready queue.

At t = 3, only the process P1 is available in the ready queue.

At t = 9, the processes P2, P3 and P4 are available in the ready queue.


 Response Ratio: P2 = [(9-4) + 4] / 4 = 9 / 4 = 2.25, P3 = [(9-6) + 5] / 5 = 8 / 5 = 1.6 and
P4 = [(9-8) + 2] / 2 = 3 / 2 = 1.5

At t = 13, the processes P3 and P4 are available in the ready queue.


 Response Ratio: P3 = [(13-6) + 5] / 5 = 12 / 5 = 2.4 and P4 = [(13-8) + 2] / 2 = 7 / 2 = 3.5

At t = 15, only the process P3 is available in the ready queue.

Process Completed Turnaround Time Waiting Time


P0 3–0 =3 3–3=0
P1 9–2=7 7–6=1
P2 13 – 4 = 9 9–4=5
P4 15 – 8 = 7 7–2=5
P3 20 – 6 = 14 14 – 5 = 9

Average turnaround time = (3+7+9+7+14) / 5 = 40/5 = 8


Average waiting time = (0+1+5+5+9) / 5 = 4
Processor utilization = (20/20) * 100 = 100%
Throughput = 5/20 = .25
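The selection by Response Ratio = (W + B) / B can be sketched as follows; the helper name is illustrative, and a tie falls back to FCFS because max() keeps the first maximum in arrival order:

```python
def hrrn(processes):
    # Non-preemptive HRRN; processes: list of (name, arrival, burst),
    # listed in arrival order.
    pending = list(processes)
    t = 0
    completion = {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:
            t = min(p[1] for p in pending)    # idle until the next arrival
            continue
        # Response ratio = (waiting so far + burst) / burst.
        job = max(ready, key=lambda p: ((t - p[1]) + p[2]) / p[2])
        t += job[2]                            # run the chosen job to completion
        completion[job[0]] = t
        pending.remove(job)
    return completion

done = hrrn([("P0", 0, 3), ("P1", 2, 6), ("P2", 4, 4),
             ("P3", 6, 5), ("P4", 8, 2)])
```

This reproduces the completion times traced above: P0 at 3, P1 at 9, P2 at 13, P4 at 15, P3 at 20.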
UNIT V
Device and Information Management
Disk Performance Optimization
Operation of Moving Head Disk Storage
 Data is recorded on a series of magnetic disks or platters.
 These disks are connected by a common spindle that spins at very high speed.
 The data is accessed (i.e. either read or write) by a series of read-write heads, one head per
disk surface.
 A read-write head can access only data immediately adjacent to it.
 So the portion of the disk surface on which the data is to be read or written must rotate
until it is immediately below (or above) the read-write head.
 The time it takes for data to rotate from its current position to a position adjacent to the
read-write head is called the rotational latency time.
Actuator or Boom or Moving Arm Assembly
 All read-write heads are attached to a single boom or moving arm assembly or actuator.
 When the boom moves the read-write heads to a new position, a different set of circular tracks
become accessible.
 Cylinder - for a particular position of the boom, the set of tracks swept out by all the
read-write heads forms a vertical cylinder.
 Seek Operation - the process of moving the boom to a new cylinder.
 Latency - the rotational delay from the data's current position to a position under the
read-write head.
 Transmission Time - the time for the record, which may be of arbitrary size, to spin past the
read-write head. The total time taken to access a particular record is a fraction of a second.

Need for Disk Scheduling

 To minimize time spent seeking records, it seems reasonable to order the request queue in
some manner other than first-come-first-served (FCFS).
 This process is called disk scheduling; FCFS, which produces a random seek pattern (the
numbers in such a pattern indicate the order in which the requests arrived), is sometimes
viewed as the simplest disk schedule.
 A disk scheduler examines the positional relationships among waiting requests.
 The request queue is then reordered, so that the requests will be serviced with minimum
mechanical movement.
 Under medium to heavy loading conditions, scheduling gives much better performance than FCFS.
Characteristics of Disk Scheduling
 Scheduling should maximize throughput (the number of requests serviced per unit time).
 Scheduling should minimize the mean response time (average waiting time plus average service time).
 Scheduling should minimize the variance of response times (i.e. maximize predictability);
variance is a mathematical measure of how far individual items tend to deviate from the
average of the items.
Seek Optimization
1. First Come First Serve
 There is no reordering of the queue.
 In FCFS scheduling, the first request to arrive is the first one serviced.
 Once a request has arrived its place in the schedule is fixed.
 A request cannot be displaced by the arrival of a higher priority request.
 When requests are uniformly distributed over the disk surface, FCFS results in a random seek
pattern.
 Disadvantage
o FCFS is acceptable when the load on disk is light.
o But as the load grows FCFS tends to saturate the device and response time become
large.
 Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67.

2. Shortest Seek Time First


 The disk arm is positioned next at the request, inward or outward, that minimizes arm movement.
 In SSTF, the request that results in the shortest seek distance is serviced next, even if that
request is not the first one in the queue.
 SSTF is a cylinder-oriented scheme.
 In SSTF, the innermost and outermost tracks receive poor service compared with the mid-range
tracks.
 SSTF has better throughput rates than FCFS.
 SSTF is useful in batch processing systems.
 Disadvantage
o Higher variances occur on response times.
 Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67.
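The benefit of SSTF over FCFS can be quantified as total head movement for the request queue above. This sketch assumes the head starts at cylinder 53 (as in the SCAN examples that follow) and breaks a seek-distance tie in favour of the request that arrived first:

```python
def fcfs_seek(head, requests):
    # Total head movement when requests are serviced in arrival order.
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def sstf_seek(head, requests):
    # Total head movement when the nearest pending request is serviced next.
    pending = list(requests)
    total = 0
    while pending:
        r = min(pending, key=lambda c: abs(head - c))  # ties: earliest arrival
        total += abs(head - r)
        head = r
        pending.remove(r)
    return total

queue = [98, 183, 41, 122, 14, 124, 65, 67]
```

For this queue, fcfs_seek(53, queue) gives 632 cylinders of movement, while sstf_seek(53, queue) gives 323 under this tie-breaking rule, roughly half as much.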

3. SCAN Scheduling
 The disk arm sweeps back and forth across the disk surface, servicing all requests in its path.
 It changes direction only when there are no more requests to service in the current direction.
 The SCAN scheduling strategy was developed to overcome the high variance in response times of SSTF.
 SCAN operates like SSTF except that it chooses the request that results in the shortest seek
distance in a preferred direction. It is sometimes called the "elevator algorithm".
 Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67. The head is initially at cylinder number 53 moving towards larger cylinder numbers
on its servicing pass.
4. C - SCAN Scheduling
 The disk arm moves unidirectionally across the disk surface toward the inner track.
 When there are no more requests to service, it jumps back to service the request nearest the outer
track and proceeds inward again.
 The arm moves from the outer cylinder to the inner cylinder, servicing requests on a shortest-seek
basis.
 When the arm has completed its inward sweep, it jumps to the request nearest the outermost
cylinder and then resumes its inward sweep, processing requests.
 C-SCAN has a very small variance in response times.
 Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67. The head is initially at cylinder number 53 moving towards larger cylinder numbers
on its servicing pass.
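The service orders produced by SCAN and C-SCAN for the same queue can be sketched as follows, again assuming the head starts at cylinder 53 and moves toward larger cylinder numbers first:

```python
def scan(head, requests):
    # SCAN (elevator): service everything at or beyond the head on the way
    # up, then reverse and service the rest on the way back down.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def c_scan(head, requests):
    # C-SCAN: service everything on the way up, jump back to the lowest
    # pending cylinder, and sweep upward again.
    up = sorted(r for r in requests if r >= head)
    wrap = sorted(r for r in requests if r < head)
    return up + wrap

queue = [98, 183, 41, 122, 14, 124, 65, 67]
```

scan(53, queue) services 65, 67, 98, 122, 124, 183 and then 41 and 14 on the return sweep; c_scan(53, queue) instead wraps around to 14 and then 41.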

5. N - Step SCAN
 The disk arm sweeps back and forth as in SCAN, but all requests that arrive during a sweep in
one direction are batched and reordered for optimal service during the return sweep.
 On each sweep, the first N requests are serviced.
 N-step SCAN offers good performance in throughput and mean response time.
 It avoids the possibility of indefinite postponement occurring if a large number of requests
arrive for the current cylinder.
 New requests are saved in a queue and serviced on the return sweep.
 Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 122, 14, 124, 65,
67 and 41 (which arrives while 122 is being processed).
File and Database Systems
Introduction
 A file is a collection of data, recorded on a secondary storage device such as a disk or floppy.
 It may be manipulated as a unit by operation such as,
o Open - prepare a file to be referenced.
o Close - prevent further reference to a file until it is reopened.
o Create - built a new file.
o Destroy - remove a file.
o Copy - create another version of the file with new name.
o Rename - change the name of a file.
 Individual data items within the file may be manipulated by operation like,
o Read - input a data item to process from a file.
o Write - output a data item from a process to a file.
o Update - modify an existing data item in a file.
o Insert - hold a new data item to a file.
o Delete - remove a data item from a file.
o List – print or display the contents of a file.
 Files may be characterised by,
o Volatility - the frequency with which additions and deletions are made to a file.
o Activity - the percentage of a file records accessed during a given period of time.
o Size - this refers to the amount of information stored in the file.
o Location - the location of the file.
o Accessibility – restrictions placed on access to file data.
o Type - how the file data is used.
 File Systems Components are,
o Access Methods - which data stored in file is accessed.
o File Management - file to be stored, retrieve sharing and secure.
o Auxiliary Storage Management - allocating space for file on secondary storage.
o File Integrity Mechanism - the information in a file is uncorrupted.

Functions
 Users should be able to create, modify and delete files.
 Sharing of files should be taken carefully.
 Sharing requires controlled access, such as read access, write access, execute access, or
various combinations of these.
 An appropriate file structure should be available for each application.
 Users should be able to order the transfer of information between files.
 Backup and recovery facilities should be provided.
 Files should be secure and private.
 Files should be referenced by symbolic names.
 The system should provide a user-friendly interface.
 Users should not need to be concerned with the particular devices on which their data is stored.
File Organization
 This refers to the way the records of a file are arranged on secondary storage. The various schemes are:
 Sequential
o The records are placed in sequential physical order; the next record follows the
previous one. On devices such as magnetic tape and disk, files are arranged in sequential order.
 Direct
o Records are directly accessed by their physical addresses on a direct access storage
device. Hashing techniques are used to locate direct access files; the user can place the
records in any order.
 Indexed Sequential
o Records are arranged in logical sequence according to a key contained in each record. They
can be accessed either sequentially or directly through the use of indexed keys.
 Partitioned
o The file contains a number of sequential sub-files; each sub-file is called a member. The
starting address of each sub-file is stored in the file's directory. These are often used to store
program libraries or macro libraries.
Allocating and Freeing Space
 As files are created, the needed space is allocated, and when files are destroyed, their space
is freed, much as in primary storage allocation.
 This leads to a problem of fragmentation, which can be avoided by performing periodic
compaction or garbage collection.
 Files may be reorganized to occupy adjacent areas of the disk, and free areas may be collected
into a single block or a group of large blocks.
 Some systems perform compaction dynamically.
 This may not be useful for a large system with hundreds of users, because long seeks may be
needed as the system switches between processes.
 Contiguous Allocation
o Files are assigned to contiguous areas of secondary storage.
o A user specifies in advance the size of the area needed to hold the file to be created.
o If that size is not available, the file cannot be created.
 Advantages
o Successive logical records are physically adjacent to one another.
o This speeds access compared with systems in which successive logical records are
dispersed throughout the disk.
o File directories are straightforward to implement.
 Disadvantages
o As files are deleted, the freed space may not fit new files.
o Adjacent storage holes must be combined.
o Periodic compaction may be needed.
 Non Contiguous Allocation
o Sector oriented linked allocation
o Block allocation
 Sector Oriented
o The disk is viewed as individual sectors; the sectors belonging to a common file contain
pointers to one another, forming a linked list.
o A free space list contains entries for all free sectors on the disk.
o There is no need for compaction.
o A drawback of this allocation scheme is that retrieval of logically contiguous records can
take long seeks.
o The pointers in the list structure reduce the amount of space available for file data.
 Block Allocation
o In this scheme instead of allocating individual sectors, blocks of contiguous sectors
are allocated.
o Each access to the file involves finding the appropriate block and the corresponding
sector between the block.
o There are three ways to implement the block allocation
i. Block Chaining
ii. Index Block Chaining
iii. File Oriented File Mapping
1. Block Chaining
o A block contains a data block and a pointer to the next block.
o Locating a particular record requires searching the block chain until the appropriate
record is found.
o The chain must be searched from the beginning until the particular record is found.
2. Index Block Chaining
o Pointers are placed into separate index blocks. Each entry contains a record identifier
and a pointer to that record.
o If more than one index block is needed to describe a file, a series of index blocks is
chained together.
 Advantage
o Searching may take place in the index blocks themselves.
o Seek time is reduced.
o Index blocks can be kept close together in secondary storage to minimize seeking.
o If rapid searching is needed, the index blocks can be maintained in primary storage.
 Disadvantage
o Insertion can require the complete reconstruction of the index blocks, so some systems
leave a certain portion of the index blocks empty to provide for future insertions.
3. File Oriented File Mapping
o Instead of pointers embedded in the blocks, the system uses a file map of block numbers.
o Each entry in the file map contains the block number of the next block in that file.
o Nil indicates that the last block of a file has been reached.
o Free indicates that the block is available for allocation.
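A file map of this kind can be modelled as a table from block number to successor; the map contents below are hypothetical:

```python
# Hypothetical file map: block number -> next block number; "nil" marks the
# last block of a file and "free" marks a block available for allocation.
file_map = {4: 7, 7: 10, 10: "nil", 3: "free", 5: "free"}

def file_blocks(file_map, first_block):
    # Follow the chain of block numbers that makes up one file.
    blocks = []
    b = first_block
    while b != "nil":
        blocks.append(b)
        b = file_map[b]
    return blocks
```

A file whose first block is 4 occupies blocks 4, 7 and 10; blocks 3 and 5 remain allocatable.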

File Descriptor
 It is also called the file control block.
 It contains the information about the file that the system needs.
 It has descriptions like,
o Symbolic file name
o Location
o File organization
o Device type
o Access control data
o Type (data file, object program, source program, etc.)
o Disposition
o Creation date & time
o Destroy date
o Date and time last modified
o Access activity counts
 File descriptors are maintained on secondary storage.
 It is controlled by operating system.
 The user may not reference it directly.
Access Control Matrix
 A 2D matrix is used to list all the users and all the files in the system.
 In this matrix, if Aij = 1, user i is allowed to access file j; otherwise Aij = 0.
 This matrix is known as the Access Control Matrix.
 As the number of users increases, the number of files also increases.
 The matrix then becomes very large and very sparse.
 There is a need for coding to indicate the various kinds of access, such as read only, write only,
execute only and read/write.
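A tiny illustration of the matrix (the users and files here are hypothetical):

```python
# Access control matrix: rows are users, columns are files.
# A[i][j] == 1 means user i may access file j; 0 means access is denied.
A = [
    [1, 0, 1],  # user 0
    [0, 1, 0],  # user 1
]

def may_access(matrix, user, file):
    return matrix[user][file] == 1
```

For large systems, the sparseness noted above is why real implementations tend to store per-file access lists rather than the full matrix.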
Nandha Arts and Science College, Erode-52
Department of Computer Applications
System Software and Operating System (43A)

UNIT – 1 (SYSTEM SOFTWARE)

1. ____________ translates mnemonic instructions into machine code.


a. Assembler b. Compiler c. Interpreter d. Translator
2. Most system software is machine ____________
a. Independent b. Dependent c. Oriented d. None of the above
3. SIC stands for ____________
a. Sophisticated Instructional Computer b. Simplified Instructional Computer
c. Simplified Information Computer d. Sophisticated Information Computer
4. XE stands for ____________
a. Excess Equipment b. Extra Evaluation
c. Extra Equipment d. Excess Evaluation
5. Memory consists of ____________ bytes.
a. 4 bit b. 8 bit c. 6 bit d. 2 bit
6. There is a total of ____________ bytes in the computer memory.
a. 4096 b. 8192 c. 16384 d. 32768
7. There are ____________ registers in SIC machines.
a. Two b. Five c. Eight d. Ten
8. Each register is ____________ in length.
a. 8 bits b. 16 bits c. 24 bits d. 32 bits
9. Integers are stored as 24-bits ____________ number.
a. Binary b. Octal c. Hexadecimal d. Mnemonic
10. ____________ representation is used for negative value.
a. Signed Magnitude b. 1’s Complement
c. 2’s Complement d. Excess-3
11. Characters are stored using their ____________ codes.
a. 6 bit b. 7 bit c. 8 bit d. 16 bit
12. ____________ jumps to the subroutine, placing the return address in register L.
a. Jump b. Goto c. JSUB d. RSUB
13. ____________ returns by address to the address contained in register L.
a. Return b. Return To c. JSUB d. RSUB
14. TD stands for ____________
a. To Device b. Test Device c. Three Device d. Text Device
15. The maximum memory available on a SIC/XE system is ____________
a. 1 MB b. 1 GB c. 1 TB d. 1 PB
16. The ____________ is interpreted as a value between 0 and 1.
a. Fraction b. Exponent c. Flag d. Input
17. The _________ is interpreted as an unsigned binary number between 0 and 2047.
a. Fraction b. Exponent c. Flag d. Output
18. The standard version of the SIC machine uses only ____________ addressing.
a. Direct b. Indexed c. Program Counter d. Base
19. ____________ cannot be used with immediate or indirect addressing models.
a. Pointer b. Partition c. Indexing d. Redirecting
20. A ____________ is a system program that performed the loading function.
a. Linker b. Assembler c. Loader d. Compiler
21. The ____________ record is checked to verify that the correct program has been
presented for loading.
a. Define b. Refer c. Header d. Text
22. Loaders that allows for program relocation are called ____________ loader.
a. Relocating b. Bootstrap c. Absolute d. Simple
23. The ____________ record is the same as before except that there is a relocation bit
associated with each word of object code.
a. Define b. Refer c. Header d. Text
24. The relocation bit is gathered together into a ____________
a. Module b. Bit Mask c. Record d. Procedure
25. ____________ is the beginning address in memory where the linked program is to
be loaded.
a. PROGADDR b. CSADDR c. RETADDR d. ESTAB
26. ____________ contains the starting address assigned to the control section currently
begin scanned by the loader.
a. PROGADDR b. CSADDR c. RETADDR d. ESTAB
27. All external symbols are entered in ____________
a. PROGADDR b. CSADDR c. RETADDR d. ESTAB
28. ____________ means the routines are automatically retrieved from a library as they
are needed during linking.
a. Special Call b. Super Call c. Automatic Library Call d. No Call
29. Linking function is performed at execution time is called ____________ linking.
a. Static b. Compile Time c. Library d. Dynamic
30. The _______ perform linking operations before the program is loaded for execution.
a. Linking Loaders b. Linkage Editors c. Absolute Loaders d. Relative Loaders
31. ____________ linking is often used to allow several executing program to share one
copy of a subroutine or library.
a. Static b. Compile Time c. Library d. Dynamic
32. Dynamic linking function is also called ____________
a. Static Load b. Load-on-call c. Compile Time Load d. Library Load
33. ____________ will always contain the actual address of the next instruction.
a. Accumulator b. Base c. Index d. Program Counter
34. State any three types of records ____________
a. Header, Test, End b. Header, Text, End
c. Hyper, Test, End d. Hyper, Text, End
35. Control is simply passed from dynamic loader is called ____________
a. Routine b. Block c. Define d. Record
5 Marks

1. Discuss the basic function loader.


2. Write down the features of machine independent loader.
3. What is system software? What are its main components?
4. Summarize about absolute loader algorithm with example.
5. Write note on Pass 1 algorithm.
6. Write note on Pass 2 algorithm.
7. Explain about automatic library search in machine independent loader feature.

8 Marks

1. Discuss on the following:


(i) Linkage Editor.
(ii) Bootstrap Loader.
2. Write in detail about System software and machine architecture.
3. Discuss on the loader design options:
(i) Linkage Editor.
(ii) Dynamic linking
4. Explain in detailed about basic loader function with example.
5. Write about the architecture of SIC standard model.
6. Write about the architecture of SIC / XE model.
7. Discuss about machine dependent loader features
(i) Relocation
8. Discuss about machine dependent loader features
(i) Program Linking

UNIT – 2 (SYSTEM SOFTWARE)

1. A high-level programming language is usually described in terms of a __________


a. Pattern b. Procedure c. Blocks d. Grammar
2. The task of scanning the source statement, recognizing and classifying the various
tokens is known as ____________ Analysis.
a. Lexical b. Syntax c. Semantic d. Symmetric
3. Basic part of the compiler that performs the analytic function is called as
____________
a. Parser b. Linker c. Loader d. Scanner
4. A grammar for a programming language is a formal description of the
____________
a. Lexical b. Syntax c. Semantic d. Symmetric
5. ____________ is taken to mean any terminal symbol
a. Operand b. Operator c. Literal d. Constant
6. BNF stands for ____________
a. Block Naur Form b. Bunch Naur Form
c. Backus Naur Form d. Bulk Naur Form
7. The possible BNF grammar for a highly restricted subset of the ____________
language.
a. COBOL b. PASCAL c. PYTHON d. FORTRAN
8. Parsing techniques are divided into two general classes ____________
a. Bottom-up & Data-centric b. Top-down & Process-centric
c. Data-centric & Process-centric d. Top-down & Bottom-up
9. ____________ is used to keep a count of the number of items currently in the list.
a. Count b. Maximum c. List Count d. Capacity
10. The purpose of a ____________ is to translate programs written in high-level
programming languages into machine language.
a. Translator b. Interpreter c. Compiler d. Assembler
11. ____________ is normally done by considering an intermediate form of the
program being compiled.
a. Lexical Analysis b. Syntax Analysis
c. Semantic Analysis d. Code Optimization
12. The executable instructions of the program with a sequence of ____________
a. Statements b. Mnemonics c. Quadruples d. Blocks
13. A ____________ is a sequence of quadruples with one entry point.
a. Basic block b. Procedure c. Container d. Subroutines
14. A ____________ operation is also usually considered to begin a new basic block.
a. New b. Create c. Call d. Connect
15. A typical generation of machine code from these quadruples using only a
____________
a. Single Register b. Two Registers c. Many Registers d. Without register
16. If each integer variable occupies ____________ word of memory.
a. One b. Two c. Three d. Four
17. Most ____________ compilers store arrays in column-major order.
a. COBOL b. PASCAL c. PYTHON d. FORTRAN
18. One important source of code optimization is the ____________
a. Elimination of common sub expressions b. Elimination of blocks
c. Elimination of subroutines d. Elimination of procedures
19. Merging of the bodies of loops is called as ____________
a. Loop inverse b. Loop jamming c. Loop reverse d. Loop survey
20. The first program has been caused by the ____________
a. OS or Loader b. OS or Linker c. OS or Executer d. OS or Runner
21. The first word in an activation record would normally contain a pointer __________
a. BASE b. PREV c. NEXT d. LAST
22. The second word of the activation record contains a pointer ____________
a. BASE b. PREV c. NEXT d. LAST
23. One common method for providing access of variable surrounding blocks uses a
data structure called a ____________
a. Array b. Vector c. Display d. Showcase

24. The compiler was driven by the ____________ process.
a. Translating b. Parsing c. Asymmetric d. Symmetric
25. A ____________ was invoked as each language construct was recognized by the
parser.
a. Code generation routine b. Code adopting routine
c. Code reading routine d. Code addressing routine
26. ____________ processes a source program written in a high-level language just as
the compiler does.
a. Translator b. Assembler c. Interpreter d. Processor
27. An interpreter usually performs ____________ analysis functions.
a. Lexical & Syntactic b. Lexical & Symmetric
c. Syntactic & Symmetric d. Lexical & Semantic
28. P-code compilers are also called ____________ compilers.
a. Port code b. Parse code c. Byte code d. Simple code
29. A ____________ can be used without modification on a wide variety of systems.
a. Single pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
30. A ____________ is a software tool that can be used to help in the task of compiler
construction.
a. Single pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
31. In some languages a program can be divided into units called ____________
a. Statements b. Mnemonics c. Quadruples d. Blocks
32. Lexical analyser is also known as ____________
a. Scanner b. Parser c. Code generator d. Code optimizer
33. Syntax analyser is also known as ____________
a. Scanner b. Parser c. Code generator d. Code optimizer
34. A ____________ is used when a program is executed only a few times.
a. One pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
35. A ____________ is designed for a single-user microcomputer system.
a. Single pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
36. A compiler-compiler is also known as ____________
a. Compiler unit b. Translator writing system
c. Compiler generation d. Both b and c
37. The READ and WRITE statements are represented with a ___________ operation.
a. Call b. Convert c. Design d. Execute
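Questions 12–19 above concern quadruples, basic blocks, and the elimination of common sub-expressions. As a hedged study sketch (the `(op, arg1, arg2, result)` layout and the temporary names `t1`, `t2` are illustrative assumptions, not part of the question paper), the following shows the quadruples for `x = (a + b) * (a + b)` and how the duplicate addition is removed within a single basic block:

```python
# Illustrative quadruples for x = (a + b) * (a + b).
# The (op, arg1, arg2, result) layout and temporaries t1, t2 are assumptions.
quads = [
    ("+", "a", "b", "t1"),
    ("+", "a", "b", "t2"),
    ("*", "t1", "t2", "x"),
]

def eliminate_common_subexpressions(quads):
    """Drop a recomputed (op, arg1, arg2) and reuse the earlier result.

    Valid only inside one basic block (one entry point), since a jump
    into the middle could invalidate the recorded values.
    """
    seen = {}      # (op, arg1, arg2) -> temporary already holding that value
    rename = {}    # eliminated temporary -> surviving temporary
    out = []
    for op, a1, a2, res in quads:
        a1, a2 = rename.get(a1, a1), rename.get(a2, a2)
        key = (op, a1, a2)
        if key in seen:
            rename[res] = seen[key]    # reuse earlier result, drop this quadruple
        else:
            seen[key] = res
            out.append((op, a1, a2, res))
    return out

optimized = eliminate_common_subexpressions(quads)
# the duplicate "+" disappears and the "*" reads t1 twice
```

Running the sketch leaves only two quadruples, `("+", "a", "b", "t1")` and `("*", "t1", "t1", "x")`, which is the effect question 18 refers to.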

5 Marks

1. Write short notes on the p-code compiler.
2. Discuss briefly about the machine independent code optimization.
3. Write short notes on phases of the compiler.
4. Write about the functioning of one-pass and multi-pass compilers.
5. Write short notes on intermediate form of the program.
6. Elucidate about interpreters.
7. Explain about compilers-compilers.

8 Marks

1. Discuss in detail about
(i) Lexical analyzer
(ii) Code generation.
2. Give a detailed note on machine dependent compiler features.
3. What is an interpreter? Explain.
4. Explain about the structured variables in machine independent compiler features.
5. Discuss about p-code compiler in detail.
6. Discuss about compiler design options
(i) Division into passes
(ii) Interpreters
7. Discuss about compiler design options
(i) p-code compilers
(ii) Compilers-compilers
8. Write about machine independent compiler features:
(i) Storage allocation
9. Write about machine independent compiler features:
(i) Block structured languages

UNIT – 3 (OPERATING SYSTEM)

1. The term ___________ was first used by the designers of the Multics system in the
1960s.
a. Process b. Processor c. Operation d. Process States
2. Process is ___________
a. Program in execution b. Animated spirit of a procedure
c. Dispatchable unit d. All the above
3. A process may be in one of the ___________ states.
a. Ready, Running & Blocked b. Create, Running & Blocked
c. Create, Ready & Running d. Ready, Running & Stopped
4. The ___________ sets a hardware interrupting clock or interval timer to allow the
user to utilize the processor.
a. Operating System b. System Module c. A Program d. A Switch
5. The manifestation of a process in an operating system is a ___________
a. Process Description b. Process Control Block (PCB)
c. Both a & b d. Process Matrix
6. The specific time interval for which the user is allowed to run is also known as ___________
a. Time Slice b. Quantum c. Both a & b d. Random Time
7. The assignment of the CPU to the first process on the ready list is called
___________
a. Transferring b. Locating c. Dispatching d. Linking
8. ___________ is an event that alters the sequence in which a processor executes
instructions.
a. Interrupt b. Dispatcher c. Scheduler d. Monitor
9. An interrupt specifically initiated by the running process is called ___________
a. Trap b. Synchronous c. Both a & b d. Asynchronous
10. An interrupt that may or may not be related to the running process is called ___________
a. Asynchronous b. Synchronous c. Asymmetric d. Symmetric
11. SVC stands for ___________
a. Supervisor Call Interrupt b. System Visit Call Interrupt
c. System Vision Call Interrupt d. System Visual Call Interrupt
12. A ___________ instruction arriving from another processor on a multiprocessor
system will restart the system.
a. Signal Processor (SIGP) b. Start Processor
c. Stop Processor d. Restart Processor
13. A ___________ is caused by a wide range of problems that may occur as a program's
machine-language instructions are executed.
a. Program Check Interrupt b. Machine Check Interrupt
c. Input Output Interrupt d. External Interrupt
14. Main storage is divided into portions of the same size called ___________
a. Storage Organization b. Partition c. Storage Management d. Segments
15. A ___________ is caused by malfunctioning hardware.
a. Program Check Interrupt b. Machine Check Interrupt
c. Input Output Interrupt d. External Interrupt
16. ___________ defines the manner in which the main storage is viewed.
a. Storage Organization b. Partition c. Storage Management d. Processor
17. ___________ strategies are concerned with obtaining the next piece of program or
data from secondary storage into main storage.
a. Fetch b. Placement c. Replacement d. All the three
18. In ___________, data is brought into main storage only when it is referenced by a
running program.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
19. ___________ strategies are concerned with determining where in main storage to place
incoming programs.
a. Fetch b. Placement c. Replacement d. All the three
20. Today many researchers feel that ___________ will yield improved system
performance.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
21. In ___________, a program had to occupy a single contiguous block of storage locations.
a. Contiguous storage allocation b. Non-contiguous storage allocation
c. Overlay storage allocation d. All the three

22. A program is divided into several blocks is called ___________
a. Segments b. Paging c. Partitions d. Blocks
23. Virtual storage systems have obviated the need for programmer-controlled
___________
a. Segments b. Paging c. Partitions d. Overlays
24. ___________ occurs in every computer system regardless of its storage
organization.
a. Storage fragmentation b. Storage Overlay
c. Storage compaction d. None of the above
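The process-state questions above (3, 4, 6, and 7) can be summarised in a small transition table. A minimal sketch, assuming the usual Ready/Running/Blocked model; the event names (`dispatch`, `timeout`, `block`, `wakeup`) are illustrative assumptions:

```python
# A minimal sketch of the three-state process model quizzed above
# (Ready, Running, Blocked); the event names are illustrative assumptions.
TRANSITIONS = {
    ("Ready", "dispatch"):  "Running",  # CPU assigned to first process on the ready list
    ("Running", "timeout"): "Ready",    # interval timer expires after one quantum
    ("Running", "block"):   "Blocked",  # process waits for an event such as I/O completion
    ("Blocked", "wakeup"):  "Ready",    # the awaited event occurs
}

def next_state(state, event):
    """Return the state a process enters when the given event occurs."""
    return TRANSITIONS[(state, event)]

state = "Ready"
for event in ["dispatch", "timeout", "dispatch", "block", "wakeup"]:
    state = next_state(state, event)
# the process ends up back on the ready list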

5 Marks

1. Write a brief note on the functions of an operating system.
2. Describe briefly about the Life cycle of process.
3. Make notes on history of DOS.
4. Discuss about interrupt processing.
5. Write note on overlay structure.
6. Elucidate the concept of protection in single user system.
7. Compare contiguous with non-contiguous storage allocation.

8 Marks

1. Discuss about variable partition multiprogramming.
2. Explain the concept of real storage management strategies.
3. Give a detailed note on single user contiguous storage allocation.
4. Discuss about fixed partition multiprogramming.
5. Elaborate the process state transition with neat diagram.

UNIT – 4 (OPERATING SYSTEM)

1. Deciding when a page or segment should be brought from secondary to primary
storage is the ___________ strategy.
a. Fetch b. Placement c. Replacement d. All the three
2. The ___________ strategy waits for a process to reference a page or segment before
loading it.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
3. The ___________ strategy attempts to determine what pages will be referenced by a
process.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
4. The ___________ strategy determines where in primary storage to place an incoming
page or segment.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
5. A few placement strategies are ___________
a. First, Best, Worst & Buddy b. First, Best, Worst & Last
c. First, Second, Best & Worst d. Second, Best, Worst & Buddy
6. Deciding which page or segment to remove from main memory to make more space is
the ___________ strategy.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
7. Which of the following is/are replacement strategies ___________
a. First In First Out & Clock b. Second Chance & Random
c. Least Recently Used d. All of the above
8. Incoming pages are placed in any available page frame, so ___________ systems
trivialize the placement decision.
a. Paging b. Fragmentation c. Compaction d. Replacement
9. Deciding which page in primary storage to displace (or remove) to make room
for an incoming page is the ___________ strategy.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
10. Replacing the page that will not be used again for the furthest time into the
future is called ___________
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
11. The principle of optimality is called ___________
a. OPT or MIN b. Second Chance c. Trivial d. Non Trivial
12. ___________ replacement selects any page at random for replacement.
a. First In First Out b. Second Chance
c. Random Page d. The Principle of Optimality
13. Random page replacement is a rarely used ___________ approach.
a. Hit b. Miss c. Hit or Miss d. Static
14. ___________ chooses the page that has been in storage the longest.
a. First In First Out b. Second Chance
c. Random Page d. The Principle of Optimality
15. In the ___________ strategy, pages are placed at the tail of the queue and replaced
from the head of the queue.
a. First In First Out b. Second Chance
c. Random Page d. The Principle of Optimality
16. The ___________ strategy selects for replacement the page that has not been used for
the longest time.
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
17. In the ___________ strategy, a cache block is removed whenever the cache
overflows.
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
18. In the referenced bit used by LRU, a page that has not been referenced is denoted as ________
a. Zero b. One c. Asterisk d. Hyphen
19. In the referenced bit used by LRU, a page that has been referenced is denoted as ___________
a. Zero b. One c. Asterisk d. Hyphen

20. Replacing the page that has been least frequently used is a variant of ___________
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
21. In the modified bit used by LRU, a page that has not been modified is denoted as _________
a. Zero b. One c. Asterisk d. Hyphen
22. In the modified bit used by LRU, a page that has been modified is denoted as ___________
a. Zero b. One c. Asterisk d. Hyphen
23. The modified bit in LRU is often called the ___________
a. Dirty bit b. Refer bit c. Later bit d. Update bit
24. If the CPU cannot be taken away from a process, the scheduling discipline is
called ___________
a. Preemptive b. Non Preemptive c. Static d. Dynamic
25. If the CPU can be taken away from a process, the scheduling discipline is
called ___________
a. Preemptive b. Non Preemptive c. Static d. Dynamic
26. ___________ mechanisms do not change.
a. Preemptive b. Non Preemptive c. Static Priority d. Dynamic Priority
27. ___________ mechanisms are responsive to change.
a. Preemptive b. Non Preemptive c. Static Priority d. Dynamic Priority
28. A user with a rush job may be willing to pay a premium; such a priority is called
___________
a. Purchased b. Assigned Priority c. Static Priority d. Dynamic Priority
29. Scheduling certain jobs to be completed by a specific time is called
___________ scheduling.
a. Static b. Dynamic c. Deadline d. Timing
30. The waste of storage due to excessively large tables is called ___________
a. Paging b. Segmentation c. Fragmentation d. None of the above
31. Real storage is normally divided into fixed-size page frames in a ___________
system.
a. Paging b. Segmentation c. Fragmentation d. None of the above
32. Reducing the ___________ of a process's page waits is an important goal of storage
management strategies.
a. Space Time Product b. Space c. Time d. None of the three
33. The process of loading the page into memory is known as ___________
a. Demand Paging b. Segmentation c. Fragmentation d. Thrashing
34. That the path of execution a program will take cannot be accurately predicted is
called the ___________ problem.
a. Exiting b. Halting c. Execution d. Path
35. The time between page faults is called the ___________
a. Inter fault time b. Inter page time c. Inter fetch time d. All the three
36. A view of program paging activity called the working set theory of program behaviour
was developed by ___________
a. Denning b. Ritchie c. Henry d. Bench
37. A program that repeatedly requests pages from secondary storage is said to be ________
a. Demand Paging b. Segmentation c. Fragmentation d. Thrashing
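The replacement-strategy questions above (14–16) can be checked by counting page faults directly. A minimal sketch; the reference string and frame counts are illustrative assumptions:

```python
# Page-fault counting for the FIFO and LRU strategies quizzed above.
# The reference string and frame counts are illustrative assumptions.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """FIFO: replace the page that has been in storage the longest."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())  # replace from the head of the queue
            queue.append(page)                     # new page goes to the tail of the queue
            resident.add(page)
    return faults

def lru_faults(refs, frames):
    """LRU: replace the page that has not been used for the longest time."""
    recency, faults = OrderedDict(), 0
    for page in refs:
        if page in recency:
            recency.move_to_end(page)              # page referenced again: most recent now
        else:
            faults += 1
            if len(recency) == frames:
                recency.popitem(last=False)        # evict the least recently used page
            recency[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]        # illustrative reference string
```

For this string, `fifo_faults(refs, 3)` is 9 while `lru_faults(refs, 3)` is 10; notably, `fifo_faults(refs, 4)` is 10, so giving FIFO an extra frame here *increases* the fault count (Belady's anomaly).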

5 Marks

1. Explain briefly about the working set model.
2. What are the basic concepts of virtual memory?
3. Discuss about virtual storage management strategies.
4. Compare and contrast preemptive with non-preemptive scheduling.
5. Write note on page size.
6. Compare paging with segmentation.
7. Short notes on scheduling objectives.
8. State the concepts of priorities.
9. Short notes on deadline scheduling.

8 Marks

1. Explain briefly about page replacement strategies.
2. Discuss in detailed about virtual storage management strategies.
3. Write short note on demand page memory management.
4. Explain about working sets in detail.
5. Discuss in detail about
(i) Paging
(ii) Segmentation
6. Elaborate job and processor scheduling.

UNIT – 5 (OPERATING SYSTEM)

1. Data is recorded on a series of ___________
a. Magnetic disks b. Platters c. Magnetic disks & Platters d. Hard disk
2. Disks are connected by a common ___________ that spins at very high speed.
a. Moving arm b. Boom c. Spindle d. Head
3. ___________ can access only data immediately adjacent to it.
a. Read head b. Write head c. Read/Write head d. Head
4. The time it takes for data to rotate from its current position to a position adjacent to
the read-write head is called ___________
a. Latency time b. Transaction time c. Read time d. Write time
5. The process of moving the boom to a new cylinder is called ___________
a. Read b. Write c. Seek d. Load
6. At a particular position of the boom, the set of tracks swept out by all the read-write
heads forms a vertical ___________
a. Sketch b. Latency c. Selection d. Cylinder
7. ___________ methods are concerned with the manner in which data is stored in files.
a. Create b. Delete c. Access d. Remove
8. First Come First Serve exhibits a ___________ seek pattern in which successive requests
can cause time-consuming seeks from the innermost to the outermost cylinder.
a. Static Seek b. Random Seek c. Special Seek d. Purchased Seek
9. ___________ does not reorder the request queue.
a. First Come First Serve b. Shortest Seek Time First
c. SCAN d. N Step SCAN
10. In ___________, the disk arm is positioned next at the request that minimizes arm
movement.
a. First Come First Serve b. Shortest Seek Time First
c. SCAN d. N Step SCAN
11. One simple modification to the basic scan strategy is called ___________
a. First Come First Serve b. Shortest Seek Time First
c. SCAN d. N Step SCAN
12. ___________ is concerned with providing the mechanism for files to be stored,
referenced, shared, and secured.
a. File management b. Storage management
c. Memory management d. All the three
13. ___________ is concerned with allocating space for files on secondary storage devices.
a. File management b. Primary storage management
c. Auxiliary storage management d. All the three
14. ___________ is concerned with guaranteeing that the information in file is
uncorrupted.
a. File integrity b. Information integrity c. Data integrity d. All the three
15. ___________ organization refers to the manner in which the records of a file are
arranged on secondary storage.
a. File b. Data c. Record d. Information
16. In ___________ organization, records are accessed directly by their physical address
on the storage device.
a. Sequential b. Direct c. Indexed Sequential d. Partitioned
17. A ___________ file is a file of sequential sub-files.
a. Sequential b. Direct c. Indexed Sequential d. Partitioned
18. Each sequential sub-file is called a ___________
a. Member b. Program c. Module d. Index
19. The starting address of each member is stored in the ___________ Directory.
a. File b. Member c. Module d. All the three
20. In ___________, files are assigned to contiguous areas of secondary storage.
a. Contiguous storage allocation b. Non-contiguous storage allocation
c. Overlay storage allocation d. All the three
21. A ___________ is a control block containing information the system needs to
manage a file.
a. File Descriptor b. File Control Block (FCB)
c. Both a & b d. File Control Matrix (FCM)

22. A ___________ contains entries for all free sectors on the disk.
a. Free space list b. Used space list c. Space list d. None of the above
23. One way to control access to files is to create a two-dimensional ___________
a. Process Description b. Process Control Block (PCB)
c. File Descriptor d. Access Control Matrix
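The seek-optimization questions above (FCFS in question 9, SSTF in question 10) can be verified numerically. A minimal sketch; the cylinder request queue and the starting head position are illustrative assumptions:

```python
# Total arm movement under the FCFS and SSTF seek-optimization strategies
# quizzed above; the request queue and start cylinder are illustrative assumptions.
def fcfs_seek(start, requests):
    """FCFS: serve requests in arrival order, with no reordering of the queue."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)   # arm travels to the next request as it arrived
        pos = cyl
    return total

def sstf_seek(start, requests):
    """SSTF: always serve next the request that minimizes arm movement."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda c: abs(c - pos))  # closest cylinder
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # cylinder requests (assumed)
```

With the head starting at cylinder 53, `fcfs_seek(53, queue)` travels 640 cylinders while `sstf_seek(53, queue)` travels only 236, which illustrates why SSTF "minimizes arm movement" in question 10.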

5 Marks

1. Describe the characteristics of moving-head disk storage.
2. Write brief note on file descriptor.
3. Mention the manipulation of the file system.
4. What is the need for disk scheduling? Explain.
5. Narrate about the access control matrix and file descriptor.
6. Write about operation of moving head disk storage.
7. Discuss the following
(i) File characteristics
(ii) File components
8. Showcase the various schemes in file organization.
9. Write notes on allocating, freeing space and contiguous allocation.

8 Marks

1. Elaborate on the concept of a file system.
2. Write in detailed about access control matrix.
3. Write detailed note on File system implementation.
4. Discuss the following seek optimization techniques
(i) FCFS
(ii) SSTF
(iii) SCAN
5. Explain the functions of file systems.
6. Discuss the following seek optimization techniques
(i) N Step SCAN
(ii) C- SCAN
(iii) Eschenbach scheme
7. Discuss the following block allocation techniques
(i) Block chaining
(ii) Index block chaining
(iii) Block-oriented file mapping
