System Software & Operating System
Unit – 1: SYSTEM SOFTWARE
Introduction – System software and machine architecture. Loaders and Linkers: Basic loader functions – Machine dependent loader features – Machine independent loader features – Loader design options.
Unit – 2: MACHINE AND COMPILER
Unit – 3: OPERATING SYSTEM
Software
Software is a set of instructions, data or programs used to operate computers and execute
specific tasks. It is the opposite of hardware, which describes the physical aspects of a computer.
Software is a generic term used to refer to applications, scripts and programs that run on a
device.
Classifications of Software
1. Freeware
Freeware software is available without any cost. Any user can download it from the
internet and use it without paying any fee. However, freeware does not provide any liberty for
modifying the software or charging a fee for its distribution.
Examples are Adobe Reader, Audacity, ImgBurn, Recuva, Skype, Team Viewer and Yahoo
Messenger.
2. Shareware
It is software that is freely distributed to users on a trial basis. It usually comes with a
time limit and when the time limit expires, the user is asked to pay for the continued services.
There are various types of shareware like Adware, Donationware, Nagware, Freemium, and
Demoware (Crippleware and Trialware).
Some examples of shareware are Adobe Acrobat, Getright, PHP Debugger and Winzip.
3. Open-Source Software
These kinds of software are available to users along with the source code, which means that a user can freely distribute and modify the software and add additional features to it.
Open-Source software can either be free or chargeable.
Some examples of open-source software are Apache Web Server, GNU Compiler
Collection, Moodle, Mozilla Firefox and Thunderbird.
4. Proprietary Software
These are also known as closed-source software. Such applications are usually paid, and the source code is protected by intellectual property rights or patents. Their use is very restricted, and usually the source code is preserved and kept secret.
Some examples of closed source software are Skype, Google earth, Java, Adobe Flash,
Virtual Box, Adobe Reader, Microsoft office, Microsoft Windows, WinRAR, mac OS and
Adobe Flash Player.
Types of Software
1. Application Software
2. System Software
System software consists of a variety of programs that support the operation and use of a computer. Examples of system software are operating systems, compilers, assemblers, macro processors, loaders and linkers, debuggers, text editors, database management systems (some of them) and software engineering tools. This software makes it possible for the user to focus on an application or other problem to be solved, without needing to know the details of how the machine works internally.
One characteristic in which most system software differs from application software is
machine dependency.
System software supports operation and use of computer. Application software provides
solution to a problem. Assembler translates mnemonic instructions into machine code. The
instruction formats, addressing modes etc., are of direct concern in assembler design. Similarly,
Compilers must generate machine language code, taking into account such hardware
characteristics as the number and type of registers and the machine instructions available.
Operating systems are directly concerned with the management of nearly all of the resources of a
computing system.
There are aspects of system software that do not directly depend upon the type of
computing system, general design and logic of an assembler, general design and logic of a
compiler and code optimization techniques, which are independent of target machines. Likewise,
the process of linking together independently assembled subprograms does not usually depend
on the computer being used.
The Simplified Instructional Computer (SIC) has been designed to illustrate the most commonly encountered hardware features and concepts, while avoiding most of the peculiarities that are often found in real machines. SIC comes in two versions: the standard model and an extended version, SIC/XE.
The two versions have been designed to be upward compatible – that is, an object
program for the standard SIC machine will also execute properly on a SIC/XE system.
SIC MACHINE ARCHITECTURE
MEMORY
• Memory consists of 8-bit bytes; any 3 consecutive bytes form a word (24 bits).
• All addresses on SIC are byte addresses, words are addressed by the location of their
lowest numbered byte.
• There are a total of 32,768 bytes in the computer memory.
REGISTERS
• There are 5 registers, all of which have special uses: A (accumulator), X (index register), L (linkage register), PC (program counter) and SW (status word).
• Each register is 24 bits in length.
DATA FORMATS
• Integers are stored as 24-bit binary numbers, 2’s complement representation is used for
negative values.
• Characters are stored using their 8-bit ASCII codes.
• There is no floating-point hardware on the standard version of SIC.
INSTRUCTION FORMATS
• All machine instructions on the standard version of SIC have the following 24-bit format:
opcode (8) | x (1) | address (15)
• The flag bit x is used to indicate indexed-addressing mode.
ADDRESSING MODES
1. Direct addressing mode
Ex: LDA TEN
2. Indexed addressing mode
Ex: STCH BUFFER, X
INSTRUCTION SET
• This includes instructions that load and store registers.
LDA – load accumulator
LDX – load index register
STA – store accumulator
STX – store index register
• It also includes integer arithmetic instructions ADD, SUB, MUL, DIV.
• All arithmetic operations involve register A and a word in memory, with the result being
left in the register.
• It also includes an instruction COMP that compares the value in register A with a word in
memory.
• It also includes jump instructions like,
JLT - less than
JEQ – equal
JGT – greater than.
• Two instructions are provided for subroutine linkage:
1. JSUB – jump to subroutine
2. RSUB – return from subroutine
INPUT AND OUTPUT
• Input and output are performed by transferring 1 byte at a time to or from the rightmost 8 bits of register A.
1. Test Device (TD):
• This instruction tests whether the addressed device is ready to send or receive a byte of data.
• The condition code is set to indicate the result of this test.
• A setting of < means the device is ready to send or receive; = means the device is not ready.
2. Read Data(RD)
3. Write Data(WD)
• A program needing to transfer data must wait until the device is ready (testing it with TD in a loop), then execute a Read Data or Write Data instruction.
SIC/XE MACHINE ARCHITECTURE
MEMORY
• The maximum memory available on a SIC/XE system is 1 megabyte (2^20 bytes).
• This increase leads to a change in instruction formats and addressing modes.
REGISTERS
• SIC/XE provides four additional registers: B (base register), S and T (general working registers) and F (a 48-bit floating-point accumulator).
DATA FORMATS
• The data formats are the same as on the standard SIC version.
• In addition, there is a 48-bit floating-point data type with the format:
s (1) | exponent (11) | fraction (36)
• The fraction f is a value between 0 and 1.
• The exponent e is an unsigned binary number between 0 and 2047.
• The value represented is f × 2^(e−1024); the sign bit s gives the sign of the number.
Instruction Formats:
• The new set of instruction formats from SIC/XE machine architecture is as follows.
• Format 1 (1 byte): contains only operation code (straight from table).
8
OP
• Format 2 (2 bytes): first eight bits for operation code, next four for register 1 and
following four for register 2.
8 4 4
OP R1 R2
• Format 3 (3 bytes): First 6 bits contain operation code, next 6 bits contain flags, last 12
bits contain displacement for the address of the operand. Operation code uses only 6 bits,
thus the second hex digit will be affected by the values of the first two flags (n and i). The
flags, in order, are: n, i, x, b, p, and e. The last flag e indicates the instruction format.
6 1 1 1 1 1 1 12
OP n i x b p e disp
• Format 4 (4 bytes): same as format 3 with an extra 2 hex digits (8 bits) for addresses
that require more than 12 bits to be represented.
6 1 1 1 1 1 1 20
OP n i x b p e address
Addressing mode
• Two new relative addressing modes, base relative and program-counter relative, are available for use with instructions assembled using format 3.
Instruction Set
• SIC/XE provides all of the instructions that are available on the standard version.
• In addition there are:
1. Instructions to load and store the new registers: LDB, STB, etc.
2. Floating-point arithmetic operations: ADDF, SUBF, MULF, DIVF.
3. A register move instruction: RMO.
4. Register-to-register arithmetic operations: ADDR, SUBR, MULR, DIVR.
5. A supervisor call instruction: SVC, which generates an interrupt that can be used for communication with the OS.
LOADERS AND LINKERS
Three processes are involved in preparing an object program for execution:
• Loading, which brings the object program into memory for execution.
• Relocation, which modifies the object program so that it can be loaded at an address different from the location originally specified.
• Linking, which combines two or more separate object programs and supplies the information needed to allow references between them.
• The most fundamental function of a loader is bringing an object program into memory and starting its execution.
Design of an absolute loader
• This loader does not need to perform functions like linking and program relocation.
• All operations are done in a single pass.
• The header record is checked to verify that the correct program has been presented for
loading.
• When each text record is read, the object code from the text record is moved to the indicated address in memory.
• When the end record is encountered, the loader jumps to the specified address to begin execution of the loaded program.
• The above figure shows a representation of the program from figure (a) after loading.
• The contents of memory locations for which there is no text record are shown as xxx.
• Although the above process is extremely simple, we have to consider the following
points.
1. Our object program is stored in hexadecimal format, i.e., each byte of assembled code is given using its hexadecimal representation in character form.
Ex: the machine code for an STL instruction (hexadecimal 14) is represented by the pair of characters “1” and “4”. When the loader reads them, they occupy two bytes; during loading, they are converted and stored as a single byte with hexadecimal value 14.
2. The above method of representing an object program is inefficient in terms of both
space and execution time.
3. Therefore, most machines store object programs in a binary form.
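The conversion in point 1 can be sketched in C. This is a minimal illustration under stated assumptions, not SIC's actual loader: the record layout (col 1 'T', cols 2-7 starting address, cols 8-9 length, col 10 onward object code) follows the record descriptions above, and the names memory, hexval and load_text_record are invented for the example.

#include <stdio.h>

static unsigned char memory[32768];        /* the 32,768 bytes of SIC memory */

/* Convert one character '0'-'9' or 'A'-'F' to its numeric value 0-15. */
static int hexval(int c) {
    return (c <= '9') ? c - '0' : c - 'A' + 10;
}

/* Load one text record, e.g. "T00100003141033": starting address 001000,
   length 03, object code 141033.  Each pair of characters in the object
   code is converted and stored as a single byte in memory. */
static void load_text_record(const char *rec) {
    unsigned addr = 0, len = 0, i;
    sscanf(rec + 1, "%6x%2x", &addr, &len);      /* address and length */
    const char *code = rec + 9;                  /* object code starts here */
    for (i = 0; i < len; i++)                    /* "1","4" -> byte 0x14 */
        memory[addr + i] = (unsigned char)((hexval(code[2*i]) << 4)
                                          | hexval(code[2*i + 1]));
}

For example, load_text_record("T00100003141033") stores the three bytes 14, 10 and 33 (hexadecimal) at locations 1000 to 1002.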
• The below source code is divided into 3 sections.
1. Header section.
2. Loop.
3. Subroutine - GETC
• The bootstrap reads object code from device F1 and enters into memory starting at
address 80.
• After all the code from device F1 has been entered into memory, the bootstrap executes a
jump to address 80 to begin execution of the program just loaded.
1. Header section:
• The bootstrap itself begins at address 0 in the memory of the machine.
• It loads the operating system starting at address 80 (Hexadecimal).
2. Loop section:
• The bootstrap reads object code from device F1 and enters it into memory starting at address 80.
• After all the object code from device F1 has been loaded, the bootstrap executes a jump to address 80 to begin execution of the program that was loaded.
• Register X contains the address of the next memory location to be loaded.
3. Subroutine – GETC:
• GETC reads one character from device F1 and converts it from its ASCII character code to the value of the corresponding hexadecimal digit.
• Ex: The ASCII code for the character “0” (hexadecimal 30) is converted to the
numeric value 0.
• Likewise, ASCII codes for “1” through “9” (hexadecimal 31 through 39) are
converted to the numeric values 1 through 9 and the codes for “A” through “F”
(hexadecimal 41 through 46) are converted to the values 10 through 15.
• The subroutine GETC jumps to address 80 when an end-of-file (hexadecimal 04)
is read from device F1.
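The ASCII-to-hex-digit conversion GETC performs can be written as a short C function. This is an illustrative sketch of the logic only; getc_value is an invented name, and the real GETC is a SIC/XE subroutine, not C.

/* Convert an ASCII character to the value of the hex digit it names:
   '0'-'9' (hex 30-39) -> 0-9, 'A'-'F' (hex 41-46) -> 10-15.
   A return of -1 signals end of file (hex 04). */
int getc_value(int ch) {
    if (ch == 0x04) return -1;   /* EOF: the bootstrap jumps to address 80 */
    ch -= 0x30;                  /* '0'..'9' become 0..9 */
    if (ch > 9) ch -= 7;         /* 'A'..'F' become 10..15 */
    return ch;
}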
BOOT START 0 BOOTSTRAP LOADER FOR SIC/XE
.
.
MACHINE DEPENDENT LOADER FEATURES
• In this section we consider the design and implementation of a more complex loader that
is used on a SIC/XE version.
• This loader provides for program relocation and linking and also for the simple loading
function.
RELOCATION
• The need for program relocation is an indirect consequence of the change to larger and more powerful computers.
• The way relocation is implemented in a loader is also dependent upon machine
characteristics.
• Loaders that allow for program relocation are called relocating loaders or relative loaders.
• The two methods for specifying relocation are:
1. Relocation by modification records.
2. Relocation by bit mask
Relocation by modification records
• A modification record is used to describe each part of the object that must be changed
when the program is relocated.
Modification record
col 1: M
col 2-7: Starting address of the field to be modified, relative to the beginning of the
control section (hexadecimal).
col 8-9: Length of the field to be modified, in half bytes (hexadecimal)
col 10: Modification flag (+/-)
col 11-17: External symbol whose value is to be added to or subtracted from the indicated
field.
• The following SIC/XE program is used for specifying relocation.
Line loc Source Statement object code
5 0000 COPY START 0
10 0000 FIRST STL RETADR 17202D
. . .
. . .
15 0006 CLOOP +JSUB RDREC 4B101036
. . .
. . .
35 0013 +JSUB WRREC 4B10105D
. . .
. . .
65 0026 +JSUB WRREC 4B10105D
. . .
. . .
115 SUBROUTINE TO READ RECORD INTO BUFFER
. . .
125 1036 RDREC CLEAR X B410
. . .
. . .
200 SUBROUTINE TO WRITE RECORD INTO BUFFER
. . .
. . .
210 105D WRREC CLEAR X B410
• Most of the instructions in the above program use relative or immediate addressing.
• The instructions on lines 15, 35 and 65 are in extended format and contain actual addresses whose values are affected by relocation.
• The following is an object program corresponding to the above source program.
• There is one modification record for each instruction that must be changed during relocation (3 modification records, for the instructions on lines 15, 35 and 65).
• Each modification record specifies the starting address and length of the field whose value is to be altered.
• In the above example, all modifications add the value of the symbol COPY, which
represents the starting address of the program.
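Applying a modification record can be sketched in C. This is a hedged sketch, not a textbook loader's code: it assumes the object code is already in a byte array (as in the earlier loading sketch), that the symbol's value has already been looked up, and that the field length is given in half-bytes as in the record layout above; apply_modification is an invented name.

/* Add (flag '+') or subtract (flag '-') an external symbol's value to a
   field of `halfbytes` hex digits beginning at byte `addr`.  A field of
   5 half-bytes is the 20-bit address part of a format 4 instruction; the
   leading half-byte of the first byte is preserved. */
void apply_modification(unsigned char *memory, long addr, int halfbytes,
                        char flag, long symval) {
    int nbytes = (halfbytes + 1) / 2;           /* bytes spanned by field */
    long field = 0, mask = (1L << (4 * halfbytes)) - 1;
    int i;
    for (i = 0; i < nbytes; i++)                /* fetch the stored value */
        field = (field << 8) | memory[addr + i];
    long keep = field & ~mask;                  /* bits outside the field */
    long val  = field & mask;
    val = (flag == '+') ? val + symval : val - symval;
    field = keep | (val & mask);
    for (i = nbytes - 1; i >= 0; i--) {         /* store the result back  */
        memory[addr + i] = (unsigned char)(field & 0xFF);
        field >>= 8;
    }
}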
Drawbacks:
• Modification records are not well suited to a machine that primarily uses direct addressing and has a fixed instruction format (such as the standard SIC), because nearly every instruction would need its own modification record.
Relocation by bit mask
• For such machines, relocation by bit mask is used instead.
• The standard SIC program is used to illustrate this method.
• The figure below shows the object program with relocation by bit mask.
• Here there are no modification records.
• Each text record contains a relocation bit associated with each word of object code.
• All SIC instructions occupy one word, so there is one relocation bit for each possible instruction.
• The relocation bits are gathered together into a bit mask.
• In the above figure the mask is represented in character form as three hexadecimal digits (these characters are underlined).
• A bit value of 0 indicates that no modification is necessary.
• A bit value of 1 indicates that the program's starting address is to be added to the instruction when the program is relocated.
• In the above example, the bit mask FFC in the first text record specifies that all 10 words (instructions) of object code are modified during relocation.
• The mask E00 in the second text record specifies that the first three words are to be modified:
FFC – 1111 1111 1100 – first 10 words
E00 – 1110 0000 0000 – first 3 words
• The other text records follow the same pattern.
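The bit-mask scheme can be sketched in C. This is a sketch under assumptions: each SIC word is treated as a 24-bit value to which the program's starting address is added, as the text describes, and a 12-bit mask (three hex digits, most significant bit first) covers the words of one text record; relocate_words is an invented name.

/* Relocate the words of one text record according to its bit mask.
   Bit i of the 12-bit mask (counting from the most significant bit)
   controls word i; a 1 bit means "add the starting address". */
void relocate_words(unsigned char *words, int nwords,
                    unsigned mask, unsigned long start_addr) {
    int i;
    for (i = 0; i < nwords; i++) {
        if (mask & (1u << (11 - i))) {           /* relocation bit is 1  */
            unsigned char *w = words + 3 * i;    /* each word is 3 bytes */
            unsigned long v = ((unsigned long)w[0] << 16)
                            | ((unsigned long)w[1] << 8) | w[2];
            v += start_addr;                     /* relocate this word   */
            w[0] = (v >> 16) & 0xFF;
            w[1] = (v >> 8) & 0xFF;
            w[2] = v & 0xFF;
        }
    }
}

With mask 0xFFC all 10 words of the first text record are relocated; with mask 0xE00 only the first three are, matching the example above.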
PROGRAM LINKING
• In this section we are going to see complex examples of external references between
programs and examine the relationship between relocation and linking
• Consider the following 3 separate program each consists of a single control section.
Loc Source statement Object code
0000 PROGA START 0
EXTDEF LISTA, ENDA
EXTREF LISTB, ENDB, LISTC,ENDC
.
.
0020 REF1 LDA LISTA 03201D
0023 REF2 +LDT LISTB+4 77100004
0027 REF3 LDX #ENDA-LISTA 050014
.
.
0040 LISTA EQU *
.
.
0054 ENDA EQU *
END REF1
Loc Source statement Object code
0000 PROGB START 0
EXTDEF LISTB, ENDB
EXTREF LISTA, ENDA, LISTC,ENDC
.
.
0020 REF1 +LDA LISTA 03100000
0023 REF2 LDT LISTB+4 772027
0027 REF3 +LDX #ENDA-LISTA 05100000
.
.
0060 LISTB EQU *
.
.
0070 ENDB EQU *
END REF2
2. Take REF2 & REF3:
• In PROGA, REF2 & REF3 refer to external symbols, so modification and linking are necessary.
• In PROGB, REF2 is a local reference.
3. No modification and linking:
• In PROGB, REF1 & REF3 refer to external symbols.
• In PROGC, REF3 is an immediate operand whose value is to be the difference between ENDA & LISTA.
PROGADDR:
• It is the beginning address in memory where the linked program is to be loaded.
• Its value is supplied to the loader by the operating system.
CSADDR:
• It contains the starting address assigned to the control section currently being scanned by the loader.
• This address is added to all relative addresses within the control section to convert them to actual addresses.
ESTAB:
• The external symbol table ESTAB stores the name and address of each external symbol in the set of control sections being loaded.
PASS 1 ALGORITHM:
begin
   get PROGADDR from operating system
   set CSADDR to PROGADDR {for first control section}
   while not end of input do
      begin
         read next input record {Header record for control section}
         set CSLTH to control section length
         search ESTAB for control section name
         if found then
            set error flag {duplicate external symbol}
         else
            enter control section name into ESTAB with value CSADDR
         while record type != 'E' do
            begin
               read next input record
               if record type = 'D' then
                  for each symbol in the record do
                     begin
                        search ESTAB for symbol name
                        if found then
                           set error flag {duplicate external symbol}
                        else
                           enter symbol into ESTAB with value (CSADDR + indicated address)
                     end {for}
            end {while != 'E'}
         add CSLTH to CSADDR {starting address for next control section}
      end {while not EOF}
end {Pass 1}
• During Pass 1 the loader uses only the Header and Define record types in the control sections.
• The beginning address (PROGADDR) becomes the starting address (CSADDR) for the first control section in the input sequence.
• The control section name from the Header record and all external symbols from the Define records are entered into ESTAB.
• When the End record is read, the control section length CSLTH is added to CSADDR.
• This calculation gives the starting address for the next control section.
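ESTAB itself can be sketched as a small C table. Real loaders organize ESTAB with hashed addressing for speed; a linear table is used here only to keep the Pass 1 logic visible. The names estab_entry, estab_search and estab_enter are invented for the example.

#include <stdio.h>
#include <string.h>

struct estab_entry { char name[8]; long address; };
static struct estab_entry estab[100];
static int estab_count = 0;

/* Search ESTAB for a name; NULL means not found. */
struct estab_entry *estab_search(const char *name) {
    int i;
    for (i = 0; i < estab_count; i++)
        if (strcmp(estab[i].name, name) == 0) return &estab[i];
    return NULL;
}

/* Enter a control section name or external symbol with its address
   (CSADDR + indicated address); duplicates set the error flag, exactly
   as in the Pass 1 algorithm above. */
void estab_enter(const char *name, long csaddr, long indicated) {
    if (estab_search(name) != NULL) {
        fprintf(stderr, "error: duplicate external symbol %s\n", name);
        return;
    }
    strncpy(estab[estab_count].name, name, 7);
    estab[estab_count].name[7] = '\0';
    estab[estab_count].address = csaddr + indicated;
    estab_count++;
}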
PASS 2 ALGORITHM:
begin
   set CSADDR to PROGADDR
   set EXECADDR to PROGADDR
   while not end of input do
      begin
         read next input record {Header record}
         set CSLTH to control section length
         while record type != 'E' do
            begin
               read next input record
               if record type = 'T' then
                  begin
                     {if object code is in character form, convert it
                      into internal representation}
                     move object code from record to location
                        (CSADDR + specified address)
                  end {if 'T'}
               else if record type = 'M' then
                  begin
                     search ESTAB for modifying symbol name
                     if found then
                        add or subtract symbol value at location
                           (CSADDR + specified address)
                     else
                        set error flag {undefined external symbol}
                  end {if 'M'}
            end {while != 'E'}
         if an address is specified {in End record} then
            set EXECADDR to (CSADDR + specified address)
         add CSLTH to CSADDR
      end {while not EOF}
   jump to location given by EXECADDR {to start execution of loaded program}
end {Pass 2}
• Pass 2 of the loader performs the actual loading, relocation and linking of the program.
• As each text record is read, the object code is moved to the specified address.
• When a modification record is encountered, the symbol whose value is to be used for modification is looked up in ESTAB.
• This value is then added to or subtracted from the indicated location in memory.
• The last step performed by the loader is transferring control to the loaded program to begin execution.
MACHINE INDEPENDENT LOADER FEATURES
AUTOMATIC LIBRARY SEARCH
• An automatic library search process is used for handling external references.
• This feature allows a programmer to use standard subroutines without explicitly
including them in the program to be loaded.
• The subroutines are automatically retrieved from a library as they are needed during
linking.
Automatic library call
• The subroutines called by the loaded program are automatically taken from the library
and linked with the main program and loaded.
• The programmer does not need to take any action but he has to mention the subroutine
names as external references in the source program.
• This feature is referred to as automatic library call.
Handling external references
• Linking loaders that support automatic library search must keep track of external symbols that are referred to but not yet defined.
• Loaders can handle external references in the following ways:
1. Enter the symbols from each Refer record into the symbol table (ESTAB), unless these symbols are already present.
2. Undefined symbols are marked, and when a definition is encountered, these entries are filled in.
3. At the end of Pass 1, the symbols in ESTAB that remain undefined represent unresolved external references.
4. The loader searches the libraries that contain the definitions of these unresolved symbols and processes the subroutines found by this search.
5. The subroutines fetched from a library in this way may themselves contain external references, so it is necessary to repeat the library search process until all references are resolved.
6. After the library search, any remaining unresolved external references are treated as errors.
Search of libraries using file structure
• The libraries to be searched by the loader mainly contain assembled or compiled versions of the subroutines (i.e., object programs).
1. A special file structure is used for the library search.
2. This structure contains a directory.
3. The directory contains the name of each subroutine and a pointer to its address within the file.
4. Searching a library with this file structure involves a search of the directory, followed by reading the object program of the subroutine found.
LOADER OPTIONS
• Many loaders have a special command language that is used to specify options.
• The following are some of the loader options that can be selected at the time of loading &
linking.
SPECIFYING ALTERNATIVE SOURCES OF INPUTS
• This loader option allows the selection of alternative sources of input.
Ex: INCLUDE program-name (library-name)
• The above command directs the loader to read the given object program from a library and treat it as if it were part of the primary loader input.
CHANGING OR DELETING EXTERNAL REFERENCES
• Loader options can also be used to change external references within the programs being loaded or linked.
Ex: CHANGE name1, name2.
• The above command will change the external symbol name1 to name2.
• Some options allow the user to delete external symbols or entire control sections.
Ex: DELETE CS-name.
• The above command will delete the control section cs-name from the loaded program.
Ex: INCLUDE READ (UTLIB)
INCLUDE WRITE (UTLIB)
DELETE RDREC, WRREC
CHANGE RDREC, READ
CHANGE WRREC, WRITE
1. The above commands direct the loader to include the control sections READ and WRITE from the library UTLIB,
2. and to delete the control sections RDREC and WRREC from the load.
3. The first CHANGE command causes all external references to the symbol RDREC to be changed to refer to the symbol READ.
4. Similarly, references to WRREC will be changed to WRITE.
CONTROLLING AUTOMATIC PROCESSING OF EXTERNAL REFERENCES
• A common loader option controls the automatic inclusion of library subroutines to resolve external references.
• Most loaders allow the user to specify alternative libraries to be searched.
Ex: LIBRARY MYLIB
• Such user-defined libraries are normally searched before the standard system libraries.
• The user can also direct that certain external references be left unresolved:
Ex: NOCALL STDDEV, CORREL
• The above command instructs the loader that these external references are to remain unresolved.
LOADER DESIGN OPTIONS
• Two alternative design options for linking loaders are:
1. Linkage editors
2. Dynamic linking
LINKAGE EDITORS
• A linkage editor is found on many computing systems, instead of or in addition to the linking loader.
• It performs linking prior to load time.
• A linkage editor produces a linked version of the program which is written to a file or
library instead of being immediately loaded into memory.
• A linked program is also called a load module or an executable image.
• When the user is ready to run the linked program, a simple relocating loader can be used,
to load the program into memory.
DIFFERENCE BETWEEN A LINKAGE EDITOR & A LINKING LOADER
• A linking loader performs all linking and relocation operations and loads the linked program directly into memory for immediate execution; a linkage editor produces a linked version of the program that is written to a file or library for later execution.
LOADED PROGRAM
• The linked program produced by the linkage editor is processed by a relocating loader.
• All external references are resolved and relocation is indicated by some methods such as
modification records or bit mask.
• Even though all linking has been performed, information about external references is
often retained in the linked program.
• This allows relinking of the program to replace control sections, modify external
references etc.,
FUNCTIONS OF LINKAGE EDITORS
• A linkage editor can perform many useful functions through editor commands. For example:
• Assume that a program (PLANNER) uses many subroutines,
• and that one of the subroutines (PROJECT) has to be changed to a new version.
• After the new version of PROJECT is assembled or compiled, the linkage editor is used to replace this subroutine in the program (PLANNER).
• The following linkage editor commands are used to perform the above work.
INCLUDE PLANNER (PROGLIB)
DELETE PROJECT {Delete from existing planner}
INCLUDE PROJECT (NEWLIB) {Include new version}
REPLACE PLANNER (PROGLIB)
• Linkage editor is also used to build packages of subroutines or other control sections that
are generally used.
• It combines the related subroutines into a package using editor commands.
Ex:
INCLUDE BLOCK (FTNLIB)
INCLUDE DEBLOCK (FTNLIB)
INCLUDE ENCODE (FTNLIB)
INCLUDE DECODE (FTNLIB)
.
.
SAVE FTN10 (SUBLIB)
• In the above command sequence, all the subroutines are linked into a module named
FTN10.
• This module is available in the directory SUBLIB.
• A search of SUBLIB will then find FTN10 instead of the separate routines.
• This method saves search time.
DYNAMIC LINKING
• Sometimes the loading and linking of a subroutine is deferred until the program first calls it.
• Here the linking function is postponed until execution time.
• This scheme is called dynamic linking, dynamic loading or load on call.
• Loading & linking of a subroutine using dynamic linking
• The following figure shows a method in which subroutines that are dynamically loaded
must be called through an operating system service request.
• In the above figure, the user program makes a load-and-call service request to the
operating system.
• Then the OS checks its internal tables to determine whether the subroutine is loaded or
not.
• If the subroutine is not loaded, then it is loaded from the system libraries as shown in the
below figure.
• Control is then passed from the dynamic loader to the subroutine being called.
• When the called subroutine completes its processing, it returns to its caller.
• The OS then returns control to the user program that made the request.
• After the subroutine is completed, the memory that was allocated for it is retained for later use, as long as the storage space is not needed for other processing.
• If the subroutine is still in memory, a second call request does not require another load operation.
• Control is simply passed from the dynamic loader to the called routine.
• How is the loader itself loaded into memory? When the computer is started, with no program in memory, a program present in ROM (at an absolute address) can be executed; this may be the OS itself or a bootstrap loader, which in turn loads the OS and prepares it for execution.
• The first record (or records) is generally referred to as a bootstrap loader; it causes the OS to be loaded.
• Such a loader is added to the beginning of all object programs that are to be loaded into an empty and idle system.
• On some computers, an absolute program is permanently resident in a read-only memory (ROM).
• When some hardware signal occurs, the machine begins to execute this ROM program.
• On some computers the program is executed directly in ROM; on others it is copied from ROM to main memory and executed there.
• Some machines do not have such read-only storage.
• If the loading process requires more instructions than can be read in a single record, the first record causes the reading of others, and these in turn can cause the reading of still more records; hence the term bootstrap.
UNIT II
INTRODUCTION TO COMPILERS
TRANSLATOR
• Translator is a program that takes as input a program in one programming language &
produces as output a program in another language
Need for Translator
• Machine language is too complex to learn and program in directly, so a translator becomes vital.
COMPILER
• It is a translator, which takes the program in high level language wholly & converts it to
machine language.
Steps involved in executing a program written in a high level programming language
1. Source program is compiled
2. Translated into object program
3. Resulting object program is loaded into memory & executed.
INTERPRETER
• It is a translator program, which converts the high level language program to its machine
equivalent line by line.
• The execution of the interpreted program is very slow.
DIFFERENT PHASES OF A COMPILER
LEXICAL ANALYSIS
• In lexical analysis the lexical analyzer, or scanner, reads the source program and separates it into tokens.
• It is the first phase, and the module that performs it is called the scanner.
• The usual tokens are:
1. Keywords: such as DO or IF.
2. Identifiers: such as x or num.
3. Operator symbols: such as <, =, or +.
4. Punctuation symbols: such as parentheses or commas.
• The output of the lexical analysis is a stream of tokens, which is passed to the next phase, the syntax analyzer or parser.
• The parser asks the lexical analyzer for the next token whenever it needs one.
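The scanner's job can be illustrated with a minimal C sketch. This is not any particular compiler's scanner; the token classes follow the list above, and a real scanner would also check identifiers against a reserved-word list to recognize keywords.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Read a source string and print one token per line. */
void scan(const char *src) {
    const char *p = src;
    while (*p) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        if (isalpha((unsigned char)*p)) {           /* identifier (or keyword) */
            const char *start = p;
            while (isalnum((unsigned char)*p)) p++;
            printf("IDENT  %.*s\n", (int)(p - start), start);
        } else if (isdigit((unsigned char)*p)) {    /* numeric constant */
            const char *start = p;
            while (isdigit((unsigned char)*p)) p++;
            printf("NUMBER %.*s\n", (int)(p - start), start);
        } else if (strchr("+-*/<>=", *p)) {         /* operator symbol */
            printf("OP     %c\n", *p++);
        } else {                                    /* punctuation */
            printf("PUNCT  %c\n", *p++);
        }
    }
}

For instance, scan("A := B + C") emits IDENT A, PUNCT :, OP =, IDENT B, OP +, IDENT C, which is the token stream the parser would consume.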
Syntax analysis
• In syntax analysis, the syntax analyzer or parser groups tokens into syntactic structures.
• For example, the three tokens A, + and B can be grouped into the expression A+B, a syntactic structure.
• Expressions might further be combined to form statements.
• If a token is an identifier, its type is entered into the symbol table by the syntax analyzer.
• The parser checks that the tokens occur in patterns that are permitted by the specification of the source language.
• On seeing invalid syntax, the parser detects the error situation.
• For Eg: if the program has an expression
A+/B
• On seeing the “/” the syntax analyzer will detect an error situation.
Output of syntax analyzer is a parse tree
• For Ex: The expression
A:=B+C
• can be represented by the following parse tree:

                 assignment statement
                /         |          \
        identifier        :=        expression
             |                     /    |     \
             A             expression   +   expression
                                |                |
                            identifier       identifier
                                |                |
                                B                C
Intermediate code generation
• The intermediate code generator transforms the parse tree into an intermediate-language representation of the source program.
• The preceding parse tree can be converted into the three-address code that follows:
T1 = B + C
A = T1
where T1 is a temporary variable.
Code optimization
• It is designed to improve the intermediate code, so that the final object program runs
faster and takes less space.
• Output of code optimizer is another intermediate code which does the same job as the
previous intermediate code, but with much efficiency.
• Thus the code optimizer would optimize the preceding 3 address code as A=B+C
1. Optimizing compiler
• An object program that is frequently executed should be fast and small; an optimizing compiler attempts to produce a better target program than would be produced with no optimization.
2. Local optimization
• Local transformations can be applied to a program.
• Ex: the statements
if A > B goto L2
goto L3
L2:
can be replaced by
if A <= B goto L3
3. Elimination of common sub-expression
• Common sub-expression may be eliminated from the program
• For Ex: Consider the following sequence of statements:
A=B+C+D
E=B+C+F
Which can be evaluated as
T1=B+C
A=T1+D
E=T1+F
4. Loop optimization
• Loops are a prime target for speed-ups: computations that do not vary between iterations can be moved out of the loop to increase the speed of execution.
• Eg:
for(i=1; i<10; i++)
{
m=1;
.
.
}
can be replaced by
m=1;
for(i=1; i<10; i++)
{
.
.
}
Code generation
• Code generator generates the object code.
Function of code generator
1. Selects code
2. Selects registers
Main responsibility of the code generator
• The code generation phase converts the intermediate code into a sequence of target-machine instructions.
Semantic analysis
• It can be done during the syntax analysis, intermediate code generation or final code generation phase.
• It analyzes whether the statements are meaningful.
Function
• It is used to determine whether the type of an intermediate result is legal.
P-CODE COMPILERS
• A P-code compiler does not translate the source program directly into machine language; it produces an intermediate form (P-code) that is executed by an interpreter.
Advantages
1. The main advantage of this approach is portability of software. It is not necessary for the
compiler to generate different code for different computers, because the p-code object
programs can be executed on any machine that has a p-code interpreter.
2. A p-code compiler can be used without modification on a wide variety of systems if a p-
code interpreter is written for each different machine.
3. The p-code object program is often much smaller than a corresponding machine code
program would be. This is particularly useful on machines with limited memory size.
Problem
• The execution of a p-code program may be much slower than the execution of the
equivalent machine code.
• Depending upon the environment however, this may not be a problem.
Solution
1. Many p-code compilers are designed for a single user running on a microcomputer
system. In that case, speed of execution may be relatively insignificant.
2. If execution speed is important, some p-code compilers support the use of machine
language subroutines.
• By rewriting a small number of commonly used routines in machine language, it is often
possible to achieve some improvements in performance.
Compiler-compilers
• A compiler-compiler is a software tool that can be used to help with the task of compiler construction.
• Such tools are often called compiler generators or translator writing systems.
Automated compiler construction using a compiler-compiler
1. The user (i.e., the compiler writer) provides a description of the language to be translated.
2. This description may consist of a set of lexical rules for defining tokens and a grammar for the source language.
3. Some compiler-compilers use this information to generate a scanner & a parser directly.
4. In addition to the description of the source language, the user provides a set of semantic
or code-generation routines.
5. A semantic routine is called by the parser each time it recognizes the language construct described by the associated rule.
6. Some compiler-compilers can parse a larger section of the program before calling a semantic routine.
7. In that case, an internal form of the statements that have been analyzed such as a portion
of the parse tree may be passed to the semantic routine.
8. This latter approach is often used when code optimization is to be performed.
• Compiler-compilers frequently provide special languages, notations, data structure and
other similar facilities that can be used in the writing of semantic routines.
Advantage
1. The main advantage of using a compiler-compiler is the ease of compiler construction and testing.
2. The object code generated by the compiler may actually be better when a compiler-
compiler is used.
• Because of the automatic construction of scanners and parsers and the special tools
provided for writing semantic routines, the compiler writer is freed from many of the
mechanical details of compiler construction.
• The writer can therefore focus more attention on good code generation & optimization.
MACHINE INDEPENDENT COMPILER FEATURES
The four machine independent compiler features are:
1. Structured variables
2. Storage allocation
3. Block-structured languages
4. Machine independent code optimization
Structured variables
• The compilation of programs that use structured variables such as arrays, records, strings and sets.
• We are primarily concerned with the allocation of storage for such variables & with the
generation of code to reference them.
Storage allocation for variables
Single dimensional array declaration
Ex: A: ARRAY[1..10] OF INTEGER // Pascal array declaration
• If each INTEGER variable occupies one word of memory then we must clearly allocate
ten words to store the above array.
• If an array is declared as,
B: ARRAY [l..u] OF INTEGER
• Then we must allocate u-l+1 words of storage for the array
Multi dimensional array declaration
• Allocation for a multi-dimensional array is not much more difficult
Ex: B: ARRAY [0..3,1..6] OF INTEGER //4 rows , 6 columns
• Here the first subscript can take four different values (0-3) and the second subscript can
take six different values (1-6).
• We need to allocate a total of 4*6 = 24 words to store the array.
• If the declaration is,
ARRAY[l1..u1, l2..u2] OF INTEGER
• Then the number of words to be allocated is given by,
(u1-l1+1)*(u2-l2+1)
• For an array with n dimensions, the number of words required is product of n such terms.
Methods for storing arrays
• Two methods for storing arrays are:
1. Row-major order: all array elements that have the same value of the first subscript are stored in contiguous locations.
2. Column-major order: all array elements that have the same value of the second subscript are stored in contiguous locations.
Storage of B: ARRAY[0..3, 1..6] in row-major order:
Row 0: (0,1) (0,2) (0,3) (0,4) (0,5) (0,6)
Row 1: (1,1) (1,2) (1,3) (1,4) (1,5) (1,6)
Row 2: (2,1) (2,2) (2,3) (2,4) (2,5) (2,6)
Row 3: (3,1) (3,2) (3,3) (3,4) (3,5) (3,6)
In row-major order, the rightmost subscript varies most rapidly; in column-major order, the leftmost subscript varies most rapidly.
Referring array element
• To refer to an array element, we must calculate the address of the referenced element
relative to the base address of the array.
Ex: One-dimensional array
A: ARRAY [1..10] OF INTEGER
1. Suppose a statement refers to array element A[6]
2. There are five array elements preceding A[6]
3. On a SIC machine, each such element would occupy 3 bytes
4. Thus the address of A[6] relative to the starting address of the array is given by 5*3=15
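The same relative-address arithmetic, written as C functions (a small sketch; the function names are invented):

/* Relative address of A[s] for A: ARRAY [l..u] OF INTEGER,
   each element occupying w bytes: w * (s - l). */
long rel_addr_1d(long s, long l, long w) {
    return w * (s - l);
}

/* Row-major relative address of B[s1,s2] for
   B: ARRAY [l1..u1, l2..u2], each element occupying w bytes. */
long rel_addr_2d(long s1, long l1, long s2, long l2, long u2, long w) {
    long row_len = u2 - l2 + 1;          /* elements in one row */
    return w * ((s1 - l1) * row_len + (s2 - l2));
}

Here rel_addr_1d(6, 1, 3) returns 15, matching the A[6] example above.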
Code generation for array references
1. If an array reference involves only constant subscripts Ex: A[6], the relative address
calculation can be performed during compilation
2. If the subscripts involve variables Ex: A[i], however the compiler must generate object
code to perform this calculation during execution
Ex: A: ARRAY [l..u] OF INTEGER // array declaration
1. Suppose each array element occupy w bytes of storage
2. If the value of the subscript is s, then the relative address of the referenced array element A[s] is given by:
w * (s - l)
3. The generation of code to perform such a calculation is illustrated in following figure
Code generation for Array references
A: ARRAY [1..10] OF INTEGER
.
.
A[J] := 5
1) -, J, #1, i1
2) *, i1, #3, i2
3) :=, #5, , A[i2]
4. The notation A[i2] in quadruple 3 specifies that the generated machine code should refer
to A using indexed addressing, after having placed the value of i2 in the index register
Storage allocation
• There are two types of storage allocation
1. Static allocation
2. Dynamic allocation
Static allocation
• Static allocation of memory is carried out at compile time.
• It is often used for languages that do not allow the recursive use of procedures or subroutines and do not provide for the dynamic allocation of storage during execution.
Problem
• If procedures may be called recursively, static allocation cannot be used.
Ex:
1. In the following figure, the program MAIN has been called by the OS or the loader (invocation 1).
2. The first action taken by MAIN is to store the return address from register at a fixed
location RETADR within MAIN
1. In the above figure, MAIN has called the procedure SUB(invocation 2)
2. The return address for this call has been stored at a fixed location within SUB.
Automatic storage allocation
• When procedures may be called recursively, storage is allocated dynamically: each procedure invocation creates an activation record on a stack.
1. In the above figure, procedure MAIN has been called, and its activation record appears on the stack.
2. The base register B has been set to indicate the starting address of this current activation
record.
3. The first word in an activation record contains a pointer PREV, which points to the previous record on the stack.
4. Here this record is the first, so the pointer value is null.
5. The second word of the activation record contains a pointer NEXT, which gives the starting address for the next activation record to be created.
6. The third word contains the return address for this invocation of the procedure, and the remaining words contain the values of variables used by the programmer.
Invocation of a procedure using automatic storage allocation
What happens when procedure returns to its caller?
1. When a procedure returns to its caller, the current activation record ( which corresponds
to the most recent invocation) is deleted.
2. The pointer PREV in the deleted record is used to reestablish the previous activation
record as the current one and execution continues.
Ex: SUB returns from a recursive call
1. The above figure shows the stack as it appears after SUB returns from the recursive call.
2. Register B has been reset to point to the activation record for the previous invocation of SUB.
Rules for automatic storage allocation
1. When automatic allocation is used, the compiler must generate code for references to
variables using some sort of relative addressing.
2. The compiler must also generate additional code to manage the activation records
themselves.
• At the beginning of each procedure there must be code to create a new activation record
linking it to the previous one and setting the appropriate pointer. This code is often called
a prologue for the procedure.
• At the end of the procedure, there must be code to delete the current activation record,
resetting pointers as needed. This code is often called an epilogue.
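The prologue/epilogue bookkeeping can be simulated with a small C sketch. The activation-record layout (PREV, NEXT, return address, variables) follows the description above; everything else (the fixed-size stack, the names prologue and epilogue) is invented for illustration.

struct actrec {
    struct actrec *prev;      /* pointer to the previous record on the stack */
    struct actrec *next;      /* starting address for the next record        */
    long retaddr;             /* return address for this invocation          */
    long vars[4];             /* values of the programmer's variables        */
};

static struct actrec stack[16];
static struct actrec *base = 0;    /* plays the role of register B */
static int top = 0;

/* Prologue: create a new activation record, link it to the previous one. */
void prologue(long retaddr) {
    struct actrec *rec = &stack[top++];
    rec->prev = base;
    rec->retaddr = retaddr;
    if (base) base->next = rec;
    base = rec;                     /* B now indicates the current record */
}

/* Epilogue: delete the current record, reestablish the previous one. */
long epilogue(void) {
    long ra = base->retaddr;
    base = base->prev;
    top--;
    return ra;                      /* caller resumes at this address */
}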
Other types of dynamic storage allocation
1. In FORTRAN 90, the statement
ALLOCATE (MATRIX (ROWS, COLUMNS))
Allocates storage for a dynamic array, MATRIX with the specified dimensions. The
statement,
DEALLOCATE (MATRIX)
Releases the storage assigned to matrix by previous ALLOCATE
2. In PASCAL, the statement
NEW (P)
Allocates storage for a variable and sets the pointer P to indicate the variables just
created. The statement
DISPOSE (P)
Releases the storage that was previously assigned to the variable pointed to by P.
3. In C, the function call
malloc(size)
allocates a storage block of the specified size and returns a pointer to it. The call
free(p)
frees the storage indicated by the pointer p, which was returned by a previous malloc.
Block-structured languages
• In some languages, a program can be divided into units called blocks.
• A block is a portion of a program that has the ability to declare its own identifiers.
1. In the above figure, shows the outline of a block-structured program in a PASCAL like
language.
2. Each procedure form a block.
3. In block structured program, blocks may be nested within other blocks. In the above
example, procedures B & D are nested within procedure A, & procedure C is nested
within procedure B.
4. Each block may contain a declaration of variables.
5. An inner block may also refer to variables that are defined in any outer block, provided the same names are not redefined in the inner block.
Compiling & execution of block-structured programs
1. In compiling a program written in a block-structured language, it is convenient to number the blocks as shown in the above figure.
2. The compiler constructs a table that describes the block structure, as shown below.
3. The table contains the details of block name, block number, block level and surrounding
block.
4. The block-level entry gives the nesting depth for each block.
5. The outermost block has a level number of 1, and each other block has a level number
that is one greater than that of the surrounding block.
Searching of identifiers in symbol table
• Same name can be declared more than once in a program in different blocks.
• So there can be several symbol-table entries for the same name.
• The entries that represent declarations of the same name by different blocks can be
linked together in the symbol table with a chain of pointers.
• When a reference to an identifier appears in the source program, the compiler must first check the symbol table for a definition of that identifier by the current block.
• If no such definition is found, the compiler looks for a definition by the block that surrounds the current block, then by the block that surrounds that, and so on.
• If the outermost block is reached without finding a definition of the identifier, then the
reference is an error.
• The search process just described can easily be implemented within a symbol table that
uses hashed addressing.
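The search just described can be sketched in C. This assumes a block table in which surrounding[b] gives the number of the block enclosing block b (with the convention that the outermost block's surrounding entry is 0), and symbol entries for the same name chained together as described above; the names are invented for the example.

#include <string.h>

struct sym {
    char name[16];
    int block;                     /* block that declared this name    */
    struct sym *next_same_name;    /* chain of definitions of the name */
};

int surrounding[16];               /* filled from the block-structure table */

/* Find the definition of `name` visible from `block`: try the current
   block, then each enclosing block in turn.  NULL means the reference
   is an error (no definition up to and including the outermost block). */
struct sym *lookup(struct sym *chain, const char *name, int block) {
    int b;
    struct sym *s;
    for (b = block; b != 0; b = surrounding[b])
        for (s = chain; s != NULL; s = s->next_same_name)
            if (s->block == b && strcmp(s->name, name) == 0)
                return s;
    return NULL;
}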
Access to variables in surrounding block
• One common method for providing access to variables in surrounding block uses a data
structure called a display.
• The display contains pointers to the most recent activation records for the current block
and for all blocks that surround the current one in the source program.
• When a block refers to a variable that is declared in some surrounding block, the
generated object code uses the display to find the activation record that contains this
variable.
• Ex: The use of a display is illustrated in the following figures, for the Pascal procedures discussed previously.
1. Assume that procedure A has been invoked by the system, A has then called
procedure B, and B has called procedure C. The resulting situation is shown in
following figure.
2. Let us assume procedure C calls itself recursively.
• Another activation record for C is created on the stack as a result of this call.
• The display pointer for C is changed accordingly.
• Variables that correspond to the previous invocation of C are no longer accessible through the display.
3. Suppose now that procedure C calls D. The resulting stack & display are shown
below
• An activation record for D has been created the usual way & added to the
stack.
• Note, however, that the display now contains only two pointers : one each to
the activation records for D & A.
• This is because procedure D cannot refer to variables in B (or) C.
• Procedure D can refer only to the variables that are declared by D (or) by
some block that contains D in the source program (in this case, procedure A)
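A display can be sketched in C as an array of frame pointers, one per nesting level. This is a sketch only: it assumes each variable compiles down to a (nesting level, offset) pair, and that display entries are saved and restored as procedures are entered and left, as the figures above show.

#define MAXLEVEL 8

long *display[MAXLEVEL];   /* display[L] -> most recent activation
                              record of the block at nesting level L */

/* A reference to a variable declared at level L with offset `off`
   becomes: follow display[L], then index by the fixed offset. */
long fetch_var(int level, int off)           { return display[level][off]; }
void store_var(int level, int off, long val) { display[level][off] = val; }

When C calls D in the example, display entries for levels deeper than D's are dropped, which is exactly why D can no longer reach the variables of B or C.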
Machine independent code optimization
Elimination of common subexpressions using the intermediate form of the program:
[Figure: expression trees for the statements before optimization, showing the common subexpressions S1 and S4]
• S1 & S4 are common subexpressions. This can be eliminated as shown below:
1. S1 := 4 * I
2. S2 := I / J
3. S3 := S1 + S2
4. S5 := S4 + B
[Figure: expression tree after elimination of the common subexpression]
Loop unrolling
• Loop unrolling reduces the number of loop-condition tests carried out when the number of iterations is constant.
i=1;
while (i<=100)
{
x[i]=0;
i++;
}
Here the test “i<=100” is performed 100 times.
• This sequence can be replaced by the following set of statements
i=1;
while (i<=100)
{
x[i]=0;
i++;
x[i]=0;
i++;
}
• Duplicating the loop body reduces the number of tests by up to 50% (the test is now performed 50 times).
Loop jamming
• This is a technique of merging the bodies of two loops if they have the same number of
iterations.
for(i=0;i<=10;i++)
x[i]=0;
for(i=0;i<=10;i++)
y[i]=1;
• The bodies of two “for” loops with the variable “i” ranging over the same values can be concatenated.
Result will be
for(i=0;i<=10;i++)
{
x[i]=0;
y[i]=1;
}
Advantages gained by code optimization
1. Codes can be made to run faster.
2. Codes may be made to take less space.
3. Execution efficiency of the object code is achieved.
• Example 2: The statement
VARIANCE := SUMS DIV 100 - MEAN * MEAN
could be represented with quadruples
DIV, SUMS, #100, i1
*, MEAN, MEAN, i2
-, i1, i2, i3
:=, i3, , VARIANCE
Code optimization on quadruples
• Many types of analysis and manipulation can be performed on the quadruples for code-
optimization purpose.
• The quadruples can be rearranged to eliminate redundant load and store operations
• And the intermediate results ij can be assigned to registers or to temporary variables to
make their use as efficient as possible.
• After optimization has been performed the modified quadruples are translated into
machine code.
Advantage
• The quadruples appear in the order in which the corresponding object code instructions are to be executed.
• This greatly simplifies the task of analyzing the code for purposes of optimization.
• It also means that the translation into machine instructions will be relatively easy.
Solution
1. One way to deal with this problem is to divide the program into basic blocks.
A basic block is a sequence of quadruples with one entry point, which is at the
beginning of the block, one exit point, which is at the end of the block, and no jumps
within the block.
When control passes from one basic block to another, all values currently held in registers are saved in temporary variables.
An arrow from block X to block Y indicates that control can pass directly from the last quadruple of X to the first quadruple of Y. This kind of representation is called a flow graph.
The value of the intermediate result i1, is calculated first and stored in temporary
variable t1.
Then the value of i2 is calculated.
The third quadruple in this series calls for subtracting the value of i2 from i1.
Since i2 has just been computed, its value is available in register A.
It is necessary to store the value of i2 in another temporary variable t2, and then load
the value of i1 from t1 into register A before performing the subtraction.
With a little analysis, an optimizing compiler could recognize this situation and
rearrange the quadruples so the second operand of the subtraction is computed first.
The first two quadruple in the sequence have been interchanged.
The resulting machine code requires two fewer instructions and uses only one
temporary variable instead of two.
OPERATING SYSTEM
UNIT III
Introduction
Operating System - It is software that controls the hardware. It comprises the system software, the fundamental files a computer needs to boot up and function.
An Operating System (OS) is an interface between computer user and computer hardware. An
operating system is software which performs all the basic tasks like file management, memory
management, process management, handling input and output, and controlling peripheral devices.
Functions of Operating System
Implementing user interface
Sharing hardware among users
Allowing users to share data among themselves
Preventing users from interfering with one another's data
Scheduling resources among users
Facilitating input and output
Recovering from errors
Accounting for resource usage
Facilitating parallel operations
Handling network communications
Basically, Operating Systems fall into two categories:
Text oriented Operating Systems
o IBM PC DOS
o Microsoft MS DOS
o UNIX
Graphical oriented Operating System
o Windows
o LINUX
o MAC OS
Definition of Process
The word process was first used by the designers of the Multics system in the 1960s.
Since then, process, used somewhat interchangeably with task, has been given many definitions:
o a program in execution
o an asynchronous activity (not continuous)
o the animated spirit of a procedure
o the locus of control of a procedure in execution
o the dispatchable unit
Process States
A process goes through a series of discrete process states
Various events can cause a process to change states
A process is said to be
o RUNNING - if it currently has the CPU
o READY - if it could use a CPU if one were available
o BLOCKED - if it is waiting for some events to happen (such as input/output
completion event) before it can proceed
Only one process may be running at a time, but several processes may be ready, and several
may be blocked.
Process State Transitions
The operating system can be loaded in low memory or in high memory; the choice depends on the location of the interrupt service routines fixed by the hardware design.
The Working Principle is,
All the ready processes are held on the disk in order of priority.
At any time only one process runs in the main memory.
When this process is blocked, it is swapped out from the main memory to disk.
The next highest priority process is swapped in the main memory and starts its execution.
The problem of relocation and translation involves only the starting physical address of the program.
Overlay Structure
Programs are limited in size to the amount of main storage, but it is possible to run programs larger than main storage by using overlays.
A manual overlay structure requires careful and time-consuming planning.
A program with a sophisticated overlay structure can be difficult to modify.
Multiprogramming
When an Input/Output request is issued, the job cannot continue until the requested data is either sent or received.
Input/Output speeds are slow compared with CPU speed.
This wastage of CPU time is overcome by multiprogramming.
Multiprogramming requires more storage than a single-user system.
In multiprogramming, several users simultaneously access the system resources.
This increases CPU utilization and system throughput.
Throughput - the number of processes completed per unit time.
Fixed Partition Multiprogramming
Main storage is divided into a number of fixed size partitions.
Each partition holds a single job; when one job pauses for Input/Output operations, another job's instructions run on the CPU.
Translation and Loading
Jobs were translated with absolute assemblers and compilers to run only in a specific partition.
If a job was ready to run and its partition was occupied, then the job had to wait, even if other
partitions were available.
This resulted in waste of the storage resource.
Jobs waiting for partition 3 could have fit into the other partitions,
but with absolute translation and loading these jobs can run only in partition 3.
The other two partitions therefore remain empty.
Relocatable Translation and Loading
Relocating compilers, assemblers and loaders are used to produce relocatable programs that can run in any available partition large enough to hold them.
Protection
Protection is implemented with boundary registers: two registers indicate the low and high boundaries of a user's partition.
While the user in partition 2 is running, all storage addresses developed by the running program are checked to make sure they fall between b and c, unless the program issues a supervisor call.
A supervisor call instruction allows the user to cross the boundary of the Operating System
and request its services.
Fragmentation
It may occur in 2 ways
User jobs do not completely fill their designated partitions.
A partition remains unused, if it is too small to hold a waiting job.
Variable Partition Multi Programming
There are no fixed boundaries; jobs are given as much main storage as they require.
There is no wastage of memory inside a partition, because each partition is allocated according to the job's size.
Initial Partition Allocation
Storage Holes
When a job completes its processing and releases its storage, the freed area in main storage is called a hole.
A hole can be used for another job; if the new job assigned to the hole is smaller than the hole, a small amount of memory is again wasted, as in fixed partition multiprogramming.
Coalescing Holes
The process of merging adjacent holes to form a single larger hole is called coalescing holes.
In this way we can reclaim the largest possible contiguous blocks of storage.
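Coalescing can be sketched in C over a free-hole list kept sorted by starting address (a sketch; the hole structure and the name coalesce are invented):

struct hole { long start, size; };

/* Merge every hole that is immediately adjacent to its predecessor.
   Returns the new number of holes; the array must be sorted by start. */
int coalesce(struct hole h[], int n) {
    int i, out = 0;
    for (i = 0; i < n; i++) {
        if (out > 0 && h[out-1].start + h[out-1].size == h[i].start)
            h[out-1].size += h[i].size;   /* adjacent: grow previous hole  */
        else
            h[out++] = h[i];              /* gap in between: keep separate */
    }
    return out;
}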
Storage Compaction
Compaction moves all occupied areas of storage to one end or the other of main storage.
This leaves a single large storage hole instead of numerous small holes.
This is also referred to as burping the storage, or garbage collection.
Drawbacks
Compaction consumes system resources.
The system must stop while it performs compaction, which results in erratic response times.
Moving jobs requires changing the address information within them.
UNIT IV
Virtual Storage Management
Physical memory is limited, so there can be a problem executing an entire process in main memory.
This is overcome by a virtual memory system.
In this scheme we keep only a part of the process image in memory and the other part on disk, and the process can still execute.
Virtual Storage Management Strategies
1. Fetch Strategies
Deciding when to bring a page or segment from secondary storage into main storage.
o Demand Fetch - a page or segment is fetched only when the running program references it.
o Anticipatory Fetch - the system predicts which pages or segments will be needed next and fetches them in advance, to improve system performance.
2. Placement Strategies
Deciding where in main memory to place an incoming page, using allocation techniques such as first fit, best fit and worst fit.
3. Replacement Strategies
Deciding which page or segment to displace to make room for an incoming page.
Page Replacement Strategies
If the page to be replaced must be chosen from the faulting process itself, this is called a 'Local Replacement Policy'.
If it can be chosen from any process, it is called a 'Global Replacement Policy'.
The Page Replacement Strategies are
The Principle of Optimality (OPT)
First In First Out (FIFO)
Second Chance (SC)
Least Recently Used (LRU)
LRU Approximation
o Not Used Recently (NUR)
o Least Frequently Used (LFU)
Random Page Replacement (RAND)
Clock
Working Set
Page Fault Frequency Page Replacement (PFF)
1. The Principle of Optimality (OPT)
OPT removes the page that will not be used again for the longest time, i.e. the page whose next use lies furthest in the future.
Assume that there are only three page frames (0, 1 and 2) and that the reference string is:
Frame / Page    8     1     2     3     1     4     1     5
Page Frame 0    8     8     8     3     3     3     3     3
Page Frame 1          1     1     1     1     1     1     5
Page Frame 2                2     2     2     4     4     4
Hit or Miss     Miss  Miss  Miss  Miss  Hit   Miss  Hit   Miss
Each column shows the contents of the three page frames after the corresponding page reference.
The fourth page reference is for page 3.
This page is not in memory, and no page frame is free.
So the operating system has to choose which frame (0, 1 or 2) to empty.
According to the reference string, pages 8 and 2 are not used again, while page 1 will be used again.
So the operating system replaces page 8 with page 3.
OPT is also known as Belady's optimal algorithm for page replacement.
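Below is a minimal Python sketch of OPT under the three-frame assumption above. Ties among pages that are never referenced again may be broken differently than in the table, but the fault count is the same.

# A sketch of OPT (Belady's optimal) page replacement, assuming 3 frames
# and the reference string used in the example above.
def opt(refs, frames=3):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                                  # hit
        faults += 1                                   # miss
        if len(memory) < frames:
            memory.append(page)
            continue
        future = refs[i + 1:]
        # Victim: the resident page whose next use lies furthest ahead;
        # pages never used again count as infinitely far away.
        victim = max(memory, key=lambda p: future.index(p)
                     if p in future else float('inf'))
        memory[memory.index(victim)] = page
    return faults

print(opt([8, 1, 2, 3, 1, 4, 1, 5]))                  # 6 faults, as in the table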
2. First In First Out (FIFO)
Frame / Page    8     1     2     3     1     4     1     5
Page Frame 0    8     8     8     3     3     3     3     5
Page Frame 1          1     1     1     1     4     4     4
Page Frame 2                2     2     2     2     1     1
Hit or Miss     Miss  Miss  Miss  Miss  Hit   Miss  Miss  Miss
Consider the page reference string: the fourth reference, page 3, replaces page 8 because page 8 came in first; the next reference (page 1) is then a hit and causes no page fault.
The sixth reference, page 4, replaces page 1 because page 1 entered memory earliest of the resident pages.
This algorithm can be implemented with a pointer chain (a FIFO queue), where the head of the chain is the page that came in first and the tail is the page that arrived last.
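As a small illustration, the pointer chain can be sketched in Python with a queue whose head is the oldest page; for the reference string above it yields the 7 faults shown in the table.

from collections import deque

# A sketch of FIFO page replacement, assuming 3 frames.
def fifo(refs, frames=3):
    queue, faults = deque(), 0                        # head = oldest page
    for page in refs:
        if page in queue:
            continue                                  # hit
        faults += 1
        if len(queue) == frames:
            queue.popleft()                           # evict the oldest page
        queue.append(page)                            # newest page at the tail
    return faults

print(fifo([8, 1, 2, 3, 1, 4, 1, 5]))                 # 7 faults, as in the table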
3. Second Chance (SC)
It tries to replace the pages which are not referenced more often.
The system maintains a set of reference bits, one for each page frame.
This reference bit is initially 0, then sets it to 1 as soon as the corresponding page frame is
referenced.
Reference bit = 0 means that the page has not been referenced recently and can be replaced.
Reference bit = 1 means that the page gets another chance: its reference bit is reset to 0 and the page is treated as a new arrival at the tail of the queue.
This algorithm thus gives each referenced page one more chance in the FIFO queue before it can be replaced.
Frame / Page    8     1     2     3     1     4     1     5
Page Frame 0    8     8     8     3     3     3     3     5
Page Frame 1          1     1     1     1     1     1     1
Page Frame 2                2     2     2     4     4     4
Hit or Miss     Miss  Miss  Miss  Miss  Hit   Miss  Hit   Miss
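A Python sketch of second chance, keeping page and reference-bit pairs in a FIFO queue as described above; for the reference string it reproduces the 6 faults in the table.

from collections import deque

# A sketch of the second-chance algorithm, assuming 3 frames.
def second_chance(refs, frames=3):
    queue, faults = deque(), 0                        # entries: [page, ref_bit]
    for page in refs:
        entry = next((e for e in queue if e[0] == page), None)
        if entry:
            entry[1] = 1                              # hit: set the reference bit
            continue
        faults += 1
        if len(queue) == frames:
            while queue[0][1] == 1:                   # referenced: second chance
                old = queue.popleft()
                old[1] = 0                            # reset bit, treat as new arrival
                queue.append(old)
            queue.popleft()                           # head with bit 0 is replaced
        queue.append([page, 1])
    return faults

print(second_chance([8, 1, 2, 3, 1, 4, 1, 5]))        # 6 faults, as in the table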
5. LRU Approximation
Each page is associated with a bit called the reference bit, initially 0.
When the page is referenced, the reference bit is set to 1.
Replace a page whose reference bit is 0, if one exists; among those pages we do not know the order of use.
There are two methods in it
o Not Used Recently (NUR)
o Least Frequently Used (LFU)
6. Not Used Recently (NUR)
For each page frame two bits are maintained: when a page is referenced, the hardware sets that frame's R-bit (referenced bit) to 1, and when the page is modified it sets the M-bit (modified bit) to 1.
At each clock interval all R-bits are reset to 0, to distinguish the latest references from earlier ones.
The (R, M) pairs define four classes: (0, 0), (0, 1), (1, 0) and (1, 1).
This algorithm removes a page at random from the lowest-numbered non-empty class.
Frame / Page    8     1     2     3     1     4     1     5
Page Frame 0    8     8     8     3     3     3     3     5
Page Frame 1          1     1     1     1     1     1     1
Page Frame 2                2     2     2     4     4     4
Hit or Miss     Miss  Miss  Miss  Miss  Hit   Miss  Hit   Miss
7. Least Frequently Used (LFU)
The page replaced is the one whose usage counter is lowest. Page numbers with counter values after each reference:
8(1) -> 8(1), 1(1) -> 8(1), 1(1), 2(1) -> 1(1), 2(1), 3(1) -> 2(1), 3(1), 1(2) -> 3(1), 1(2), 4(1) -> 3(1), 4(1), 1(3) -> 4(1), 1(3), 5(1)
8. Random Page Replacement (RAND)
Frame / Page    8     1     2     3     1     4     1     5
Page Frame 0    8     8     8     8     1     1     1     1
Page Frame 1          1     1     3     3     4     4     4
Page Frame 2                2     2     2     2     2     5
Hit or Miss     Miss  Miss  Miss  Miss  Miss  Miss  Hit   Miss
9. Clock
Maintain a circular list of pages resident in memory.
Use a used (or referenced) bit to track whether a page has been accessed recently.
The bit is set whenever a page is referenced.
The clock hand sweeps over the pages looking for one with used bit = 0.
If such a page is found, it is replaced.
Otherwise the used bit is reset to 0 and the clock hand advances to the next page.
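A compact Python sketch of the clock algorithm described above; the choice of 3 frames is an assumption for illustration.

# Clock page replacement: a circular list of frames, a used bit per page,
# and a sweeping hand that gives used pages one more pass.
def clock(refs, frames=3):
    pages = [None] * frames                           # circular list of pages
    used = [0] * frames                               # used / referenced bits
    hand, faults = 0, 0
    for page in refs:
        if page in pages:
            used[pages.index(page)] = 1               # hit: mark as used
            continue
        faults += 1
        while used[hand]:                             # used bit 1: reset and move on
            used[hand] = 0
            hand = (hand + 1) % frames
        pages[hand] = page                            # used bit 0: replace this page
        used[hand] = 1
        hand = (hand + 1) % frames
    return faults

print(clock([8, 1, 2, 3, 1, 4, 1, 5]))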
10. Working Set Model
Thrashing
o A process is thrashing if it spends more time paging in and out, because of frequent page faults, than executing.
o If page faults occur every few instructions, the system is said to be thrashing.
Working Sets
o The working set is the collection of pages the process is actively referencing; a process's working set of pages must be maintained in primary storage.
Working set storage management policy
o To maintain the working sets of active programs in primary storage.
o The decision to add a new process to the active set of processes is based on whether sufficient space is available in primary storage.
Drawback
o It is not possible to know in advance how large a given process's working set will be.
One definition of the working set of pages follows.
The value t denotes the current process time, and w denotes the process's working-set window size.
The process's working set of pages W (t, w) is defined as the set of pages referenced by the process during the time interval from t - w to t.
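A small Python sketch of this definition, treating process time as the index into the reference string; the window size w is a parameter of the model.

# W(t, w): the set of distinct pages referenced in the interval (t - w, t].
def working_set(refs, t, w):
    return set(refs[max(0, t - w):t])

refs = [8, 1, 2, 3, 1, 4, 1, 5]
print(working_set(refs, t=6, w=3))                    # {1, 3, 4}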
Page Size
Smaller page sizes lead to larger page tables; the waste of storage due to excessively large tables is called table fragmentation.
Input/output transfers are more efficient with large pages.
With larger pages, however, portions of a program that may never be referenced are paged into primary storage.
So to minimize input/output transfers we want large page sizes, but smaller page sizes lead to less internal fragmentation.
Paging
When virtual storage is divided into pieces of the same fixed size, the scheme is known as Paging.
A page of the currently running process must be present in primary storage when it is referenced.
The block of primary storage into which a page is transferred from secondary storage is called a Page Frame.
A running process references a virtual storage address V (p, d), where p indicates the page number and d indicates the displacement within the page.
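As an illustration, the V (p, d) decomposition can be sketched in Python; the 1024-byte page size and the page-table contents here are assumptions chosen for the example, not values from the text.

PAGE_SIZE = 1024                                      # assumed page size

def split(virtual_address):
    p = virtual_address // PAGE_SIZE                  # page number
    d = virtual_address % PAGE_SIZE                   # displacement in the page
    return p, d

def physical(page_table, virtual_address):
    p, d = split(virtual_address)
    return page_table[p] * PAGE_SIZE + d              # frame base + displacement

page_table = {0: 5, 1: 2, 2: 7}                       # hypothetical page -> frame map
print(split(2060))                                    # (2, 12)
print(physical(page_table, 2060))                     # 7 * 1024 + 12 = 7180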
Segmentation
Segmentation and paging share many principles of operation, except that pages are physical divisions of memory of fixed size, whereas segments are logical (or virtual) divisions of a program of variable size.
Each program is divided into segments; the major segments are
o Code
o Data
o Stack
Each of these can be further divided into segments.
Each segment is compiled as though it began at address zero, the starting address for the segment.
The user program is compiled, and the compiler automatically constructs the segments from the input program.
A running process references a virtual storage address V (s, d), where s indicates the segment number and d indicates the displacement within the segment.
Comparison between Segmentation and Paging
Segmentation: the given memory is divided into a number of segments of different sizes.
Paging: the given memory is divided into a number of pages of equal size.
Processor Management
Job and Processor Scheduling
The problem of determining when processors should be assigned, and to which processes, is called processor scheduling.
Scheduling Levels
1. High Level
It is also called job scheduling or long-term scheduling.
It determines which jobs shall be allowed to utilize the resources of the system.
It is also referred as admission scheduling.
2. Intermediate Level
Determines which jobs shall be allowed to compete for the CPU; it responds to short-term fluctuations in load by temporarily suspending and resuming processes.
It acts as a buffer between the admission of jobs to the system and the assignment of the CPU to these jobs.
3. Low Level
It is also known as the dispatcher; it assigns the CPU to a ready process.
This dispatcher operates many times per second.
It often assigns a priority to each process.
Scheduling Objectives
Be Fair - all processes are treated the same, and none suffers indefinite postponement.
Maximize Throughput - service the largest possible number of processes per unit time.
Maximize Users - as many interactive users as possible should receive acceptable response times.
Be Predictable - a given job should run in about the same time and at about the same cost regardless of system load.
Minimize Overhead
o This improves overall system performance.
o Achieve a balance between response and utilization.
o Enforce priorities.
o Avoid indefinite postponement.
o Degrade gracefully under heavy loads.
Preemptive
Once the CPU has been given to a process, it can be taken away from that process.
This is useful in systems in which high-priority processes require rapid attention.
In interactive timesharing systems, preemption helps guarantee acceptable response times.
Preemption is not without cost, because many processes must be kept in main storage.
Non Preemptive
Once the CPU has been given to a process, it cannot be taken away from that process.
All the processes have equal priority.
Short jobs are made to wait by longer jobs.
Response times are more predictable.
Priorities
Priorities may be assigned automatically by the system, or they may be assigned externally.
Static
o They do not change; such mechanisms are easy to implement and have relatively low overhead.
o They are not responsive to changes in the environment.
Dynamic
o The mechanisms are responsive to change; an initial priority may hold only for a short duration before it is adjusted.
o These are more complex to implement and have greater overhead.
Earned (or) Bought (or) Purchase Priority
o This is provided for members of the user community who need special treatment.
o A user with a rush job may be willing to pay a premium, i.e. purchase priority, for a higher level of service.
o If there were no extra charge, then all users would request the higher level of service.
Rationally Assigned (or) Arbitrarily Assigned
o A priority is rationally assigned when it is based on measurable system considerations, and arbitrarily assigned when the mechanism merely needs to distinguish between processes.
Comparison between Preemptive and Non Preemptive Scheduling
Preemptive: a process has less chance of completion; main storage is taken up storing many jobs; more time is consumed because many jobs are involved.
Non Preemptive: a process has a high chance of completion; main storage is not wasted, because only the currently running job is stored; less time is consumed because one job is involved.
Deadline Scheduling
Certain jobs are scheduled to be completed by a specific time.
First Come First Serve (FCFS)
The simplest scheduling algorithm is First Come First Serve (FCFS). Jobs are scheduled in the order they are received.
Calculate the turnaround time, waiting time, average turnaround time, average waiting time, throughput and processor utilization for the given set of processes that arrive at the given arrival times.
Process   Arrival Time   Processing Time
P1        0              3
P2        2              3
P3        3              1
P4        5              4
P5        8              2
If the processes arrive as per the arrival times above, they are serviced strictly in order of arrival: P1, P2, P3, P4, P5.
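A minimal Python sketch that computes the turnaround and waiting times for the table above under FCFS; the averages, throughput and utilization follow from the completion times.

# FCFS: jobs are serviced strictly in order of arrival.
jobs = [("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 1), ("P4", 5, 4), ("P5", 8, 2)]

clock = 0
for name, arrival, burst in jobs:
    clock = max(clock, arrival) + burst               # CPU may idle until arrival
    turnaround = clock - arrival
    waiting = turnaround - burst
    print(name, "turnaround:", turnaround, "waiting:", waiting)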
Shortest Job First (SJF)
This algorithm assigns the CPU to the process that has the smallest next CPU processing time, when the CPU is available. In case of a tie, FCFS scheduling is used.
As an example, consider the following set of processes with the following processing times, all of which arrived at the same time.
Process   Processing Time
P1        06
P2        08
P3        07
P4        03
Using SJF scheduling, the shortest process gets the CPU first, so the execution order is P4, P1, P3, P2.
Round Robin (RR)
Each process is allocated a small time slice called a quantum. No process can run for more than one quantum while others are waiting in the ready queue. If a process needs more CPU time after exhausting one quantum, it goes to the end of the ready queue to await the next allocation.
Consider the following set of processes, with the processing times given in milliseconds and a quantum of 4 milliseconds.
Process   Processing Time
P1        24
P2        03
P3        03
Process   Processing Time   Turnaround Time   Waiting Time
P1        24                30 - 0 = 30       30 - 24 = 6
P2        03                7 - 0 = 7         7 - 3 = 4
P3        03                10 - 0 = 10       10 - 3 = 7
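A short Python sketch of round robin with a quantum of 4 that reproduces the turnaround and waiting times in the table above; all three processes are assumed to arrive at time 0.

from collections import deque

def round_robin(procs, quantum=4):
    remaining = {name: burst for name, burst in procs}
    ready = deque(name for name, _ in procs)
    clock, done = 0, {}
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])           # run at most one quantum
        clock += run
        remaining[name] -= run
        if remaining[name]:
            ready.append(name)                        # unfinished: back to the tail
        else:
            done[name] = clock                        # completion time
    for name, burst in procs:
        t = done[name]                                # turnaround (arrival = 0)
        print(name, "turnaround:", t, "waiting:", t - burst)

round_robin([("P1", 24), ("P2", 3), ("P3", 3)])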
Shortest Remaining Time (SRT)
This preemptive discipline permits a process that enters the ready list to preempt the running process if the time for the new process (or for its next burst) is less than the remaining time for the running process (or for its current burst).
Consider the set of four processes that arrive at the times described in the table:
Process   Arrival Time   Processing Time
P1        0              5
P2        1              2
P3        2              5
P4        3              3
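A small Python sketch of this preemptive discipline: at every time unit the ready process with the least remaining time runs, so a newly arrived short process can preempt the current one.

def srt(procs):                                       # procs: (name, arrival, burst)
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    clock, finished = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1                                # CPU idles until an arrival
            continue
        name = min(ready, key=lambda n: remaining[n]) # least remaining time runs
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            finished[name] = clock
            del remaining[name]
    for name, arr, burst in procs:
        t = finished[name] - arr
        print(name, "turnaround:", t, "waiting:", t - burst)

srt([("P1", 0, 5), ("P2", 1, 2), ("P3", 2, 5), ("P4", 3, 3)])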
Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled FCFS. The level of priority may be determined on the basis of resource requirements, process characteristics and run-time behaviour.
As an example, consider the following set of five processes, assumed to have arrived at the same time, with the length of processor time given in milliseconds:
Process   Processing Time   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2
Highest Response Ratio Next (HRRN)
Under this discipline the priority of each ready process is computed as (W + B) / B, where W = waiting time of the process so far and B = burst time or service time of the process.
Consider the set of 5 processes whose arrival time and burst time are given below
Process   Arrival Time   Burst Time
P0        0              3
P1        2              6
P2        4              4
P3        6              5
P4        8              2
At t = 0, only the process P0 is available in the ready queue.
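A sketch, assuming the response-ratio formula above; the waiting times used here are hypothetical values chosen only to show the computation, not a full schedule of the five processes.

def response_ratio(wait, burst):
    return (wait + burst) / burst                     # (W + B) / B

# Hypothetical ready processes at some instant, as (name, W, B):
for name, wait, burst in [("P0", 9, 3), ("P1", 7, 6), ("P2", 5, 4)]:
    print(name, round(response_ratio(wait, burst), 2))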
The portion of the spinning disk surface on which the data is to be read or written must rotate until it is immediately below (or above) the read-write head.
The time it takes for the data to rotate from its current position to a position adjacent to the read-write head is called the rotational latency time.
Actuator or Boom or Moving Arm Assembly
All read-write heads are attached to a single boom or moving arm assembly or actuator.
When the boom moves the read-write heads to a new position, a different set of circular tracks
become accessible.
Cylinder - for a particular position of the boom, the set of tracks swept out by all the read-write heads forms a vertical cylinder.
Seek Operation - the process of moving the boom to a new cylinder.
Latency - the time for the data to rotate from its current position to a position adjacent to the read-write head.
Transmission Time - the time for the record, which may be of arbitrary size, to spin past the read-write head.
The total time taken to access a particular record is a fraction of a second.
3. SCAN Scheduling
Disk arm sweeps back & forth across the disk surface, servicing all requests in its path.
It changes direction only when there are no more requests to service in the current direction.
The SCAN scheduling strategy was developed to overcome the high variance in response times of SSTF.
SCAN operates like SSTF except that it chooses the request that results in the shortest seek
distance in a preferred direction. It is sometimes called the “elevator algorithm”.
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67. The head is initially at cylinder number 53 moving towards larger cylinder numbers
on its servicing pass.
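A short Python sketch that computes the SCAN service order for this request queue; like the description above, it reverses direction only when no requests remain on the current side.

def scan(requests, head, ascending=True):
    up = sorted(r for r in requests if r >= head)     # requests above the head
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if ascending else down + up

order = scan([98, 183, 41, 122, 14, 124, 65, 67], head=53)
print(order)                                          # [65, 67, 98, 122, 124, 183, 41, 14]
print(sum(abs(b - a) for a, b in zip([53] + order, order)))  # total head movement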
4. C - SCAN Scheduling
The disk arm moves unidirectionally across the disk surface toward the inner track.
When there are no more requests for service, it jumps back to service the request nearest the outer track and proceeds inward again.
The arm moves from the outer cylinder to the inner cylinder servicing requests on a shortest
seek basis.
When the arm has completed its inward sweep, it jumps to the request nearest the outermost cylinder and then resumes its inward sweep, processing requests.
It has a very small variance in response time.
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67. The head is initially at cylinder number 53 moving towards larger cylinder numbers
on its servicing pass.
5. N - Step SCAN
The disk arm sweeps back and forth as in SCAN, but all requests that arrive during a sweep are batched and reordered for optimal service during the return sweep.
On each sweep, the first N requests are serviced.
N - Step SCAN offers good performance in throughput and mean response time.
It avoids the possibility of indefinite postponement occurring if a large number of requests arrive for the current cylinder.
New requests are saved in a queue and serviced on the return sweep.
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 122, 14, 124, 65, 67 and 41 (the last of which arrives while cylinder 122 is being processed).
File and Database Systems
Introduction
A file is a collection of data recorded on a secondary storage device such as a disk or floppy.
It may be manipulated as a unit by operations such as,
o Open - prepare a file to be referenced.
o Close - prevent further reference to a file until it is reopened.
o Create - build a new file.
o Destroy - remove a file.
o Copy - create another version of the file with new name.
o Rename - change the name of a file.
Individual data items within the file may be manipulated by operations like,
o Read - input a data item to process from a file.
o Write - output a data item from a process to a file.
o Update - modify an existing data item in a file.
o Insert - add a new data item to a file.
o Delete - remove a data item from a file.
o List – print or display the contents of a file.
Files may be characterised by,
o Volatility - the frequency with which additions and deletions are made to a file.
o Activity - the percentage of a file's records accessed during a given period of time.
o Size - this refers to the amount of information stored in the file.
o Location - the location of the file.
o Accessibility – restrictions placed on access to file data.
o Type - how the file data is used.
File Systems Components are,
o Access Methods - the manner in which data stored in files is accessed.
o File Management - mechanisms for files to be stored, referenced, shared and secured.
o Auxiliary Storage Management - allocating space for files on secondary storage.
o File Integrity Mechanisms - guaranteeing that the information in a file is uncorrupted.
Functions
Users should be able to create, modify and delete files.
Sharing of files should be handled carefully.
Sharing should be with controlled access, such as read access, write access, execute access, or various combinations of these.
Users should be able to structure their files in the manner appropriate to each application.
Users should be able to order the transfer of information between files.
Backup and recovery facilities should be provided.
Files should be kept secure and private.
Users should be able to refer to their files by symbolic names.
The file system should provide a user-friendly interface.
Users should not need to be concerned with the particular devices on which their data is stored.
File Organization
This refers to the manner in which the records of a file are arranged on secondary storage. The various schemes are
Sequential
o The records are placed in sequential physical order; each record follows the previous one. Magnetic tape files and many disk files are arranged in sequential order.
Direct
o Records are directly accessed by their physical address on a direct access storage device. Hashing techniques are used to locate records in direct access files, and the user can place the records in any order.
Indexed Sequential
o Records are arranged in logical sequence according to a key contained in each record. They can be accessed either sequentially or directly through the use of indexed keys.
Partition
o The file consists of a collection of sequential sub-files; each sub-file is called a member. The starting address of each member is stored in the file's directory. Partitioned files are often used to store program libraries or macro libraries.
Allocating and Freeing Space
As files are created, the space they need is allocated, and when files are destroyed their space is freed, much as in primary storage allocation.
This will lead to a problem of fragmentation which can be avoided by performing periodic
compaction or garbage collection.
Files may be reorganized to occupy adjacent areas of the disk and free area may be collected
into a single block or group of large blocks.
Some systems perform compaction dynamically.
Dynamic compaction may not be useful on a system with hundreds of users, because long seeks may be needed as the system switches between processes.
Contiguous Allocation
o Files are assigned to contiguous areas of secondary storage.
o A user specifies in advance the size of the area needed to hold the file to be created.
o If that much contiguous space is not available, the file cannot be created.
Advantages
o Successive logical records are physically adjacent to one another.
o This speeds access compared with systems in which successive logical records are dispersed throughout the disk.
o File directories are straightforward to implement.
Disadvantages
o As files are deleted, the freed space may not fit the new files that need it.
o Adjacent storage holes must be combined.
o Periodic compaction may be needed.
Non Contiguous Allocation
o Sector oriented linked allocation
o Block allocation
Sector Oriented
o The disk is viewed as consisting of individual sectors; sectors belonging to a common file contain pointers to one another, forming a linked list.
o A free space list contains entries for all free sectors on the disk.
o There is no need for compaction.
o A drawback is that retrieving logically contiguous records can require long seeks.
o The pointers in the list structure also reduce the amount of space available for file data.
Block Allocation
o In this scheme instead of allocating individual sectors, blocks of contiguous sectors
are allocated.
o Each access to the file involves finding the appropriate block and then the appropriate sector within the block.
o There are three ways to implement the block allocation
i. Block Chaining
ii. Index Block Chaining
iii. Block Oriented File Mapping
1. Block Chaining
o Each block contains both data and a pointer to the next block.
o Locating a particular record requires following the chain of blocks until the appropriate record is found.
o The chain must be searched from the beginning each time a particular record is needed.
2. Index Block Chaining
o Pointers are placed in separate index blocks. Each entry contains a record identifier and a pointer to that record.
o If more than one index block is needed to describe a file, a series of index blocks is chained together.
Advantage
o Searching may take place in the index blocks themselves.
o Seek time is reduced.
o Index blocks can be kept close together in secondary storage to minimize seeking.
o If rapid searching is needed, the index blocks can be maintained in primary storage.
Disadvantage
o Insertions can require the complete reconstruction of the index blocks, so some systems leave a portion of each index block empty to provide for future insertions.
3. Block Oriented File Mapping
o Instead of storing pointers in the blocks themselves, the system maintains a file map containing the block numbers.
o Each entry in the file map contains the block number of the next block in that file.
o Nil indicates that the last block of the file has been reached.
o Free indicates that the block is available for allocation.
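A small Python sketch of such a file map; the block numbers and the two files shown are hypothetical.

NIL, FREE = -1, -2                                    # sentinel entries

file_map = {4: 7, 7: 2, 2: NIL,                       # file A: blocks 4 -> 7 -> 2
            5: 9, 9: NIL,                             # file B: blocks 5 -> 9
            3: FREE, 8: FREE}                         # blocks free for allocation

def blocks_of(first_block):
    chain, block = [], first_block
    while block != NIL:                               # follow entries until Nil
        chain.append(block)
        block = file_map[block]
    return chain

print(blocks_of(4))                                   # [4, 7, 2]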
File Descriptor
It is also called as file control block.
It contains the information the system needs to manage the file.
It has descriptions like,
o Symbolic file name
o Location
o File organization
o Device type
o Access control data
o Type (data file, obj pgm, source pgm, etc)
o Disposition
o Creation date & time
o Destroy date
o Date and time last modified
o Access activity counts
File descriptors are maintained on secondary storage.
It is controlled by operating system.
The user may not reference it directly.
Access Control Matrix
A two-dimensional matrix is used to list all the users and all the files in the system.
If Aij = 1, user i is allowed to access file j; otherwise Aij = 0.
This matrix is known as the Access Control Matrix.
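A minimal Python sketch of the matrix; the three users and three files are hypothetical.

# A[i][j] = 1 means user i may access file j, else 0.
A = [
    [1, 0, 1],                                        # user 0: files 0 and 2
    [0, 1, 0],                                        # user 1: file 1 only
    [1, 1, 1],                                        # user 2: every file
]

def allowed(user, file):
    return A[user][file] == 1

print(allowed(0, 2))                                  # True
print(allowed(1, 2))                                  # False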
24. The compiler was driven by the ____________ process.
a. Translating b. Parsing c. Asymmetric d. Symmetric
25. A ____________ was invoked as each language construct was recognized by the
parser.
a. Code generation routine b. Code adopting routine
c. Code reading routine d. Code addressing routine
26. ____________ processes a source program written in a high-level language just as
the compiler does.
a. Translator b. Assembler c. Interpreter d. Processor
27. An interpreter usually performs ____________ analysis functions.
a. Lexical & Syntactic b. Lexical & Symmetric
c. Syntactic & Symmetric d. Lexical & Semantic
28. P-code compilers are also called ____________ compilers.
a. Port code b. Parse code c. Byte code d. Simple code
29. A ____________ can be used without modification on a wide variety of systems.
a. Single pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
30. A ____________ is a software tool that can be used to help in the task of compiler construction.
a. Single pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
31. In some languages a program can be divided into units called ____________
a. Statements b. Mnemonics c. Quadruples d. Blocks
32. Lexical analyser is also known as ____________
a. Scanner b. Parser c. Code generator d. Code optimizer
33. Syntax analyser is also known as ____________
a. Scanner b. Parser c. Code generator d. Code optimizer
34. ____________ is preferred when a program is to be executed only a few times.
a. One pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
35. ____________ is designed for single-user microcomputer systems.
a. Single pass compiler b. Interpreter
c. P-code compiler d. Compilers-compilers
36. A compiler-compiler is also known as ____________
a. Compiler unit b. Translator writing system
c. Compiler generation d. Both b and c
37. The READ and WRITE statements are represented with a ___________ operation.
a. Call b. Convert c. Design d. Execute
1. The term ___________ was first used by the designers of the Multics system in the
1960s.
a. Process b. Processor c. Operation d. Process States
2. Process is ___________
a. Program in execution b. Animated spirit of a procedure
c. Dispatchable unit d. All the above
3. A process may be in one of the ___________ states.
a. Ready, Running & Blocked b. Create, Running & Blocked
c. Create, Ready & Running d. Ready, Running & Stopped
4. The ___________ sets a hardware interrupting clock or interval timer to allow the user to utilize the Processor.
a. Operating System b. System Module c. A Program d. A Switch
5. The manifestation of a process in an operating system is a ___________
a. Process Description b. Process Control Block (PCB)
c. Both a & b d. Process Matrix
6. The specific time interval for which a user is allowed to run is known as ___________
a. Time Slice b. Quantum c. Both a & b d. Random Time
7. The assignment of the CPU to the first process on the ready list is called
___________
a. Transferring b. Locating c. Dispatching d. Linking
8. ___________ is an event that alters the sequence in which a processor executes
instructions.
a. Interrupt b. Dispatcher c. Scheduler d. Monitor
9. An interrupt specifically initiated by a running process is called ___________
a. Trap b. Synchronous c. Both a & b d. Asynchronous
10. An interrupt that may or may not be related to the running process is called ___________
a. Asynchronous b. Synchronous c. Asymmetric d. Symmetric
11. SVC stands for ___________
a. Supervisor Call Interrupt b. System Visit Call Interrupt
c. System Vision Call Interrupt d. System Visual Call Interrupt
12. A ___________ instruction arriving from another processor on a multiprocessor system will restart the system.
a. Signal Processor (SIGP) b. Start Processor
c. Stop Processor d. Restart Processor
13. ___________ is caused by a wide range of problems that may occur as a program's machine language instructions are executed.
a. Program Check Interrupt b. Machine Check Interrupt
c. Input Output Interrupt d. External Interrupt
14. Dividing main storage into portions of the same size produces portions called ___________
a. Storage Organization b. Partition c. Storage Management d. Segments
15. ___________ is caused by malfunctioning hardware.
a. Program Check Interrupt b. Machine Check Interrupt
c. Input Output Interrupt d. External Interrupt
16. ___________ defines the manner in which the main storage is viewed.
a. Storage Organization b. Partition c. Storage Management d. Processor
17. ___________ strategies are concerned with obtaining the next piece of program or data from secondary storage into main storage.
a. Fetch b. Placement c. Replacement d. All the three
18. In ___________, data is brought into main storage only when it is referenced by a running program.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
19. ___________ strategies are concerned with determining where in main storage to place incoming programs.
a. Fetch b. Placement c. Replacement d. All the three
20. Today many researchers feel that ___________ will yield improved system
performance.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
21. In ___________, a program must occupy a single contiguous block of storage locations.
a. Contiguous storage allocation b. Non-contiguous storage allocation
c. Overlay storage allocation d. All the three
22. The blocks into which a program is divided are called ___________
a. Segments b. Paging c. Partitions d. Blocks
23. Virtual storage systems have obviated the need for programmer-controlled ___________
a. Segments b. Paging c. Partitions d. Overlays
24. ___________ occurs in every computer system regardless of its storage organization.
a. Storage fragmentation b. Storage Overlay
c. Storage compaction d. None of the above
1. Deciding when a page or segment should be brought from secondary to primary storage is the ___________ strategy.
a. Fetch b. Placement c. Replacement d. All the three
2. ___________ Strategy waits for a process to reference a page or segment before
loading it.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
3. The ___________ strategy attempts to determine what pages a process will reference.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
4. The ___________ strategy determines where in primary storage to place an incoming page or segment.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
5. Few placement strategies are ___________
a. First, Best, Worst & Buddy b. First, Best, Worst & Last
c. First, Second, Best & Worst d. First, Second, Best & Worst
6. Deciding which page or segment to remove from main memory to make more space is the ___________ strategy.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
7. Which of the following is/are replacement strategies ___________
a. First In First Out & Clock b. Second Chance & Random
c. Least Recently Used d. All of the above
8. Incoming pages may be placed in any available page frame, so ___________ systems trivialize the placement decision.
a. Paging b. Fragmentation c. Compaction d. Replacement
9. Deciding which page in primary storage to displace (or remove) to make room (or space) for an incoming page is the ___________ strategy.
a. Demand Fetch b. Placement c. Replacement d. Anticipatory Fetch
10. Replacing the page that will not be used again for the furthest time into the future is called ___________
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
11. The principle of optimality is called ___________
a. OPT or MIN b. Second Chance c. Trivial d. Non Trivial
12. ___________ Replacement selects any page or random page for replacement.
a. First In First Out b. Second Chance
c. Random Page d. The Principle of Optimality
13. Random page replacement is a rarely used, ___________ approach.
a. Hit b. Miss c. Hit or Miss d. Static
14. ___________ chooses the page that has been in storage the longest.
a. First In First Out b. Second Chance
c. Random Page d. The Principle of Optimality
15. In the ___________ strategy, arriving pages are placed at the tail of the queue and pages are replaced from the head of the queue.
a. First In First Out b. Second Chance
c. Random Page d. The Principle of Optimality
16. ___________ Strategy selects that page for replacement that has not been used for
the longest time.
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
17. In the ___________ strategy, a cache block is removed whenever the cache overflows.
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
18. For the referenced bit in LRU, a page that has not been referenced is denoted as ________
a. Zero b. One c. Asterisk d. Hyphen
19. For the referenced bit in LRU, a page that has been referenced is denoted as ___________
a. Zero b. One c. Asterisk d. Hyphen
20. The strategy under which the least frequently used page is replaced is called ___________
a. First In First Out b. Second Chance
c. Least Recently Used d. The Principle of Optimality
21. For the modified bit in LRU, a page that has not been modified is denoted as _________
a. Zero b. One c. Asterisk d. Hyphen
22. For the modified bit in LRU, a page that has been modified is denoted as ___________
a. Zero b. One c. Asterisk d. Hyphen
23. The modified bit in LRU is often called the ___________
a. Dirty bit b. Refer bit c. Later bit d. Update bit
24. The CPU cannot be taken away from that process then the scheduling discipline is
called ___________
a. Preemptive b. Non Preemptive c. Static d. Dynamic
25. The CPU can be taken away from that process, then the scheduling discipline is
called ___________
a. Preemptive b. Non Preemptive c. Static d. Dynamic
26. ___________ does not change.
a. Preemptive b. Non Preemptive c. Static Priority d. Dynamic Priority
27. ___________ mechanisms are responsive to change.
a. Preemptive b. Non Preemptive c. Static Priority d. Dynamic Priority
28. A user with a rush job may be willing to pay a premium; the priority is called
___________
a. Purchased b. Assigned Priority c. Static Priority d. Dynamic Priority
29. Certain jobs are scheduled to be completed by a specific time is called
___________ Scheduling.
a. Static b. Dynamic c. Deadline d. Timing
30. The waste of storage due to excessively large tables is called ___________
a. Paging b. Segmentation c. Fragmentation d. None of the above
31. In a ___________ system, real storage is divided into fixed-size page frames.
a. Paging b. Segmentation c. Fragmentation d. None of the above
32. Reducing the ___________ of a process's page waits is an important goal of storage
management strategies.
a. Space Time Product b. Space c. Time d. None of the three
33. The process of loading a page into memory when it is needed is known as ___________
a. Demand Paging b. Segmentation c. Fragmentation d. Thrashing
34. The fact that the path of execution a program will take cannot be accurately predicted is called the ___________ problem.
a. Exiting b. Halting c. Execution d. Path
35. The time between page faults is called the ___________
a. Inter fault time b. Inter page time c. Inter fetch time d. All the three
36. The working set theory of program behaviour, a view of program paging activity, was developed by ___________
a. Denning b. Ritchie c. Henry d. Bench
37. A program that repeatedly requests pages from secondary storage is said to be ________
a. Demand Paging b. Segmentation c. Fragmentation d. Thrashing
22. A ___________ contains entries for all free sectors on the disk.
a. Free space list b. Used space list c. Space list d. None of the above
23. One way to control access to files is to create a two-dimensional ___________
a. Process Description b. Process Control Block (PCB)
c. File Descriptor d. Access Control Matrix