21AI404 OS Unit I
This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only to the respective group /
learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender
immediately by e-mail if you have received this document by mistake and delete
this document from your system. If you are not the intended recipient you are
notified that disclosing, copying, distributing or taking any action in reliance on
the contents of this information is strictly prohibited.
21AI404
OPERATING SYSTEM
FUNDAMENTALS
(LAB INTEGRATED)
UNIT I
Department : Artificial Intelligence &
Data Science
Batch/Year : 2021 - 2025 /II
Created by : Dr. G. Sangeetha, Dr. Josephin Shermila
Date : 18.01.2023
Table of Contents
S.No   Contents                                            Page No
1      Contents                                            1
2      Course Objectives                                   6
3      Pre Requisites (Course Names with Code)             8
4      Syllabus (With Subject Code, Name, LTPC details)    10
5      Course Outcomes                                     14
7      Lecture Plan                                        18
9      Lecture Notes                                       22
10     Assignments                                         95
11     Part A (Q & A)                                      97
12     Part B Qs                                           104
COURSE OBJECTIVES
To explain the basic concepts of operating systems and processes.
To discuss threads and implement various CPU scheduling algorithms.
To describe the concept of process synchronization and implement deadlock-handling algorithms.
To analyse various page replacement schemes.
To investigate disk scheduling algorithms.
21AI404 OPERATING SYSTEM FUNDAMENTALS (LAB INTEGRATED)
PREREQUISITE
Digital Principles and Computer Architecture
Data Structures
21AI404 - OPERATING SYSTEM FUNDAMENTALS (LAB INTEGRATED)
SYLLABUS (L T P C: 3 0 0 3)
LIST OF EXPERIMENTS:
1. Basic Unix file system commands such as ls, cd, mkdir, rmdir, cp, rm, mv, more,
lpr, man, grep, sed, etc.
2. Shell Programming
3. Programs for Unix System Calls.
a. Write a program to fetch the following information: name of the operating system, current release level, current version level, total usable main memory size, available memory size, amount of shared memory, memory used by buffers, total swap space size, and swap space still available.
b. Use system calls to imitate the action of the UNIX command "ls" with options -a and -li.
c. Use system calls to imitate the action of the UNIX command "cp" or "dir" with a couple of options.
d. Implement the process life cycle: use the system calls fork(), exec(), wait(), waitpid(), exit(0), abort() and kill().
4. Write a program to implement the following actions using pthreads (a sketch for part (a) appears after this list):
a) Create a thread in a program, called the Parent thread; this parent thread creates another thread (the Child thread) to print out the numbers from 1 to 20. The Parent thread waits till the child thread finishes.
b) Create a thread in the main program; the program passes 'count' as an argument to that thread function, and the created thread function has to print your name 'count' times.
5. Process synchronization using semaphores. Shared data has to be accessed by two categories of processes, namely A and B. Satisfy the following constraints to access the data without any data loss:
(i) When a process A1 is accessing the database, another process of the same category is permitted.
(ii) When a process B1 is accessing the database, neither process A1 nor another process B2 is permitted.
(iii) When a process A1 is accessing the database, process B1 should not be allowed to access the database.
Write appropriate code for both A and B satisfying all the above constraints using semaphores.
Note: The time-stamp for accessing is approximately 10 sec.
6. Implementation of IPC using Shared memory
a. Write a UNIX system call program to implement the following shared memory
concept
(i) In process 1 - Create a shared memory of size 5 bytes with read/write permission and enter a balance amount of Rs 1000.
(ii) In process 2 - Add Rs. 200 to your balance. During this modification, maintain the atomicity of shared memory using a binary semaphore.
(iii) In process 3 - Subtract Rs. 800 from your balance. During this modification too, maintain the atomicity of shared memory using a binary semaphore.
7. Implementation of IPC using Message Queues
a) Get the input data (integer value) from a process called sender.
b) Use Message Queue to transfer this data from sender to receiver process
c) The receiver does the prime number checking on the received data
d) Communicate the verified/status result from receiver to sender process, this status
should be displayed in the Sender process.
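For experiment 4(a), a minimal sketch is shown below, assuming POSIX threads (compile with -pthread); the function name child_fn is illustrative:

#include <pthread.h>
#include <stdio.h>

/* Child thread: print the numbers 1 to 20. */
static void *child_fn(void *arg) {
    for (int i = 1; i <= 20; i++)
        printf("%d\n", i);
    return NULL;
}

int main(void) {
    pthread_t child;

    /* The Parent thread creates the Child thread ... */
    if (pthread_create(&child, NULL, child_fn, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    /* ... and waits till the child thread finishes. */
    pthread_join(child, NULL);
    printf("Parent: child thread finished\n");
    return 0;
}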
PO's/PSO's

COs   PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2  PSO3
CO1    3    2    2    2    2    -    -    -    2    -     -     2     2     -     -
CO2    3    3    2    2    2    -    -    -    2    -     -     2     2     -     -
CO3    2    2    1    1    1    -    -    -    1    -     -     1     2     -     -
CO4    3    3    1    1    1    -    -    -    1    -     -     1     2     -     -
CO5    3    3    1    1    1    -    -    -    1    -     -     1     3     1     -
Lecture Plan
S No | Topics | No of periods | Proposed date | Actual lecture date | Pertaining CO | Taxonomy level | Mode of delivery
1.2 Computer-System Organization
A modern general-purpose computer system consists of one or more CPUs and a
number of device controllers connected through a common bus that provides access
between components and shared memory (Figure 1.2).
The device controller is responsible for moving the data between the peripheral
devices that it controls and its local buffer storage. Typically, operating systems have a
device driver for each device controller. This device driver understands the device
controller and provides the rest of the operating system with a uniform interface to the
device. The CPU and the device controllers can execute in parallel, competing for
memory cycles. To ensure orderly access to the shared memory, a memory controller
synchronizes access to the memory.
1.2.1 Interrupts
Consider a typical computer operation: a program performing I/O. To start an I/O operation, the device driver loads the appropriate registers in the device controller. The device controller, in turn, examines the contents of these registers to determine what action to take (such as "read a character from the keyboard").
Figure 1.2 A typical PC computer system
The controller starts the transfer of data from the device to its local buffer. Once the
transfer of data is complete, the device controller informs the device driver that it has
finished its operation. The device driver then gives control to other parts of the
operating system, possibly returning the data or a pointer to the data if the operation
was a read. For other operations, the device driver returns status information such as
“write completed successfully” or “device busy”. But how does the controller inform the
device driver that it has finished its operation? This is accomplished via an interrupt.
1.2.1.1 Overview
• Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually
by way of the system bus. Interrupts are used for many other purposes as well and
are a key part of how operating systems and hardware interact.
• When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location.
• The fixed location usually contains the starting address where the service routine for
the interrupt is located.
• The interrupt service routine executes; on completion, the CPU resumes the
interrupted computation. A timeline of this operation is shown in Figure 1.3.
Interrupts are an important part of a computer architecture. Each computer design
has its own interrupt mechanism, but several functions are common. The interrupt
must transfer control to the appropriate interrupt service routine.
The interrupt vector is the array of addresses of the interrupt service routines for the
various devices.
This interrupt vector is then indexed by a unique number, given with the interrupt
request, to provide the address of the interrupt service routine for the interrupting
device.
The interrupt architecture must also save the state information of whatever was
interrupted, so that it can restore this information after servicing the interrupt. If the
interrupt routine needs to modify the processor state— for instance, by modifying
register values—it must explicitly save the current state and then restore that state
before returning.
After the interrupt is serviced, the saved return address is loaded into the program
counter, and the interrupted computation resumes as though the interrupt had not
occurred.
1.2.1.2 Implementation
The basic interrupt mechanism works as follows. The CPU hardware has a wire called
the interrupt-request line that the CPU senses after executing every instruction.
When the CPU detects that a controller has asserted a signal on the interrupt-request
line, it reads the interrupt number and jumps to the interrupt-handler routine by
using that interrupt number as an index into the interrupt vector.
It then starts execution at the address associated with that index. The interrupt
handler saves any state it will be changing during its operation, determines the cause
of the interrupt, performs the necessary processing, performs a state restore, and
executes a return from interrupt instruction to return the CPU to the execution state
prior to the interrupt.
The device controller raises an interrupt by asserting a signal on the interrupt
request line, the CPU catches the interrupt and dispatches it to the interrupt
handler, and the handler clears the interrupt by servicing the device.
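The dispatch step can be pictured in C as indexing an array of handler function pointers. This is a conceptual sketch only, not actual kernel code; all names are illustrative:

#define NUM_VECTORS 256

typedef void (*isr_t)(void);                  /* interrupt service routine */
static isr_t interrupt_vector[NUM_VECTORS];

void dispatch_interrupt(int irq_number) {
    /* (state save happens here in real hardware/kernels) */
    isr_t handler = interrupt_vector[irq_number];  /* index by interrupt number */
    if (handler != 0)
        handler();                            /* run the service routine */
    /* (state restore and return-from-interrupt happen here) */
}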
In a modern operating system, however, there is the need for more sophisticated interrupt-handling features:
1. The ability to defer interrupt handling during critical processing.
2. An efficient way to dispatch to the proper interrupt handler for a device.
3. Multilevel interrupts, so that the operating system can distinguish between high- and low-priority interrupts and can respond with the appropriate degree of urgency.
In modern computer hardware, these three features are provided by the CPU and the
interrupt-controller hardware. Most CPUs have two interrupt request lines. One is the
nonmaskable interrupt, which is reserved for events such as unrecoverable
memory errors.
The second interrupt line is maskable: it can be turned off by the CPU before
the execution of critical instruction sequences that must not be interrupted.
The maskable interrupt is used by device controllers to request service.
In practice, however, computers have more devices (and, hence, interrupt handlers)
than they have address elements in the interrupt vector. A common way to solve this
problem is to use interrupt chaining, in which each element in the interrupt vector
points to the head of a list of interrupt handlers.
When an interrupt is raised, the handlers on the corresponding list are called one by
one, until one is found that can service the request. This structure is a compromise
between the overhead of a huge interrupt table and the inefficiency of dispatching to
a single interrupt handler.
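Interrupt chaining can be modeled as a linked list of handlers hanging off each vector entry, each handler reporting whether it serviced the request. Again a conceptual sketch with illustrative names:

#include <stdbool.h>

/* Each interrupt-vector entry points to the head of a list of handlers. */
struct handler_node {
    bool (*handle)(void);        /* returns true if it serviced the device */
    struct handler_node *next;
};

void dispatch_chained(struct handler_node *head) {
    for (struct handler_node *h = head; h != NULL; h = h->next)
        if (h->handle())
            return;              /* stop at the first handler that services it */
}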
Figure 1.5 illustrates the design of the interrupt vector for Intel processors. The
events from 0 to 31, which are nonmaskable, are used to signal various error
conditions. The events from 32 to 255, which are maskable, are used for purposes
such as device-generated interrupts.
The CPU can load instructions only from memory, so any programs must first be
loaded into memory to run. General-purpose computers run most of their programs
from rewritable memory, called main memory (also called random-access memory, or
RAM).
All forms of memory provide an array of bytes. Each byte has its own address.
Interaction is achieved through a sequence of load or store instructions to specific
memory addresses. The load instruction moves a byte or word from main memory to
an internal register within the CPU, whereas the store instruction moves the content
of a register to main memory.
Aside from explicit loads and stores, the CPU automatically loads instructions from
main memory for execution from the location stored in the program counter.
Ideally, we want programs and data to reside in main memory permanently. This arrangement usually is not possible, for two reasons:
1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is volatile—it loses its contents when power is turned off or otherwise lost.
The most common secondary-storage devices are hard-disk drives (HDDs) and
nonvolatile memory (NVM) devices, which provide storage for both programs and
data. Most programs (system and application) are stored in secondary storage until they
are loaded into memory. Many programs then use secondary storage as both the source
and the destination of their processing.
In a larger sense, however, the storage structure that we have described —consisting of
registers, main memory, and secondary storage—is only one of many possible storage
system designs. Other possible components include cache memory, CD-ROM or blu-ray,
magnetic tapes, and so on. Those that are slow enough and large enough that they are
used only for special purposes — to store backup copies of material stored on other
devices, for example— are called tertiary storage.
Each storage system provides the basic functions of storing a datum and holding that
datum until it is retrieved at a later time. The main differences among the various
storage systems lie in speed, size, and volatility.
The wide variety of storage systems can be organized in a hierarchy (Figure 1.6)
according to storage capacity and access time. As a general rule, there is a trade-off
between size and speed, with smaller and faster memory closer to the CPU. As shown in
the figure, in addition to differing in speed and capacity, the various storage systems are
either volatile or nonvolatile.Volatile storage, as mentioned earlier, loses its contents
when the power to the device is removed, so data must be written to nonvolatile
storage for safekeeping.
The top four levels of memory in the figure are constructed using semiconductor
memory, which consists of semiconductor-based electronic circuits. NVM devices, at
the fourth level, have several variants but in general are faster than hard disks. The
most common form of NVM device is flash memory, which is popular in mobile devices
such as smartphones and tablets. Increasingly, flash memory is being used for long-
term storage on laptops, desktops, and servers as well.
•Nonvolatile storage retains its contents when power is lost. It will be referred to as
NVS. The vast majority of the time we spend on NVS will be on secondary storage. This
type of storage can be classified into two distinct types:
◦Mechanical. A few examples of such storage systems are HDDs, optical disks,
holographic storage, and magnetic tape.
Electrical. A few examples of such storage systems are flash memory, FRAM, NRAM,
and SSD. Electrical storage will be referred to as NVM.
Mechanical storage is generally larger and less expensive per byte than electrical
storage. Conversely, electrical storage is typically costly, smaller, and faster than
mechanical storage.
Caches can be installed to improve performance where a large disparity in access time or transfer rate exists between two components.
A large portion of operating system code is dedicated to managing I/O, both because of
its importance to the reliability and performance of a system and because of the varying
nature of the devices.
To move large blocks of data efficiently, direct memory access (DMA) is used: after setting up buffers, pointers, and counters for the I/O device, the device controller transfers an entire block of data directly to or from main memory, with no intervention by the CPU. While the device controller is performing these operations, the CPU is available to accomplish other work. Some high-end systems use switch rather than bus architecture. On these systems, multiple components can talk to other components concurrently, rather than competing for cycles on a shared bus. In this case, DMA is even more effective. Figure 1.7 shows the interplay of all components of a computer system.
1.3 Computer-System Architecture
A computer system can be organized in several different ways, which we can categorize roughly according to the number of general-purpose processors used.
1.3.1 Single-Processor Systems
Many years ago, most computer systems used a single processor containing one CPU
with a single processing core.
The core is the component that executes instructions and contains registers for storing data locally.
The one main CPU with its core is capable of executing a general-purpose instruction
set, including instructions from processes. These systems have other special-purpose
processors as well.
They may come in the form of device-specific processors, such as disk, keyboard, and
graphics controllers. All of these special-purpose processors run a limited instruction set
and do not run processes.
The operating system cannot communicate with these processors; they do their jobs
autonomously.
The use of special-purpose microprocessors is common and does not turn a single-
processor system into a multiprocessor.
If there is only one general-purpose CPU with a single processing core, then the system
is a single-processor system. Very few contemporary computer systems are single-
processor systems.
1.3.2 Multiprocessor Systems
Traditionally, such systems have two (or more) processors, each with a single-core
CPU. The processors share the computer bus and sometimes the clock, memory, and
peripheral devices. The primary advantage of multiprocessor systems is
increased throughput.
That is, by increasing the number of processors, we expect to get more work done in
less time. The speed-up ratio with N processors is not N, however; it is less than N.
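This loss can be quantified with Amdahl's law (a standard formula, not part of the original text): if a fraction p of a task can be parallelized across N processors, the speedup is at most

speedup <= 1 / ((1 - p) + p/N)

For example, with p = 0.75 and N = 2 the speedup is at most 1 / (0.25 + 0.375) = 1.6, not 2.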
The most common multiprocessor systems use symmetric multiprocessing (SMP), in which each peer CPU processor performs all tasks, including operating-system functions and user processes. Figure 1.8 illustrates a typical SMP architecture with two processors, each with its own CPU. Each CPU processor has its own set of registers, as well as a private, or local, cache.
However, all processors share physical memory over the system bus.
The benefit of this model is that many processes can run simultaneously —N
processes can run if there are N CPUs—without causing performance to deteriorate
significantly. However, since the CPUs are separate, one may be sitting idle while
another is overloaded, resulting in inefficiencies.
These inefficiencies can be avoided if the processors share certain data structures. A
multiprocessor system of this form will allow processes and resources—such as
memory— to be shared dynamically among the various processors and can lower the
workload variance among the processors.
Figure 1.9 shows a dual-core design with two cores on the same processor chip. In
this design, each core has its own register set, as well as its own local cache, often
known as a level 1, or L1, cache.
A level 2 (L2) cache is local to the chip but is shared by the two processing cores.
Most architectures adopt this approach, combining local and shared caches, where
local, lower-level caches are generally smaller and faster than higher-level shared
caches.
An alternative approach is instead to provide each CPU (or group of CPUs) with its
own local memory that is accessed via a small, fast local bus.
The CPUs are connected by a shared system interconnect, so that all CPUs share
one physical address space. This approach—known as non-uniform memory access,
or NUMA—is illustrated in Figure 1.10.
The advantage is that, when a CPU accesses its local memory, not only is it fast,
but there is also no contention over the system interconnect.
Thus, NUMA systems can scale more effectively as more processors are added. A
potential drawback with a NUMA system is increased latency when a CPU must
access remote memory across the system interconnect, creating a possible
performance penalty.
Reference Video
Multiprocessing OS
https://youtu.be/IZfWjg3U3mA
Blade servers are systems in which multiple processor boards, I/O boards, and
networking boards are placed in the same chassis.
The difference between these and traditional multiprocessor systems is that each blade-processor board boots independently and runs its own operating system. Some
blade-server boards are multiprocessor as well, which blurs the lines between types
of computers. In essence, these servers consist of multiple independent
multiprocessor systems.
1.3.3 Clustered Systems
Clustered computers share storage and are closely linked via a local-area network (LAN) or a faster interconnect, such as InfiniBand. Clustering is usually used to
provide high-availability service— that is, service that will continue even if one or
more systems in the cluster fail.
A layer of cluster software runs on the cluster nodes. Each node can monitor
one or more of the others (over the network). If the monitored machine fails, the
monitoring machine can take ownership of its storage and restart the applications that
were running on the failed machine.
The users and clients of the applications see only a brief interruption of
service. High availability provides increased reliability, which is crucial in many
applications. The ability to continue providing service proportional to the level of
surviving hardware is called graceful degradation. Some systems go beyond graceful
degradation and are called fault tolerant, because they can suffer a failure of any
single component and still continue operation. Fault tolerance requires a mechanism
to allow the failure to be detected, diagnosed, and, if possible, corrected.
In symmetric clustering, two or more hosts are running applications and are monitoring each other. This structure is obviously more efficient, as it uses all of the available hardware. However, it does require that more than one application be available to run. Clusters can also provide high-performance computing environments through parallelization, which divides a program into separate components that run in parallel on individual cores in a computer or computers in a cluster.
Each machine has full access to all data in the database. To provide this
shared access, the system must also supply access control and locking to ensure that no
conflicting operations occur. This function, commonly known as a distributed lock
manager (DLM), is included in some cluster technology.
Reference Video
There are two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1).
With the mode bit, we can distinguish between a task that is executed
on behalf of the operating system and one that is executed on behalf of the user.
When the computer system is executing on behalf of a user application, the system
is in user mode. However, when a user application requests a service from the
operating system (via a system call), the system must transition from user to kernel
mode to fulfill the request. This is shown in Figure 1.13.
At system boot time, the hardware starts in kernel mode. The
operating system is then loaded and starts user applications in user mode.
Whenever a trap or interrupt occurs, the hardware switches from user mode to
kernel mode (that is, changes the state of the mode bit to 0). Thus, whenever the
operating system gains control of the computer, it is in kernel mode. The system
always switches to user mode (by setting the mode bit to 1) before passing control
to a user program.
Timer:
We must ensure that the operating system maintains control over the CPU.
We cannot allow a user program to get stuck in an infinite loop or to fail to call system
services and never return control to the operating system. To accomplish this goal, we
can use a timer.
A timer can be set to interrupt the computer after a specified period. The
period may be fixed (for example, 1/60 second) or variable (for example, from 1
millisecond to 1 second).
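A user-level analogue of this mechanism, assuming POSIX signals and the setitimer() call (SIGALRM plays the role of the timer interrupt); a sketch only, not how the kernel's own timer is implemented:

#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

/* Handler invoked each time the timer "interrupt" (SIGALRM) fires. */
static void on_timer(int sig) {
    /* write() is async-signal-safe, unlike printf() */
    write(STDOUT_FILENO, "timer fired\n", 12);
}

int main(void) {
    struct itimerval tv = {0};

    signal(SIGALRM, on_timer);
    tv.it_value.tv_sec = 1;      /* first expiry after 1 second */
    tv.it_interval.tv_sec = 1;   /* then every second thereafter */
    setitimer(ITIMER_REAL, &tv, NULL);

    for (;;)
        pause();                 /* wait; control returns on each signal */
}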
Reference Video
An operating system is a resource manager. The system’s CPU, memory space, file-
storage space, and I/O devices are among the resources that the operating system
must manage.
1.5.1 Process Management
The operating system is responsible for creating and deleting both user and system processes, scheduling processes and threads on the CPUs, suspending and resuming processes, and providing mechanisms for process synchronization and communication.
1.5.2 Memory Management
The operating system is responsible for the following activities in connection with memory management:
• Keeping track of which parts of memory are currently being used and which process is using them
• Allocating and deallocating memory space as needed
• Deciding which processes (or parts of processes) and data to move into and out of memory
1.5.3 File-System Management
The operating system abstracts from the physical properties of its storage devices to define a logical storage unit, the file. The operating system maps files onto physical media and accesses these files via the storage devices.
1.5.4 Mass-Storage Management
The operating system is responsible for the following activities in connection with storage management:
• Free-space management
• Storage allocation
• Disk scheduling
• Partitioning
• Protection
1.5.5 Cache Management
Caching is an important principle of computer systems: as information is used, it is copied into a faster storage system, the cache, on a temporary basis. For instance, most systems have an instruction cache to hold the instructions expected to be executed next. Without this cache, the CPU would have to wait several cycles while an instruction was fetched from main memory. Careful selection of the cache size and of a replacement policy can result in greatly increased performance, as you can see by examining Figure 1.14.
The movement of information between levels of a storage hierarchy
may be either explicit or implicit, depending on the hardware design and the
controlling operating-system software. For instance, data transfer from cache to
CPU and registers is usually a hardware function, with no operating-system
intervention.
Since the various CPUs can all execute in parallel, one must make sure that an update to the value of a variable in one cache is immediately reflected in all other caches where the variable resides. This situation is called cache coherency, and it is usually a hardware issue (handled below the operating-system level).
1.5.6 I/O System Management
The I/O subsystem consists of a memory-management component (buffering, caching, and spooling), a general device-driver interface, and drivers for specific hardware devices. Only the device driver knows the peculiarities of the specific device to which it is assigned.
1.6 Security and Protection
Protection is any mechanism for controlling the access of processes
or users to the resources defined by a computer system. This mechanism must
provide means to specify the controls to be imposed and to enforce the controls.
Protection can improve reliability by detecting latent errors at the interfaces between
component subsystems. Early detection of interface errors can often prevent
contamination of a healthy subsystem by another subsystem that is malfunctioning.
Furthermore, an unprotected resource cannot defend against use (or misuse) by an
unauthorized or incompetent user.
Reference Video
1.7 Virtualization
Virtualization allows an operating system to run as an application within another operating system. Emulation, in contrast, simulates computer hardware in software and is typically used when the source CPU type is different from the target CPU type. Every machine-level instruction that runs natively on the source system must be translated to the equivalent function on the target system, frequently resulting in several target instructions. If the source and target CPUs have similar performance levels, the emulated code may run much more slowly than the native code.
With virtualization, in contrast, an operating system that is natively compiled for a
particular CPU architecture runs within another operating system also native to that CPU.
Virtualization first came about on IBM mainframes as a method for multiple users to run
tasks concurrently.
Running multiple virtual machines allowed (and still allows) many users to run tasks on a system designed for a single user. Later, in response to problems with running multiple Microsoft Windows applications on the Intel x86 CPU, VMware created a new
virtualization technology in the form of an application that ran on Windows. That application
ran one or more guest copies of Windows or other native x86 operating systems, each
running its own applications. (See Figure 1.16.).
Windows was the host operating system, and the VMware application was the
virtual machine manager (VMM). The VMM runs the guest operating systems, manages their
resource use, and protects each guest from the others. Even though modern operating
systems are fully capable of running multiple applications reliably, the use of virtualization
continues to grow. On laptops and desktops, a VMM allows the user to install multiple
operating systems for exploration or to run applications written for operating systems other
than the native host.
Companies writing software for multiple operating systems can use
virtualization to run all of those operating systems on a single physical
server for development, testing, and debugging. Within data centers,
virtualization has become a common method of executing and managing computing
environments. VMMs like VMware ESX and Citrix XenServer no longer run on host
operating systems but rather are the host operating systems, providing services and
resource management to virtual machine processes.
Reference Video
Virtualization
https://youtu.be/iBI31dmqSX0
1.8 Operating-System Services
An operating system provides an environment for the execution of
programs. It makes certain services available to programs and to the users of those
programs. The specific services provided, of course, differ from one operating
system to another, but we can identify common classes.
• User interface.
Almost all operating systems have a user interface (UI). This interface
can take several forms. One provides a command-line interface (CLI), which uses text commands and a method for entering them (say, a keyboard for typing in commands in a specific format with specific options). Another is a graphical user interface (GUI), and a third is a touch-screen interface. Some systems provide two or all three of these variations.
• Program execution.
The system must be able to load a program into memory and to run that
program. The program must be able to end its execution, either
normally or abnormally (indicating error).
• I/O operations.
A running program may require I/O, which may involve a file or an I/O
device. For specific devices, special functions may be desired (such as
reading from a network interface or writing to a file system).
For efficiency and protection, users usually cannot control I/O devices
directly. Therefore, the operating system must provide a means to do
I/O.
• File-system manipulation.
Programs need to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information. Finally, some operating systems include permissions management to allow or deny access to files or directories based on file ownership.
• Communications.
One process may need to exchange information with another process, either between processes on the same computer or between processes on different computer systems tied together by a network. Communications may be implemented via shared memory or through message passing.
• Error detection.
Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow or an attempt to access an illegal memory location). For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing.
• Resource allocation.
When there are multiple processes running at the same time, resources
must be allocated to each of them.
There may also be routines to allocate printers, USB storage drives, and
other peripheral devices.
• Logging.
We want to keep track of which programs use how much and what kinds
of computer resources.
This record keeping may be used for accounting (so that users can be
billed) or simply for accumulating usage statistics.
Reference Video
• There are three fundamental approaches for users to interface with the operating system. One provides a command-line interface, or command interpreter, that allows users to directly enter commands to be performed by the operating system. The other two allow users to interface with the operating system via a graphical user interface, or GUI, and via a touch screen.
• Most operating systems, including Linux, UNIX, and Windows, treat the
command interpreter as a special program that is running when a
process is initiated or when a user first logs on (on interactive systems).
On systems with multiple command interpreters to choose from, the
interpreters are known as shells.
• For example, on UNIX and Linux systems, a user may choose among
several different shells, including the C shell, Bourne-Again shell, Korn
shell, and others. Third-party shells and free user-written shells are also
available.
• Most shells provide similar functionality, and a user’s choice of which shell to
use is generally based on personal preference. Figure 1.18 shows the Bourne-
Again (or bash) shell command interpreter being used on macOS.
Depending on the mouse pointer’s location, clicking a button on the mouse can
invoke a program, select a file or directory—known as a folder—or pull down a menu
that contains commands.
The first GUI appeared on the Xerox Alto computer in 1973. However, graphical
interfaces became more widespread with the advent of Apple Macintosh computers in
the 1980s. The user interface for the Macintosh operating system has undergone
various changes over the years, the most significant being the adoption of the Aqua
interface that appeared with macOS.
Microsoft’s first version of Windows— Version 1.0—was based on the
addition of a GUI interface to the MS-DOS operating system.
Figure 1.19 illustrates the touch screen of the Apple iPhone. Both the iPad and the
iPhone use the Springboard touch-screen interface.
In contrast, most Windows users are happy to use the Windows GUI
environment and almost never use the shell interface. Recent versions of the
Windows operating system provide both a standard GUI for desktop and traditional
laptops and a touch screen for tablets. The various changes undergone by the
Macintosh operating systems also provide a nice study in contrast. Historically, Mac
OS has not provided a command-line interface, always requiring its users to interface
with the operating system using its GUI. However, with the release of macOS (which
is in part implemented using a UNIX kernel), the operating system now provides both
an Aqua GUI and a command-line interface.
Almost all users of mobile systems interact with their devices using the
touch-screen interface. The user interface can vary from system to system and even
from user to user within a system; however, it typically is substantially removed from
the actual system structure. The design of a useful and intuitive user interface is
therefore not a direct function of the operating system.
1.10 System calls
Example
Let’s use an example to learn how system calls are used: writing a simple
program to read data from one file and copy them to another file. The first input that
the program will need is the names of the two files: the input file and the output file.
cp in.txt out.txt
This command copies the input file in.txt to the output file out.txt. A second approach
is for the program to ask the user for the names.
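A minimal sketch of this copy program using the POSIX system calls open(), read(), write(), and close(); error handling is abbreviated:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    char buf[4096];
    ssize_t n;

    if (argc != 3) {
        fprintf(stderr, "usage: %s infile outfile\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);                 /* open the input file */
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0)       /* read a block ... */
        write(out, buf, n);                           /* ... and write it out */
    close(in);
    close(out);
    return 0;
}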
1.10.1 Application Programming Interface
Even simple programs may make heavy use of the operating system.
Frequently, systems execute thousands of system calls per second. Most
programmers never see this level of detail, however. Typically, application developers
design programs according to an application programming interface (API). The
API specifies a set of functions that are available to an application
programmer, including the parameters that are passed to each function and
the return values the programmer can expect.
Figure 1.20 - The handling of a user application invoking the open() system call
Another important factor in handling system calls is the run-time
environment (RTE)— the full suite of software needed to execute applications
written in a given programming language, including its compilers or interpreters as
well as other software, such as libraries and loaders. The RTE provides a system-call
interface that serves as the link to system calls made available by the operating
system. The system-call interface intercepts function calls in the API and invokes the
necessary system calls within the operating system.
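To make the layering concrete, here is a hedged illustration: the C library function printf() is an API call that, on most UNIX-like systems, ultimately issues the write() system call, which a program may also invoke directly:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* API level: the C library formats the text and eventually
       issues a write() system call on file descriptor 1. */
    printf("hello via the C library\n");
    fflush(stdout);

    /* System-call level: invoke write() directly. */
    write(STDOUT_FILENO, "hello via write()\n", 18);
    return 0;
}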
Reference Video
System Calls
https://youtu.be/lhToWeuWWfw
1.10.2 Types of System Calls
System calls can be grouped roughly into six major categories:
process control, file management, device management,
information maintenance, communications, and protection.
1.11 System Services
Another aspect of a modern system is its collection of system services. At
the lowest level is hardware. Next is the operating system, then the system
services, and finally the application programs. System services, also known as system
utilities, provide a convenient environment for program development and execution.
Some of them are simply user interfaces to system calls. Others are considerably
more complex. They can be divided into these categories:
File management. These programs create, delete, copy, rename, print, list, and
generally access and manipulate files and directories.
Status information. Some programs simply ask the system for the date, time,
amount of available memory or disk space, number of users, or similar status
information. Others are more complex, providing detailed performance, logging, and
debugging information. Typically, these programs format and print the output to the
terminal or other output devices or files or display it in a window of the GUI. Some
systems also support a registry, which is used to store and retrieve configuration
information.
File modification. Several text editors may be available to create and modify the
content of files stored on disk or other storage devices. There may also be special
commands to search contents of files or perform transformations of the text.
1.12.1 Design Goals
The first problem in designing a system is to define goals and specifications. At the
highest level, the design of the system will be affected by the choice of hardware and
the type of system: traditional desktop/laptop, mobile, distributed, or real time.
Beyond this highest design level, the requirements may be much harder to specify.
The requirements can be divided into two basic groups: user goals and system
goals.
Users want the system to be convenient to use, easy to learn, reliable, safe, and fast; system designers want it to be easy to design, implement, and maintain, as well as flexible, reliable, error free, and efficient. These specifications are not particularly useful in the system design, since there is no general agreement on how to achieve them.
There is, in short, no unique solution to the problem of defining the requirements
for an operating system. The wide range of systems in existence shows that different
requirements can result in a large variety of solutions for different environments.
1.12.2 Mechanisms and Policies
Mechanisms determine how to do something; policies determine what will be done.
• Policies are likely to change across places or over time. In the worst case, each change in policy would require a change in the underlying mechanism.
Whenever the question is how rather than what, it is a mechanism that must be determined.
1.12.3 Implementation
Android provides a nice example: its kernel is written mostly in C with some
assembly language. Most Android system libraries are written in C or C++, and its
application frameworks—which provide the developer interface to the system—are
written mostly in Java.
Reference Video
1.13.1 Operating-System Generation
Operating systems usually come preinstalled when a computer is purchased. But suppose you wish to replace the preinstalled operating system or add additional operating systems, or suppose you purchase a computer without an operating system. Then there are a few options for placing the appropriate operating system on the computer and configuring it for use.
1. Write the operating system source code (or obtain previously written source code).
2. Configure the operating system for the system on which it will run.
Configuring the system involves specifying which features will be included, and this varies by operating system. Typically, parameters describing how the system is configured are stored in a configuration file of some type, and once this file is created, it can be used in several ways.
To build a Linux system from scratch, it is typically necessary to perform the following steps:
1. Download the Linux source code (for example, from http://www.kernel.org).
2. Configure the kernel using the "make menuconfig" command. This step generates the .config configuration file.
3.Compile the main kernel using the “make” command. The make command
compiles the kernel based on the configuration parameters identified in the .config
file, producing the file vmlinuz, which is the kernel image.
4.Compile the kernel modules using the “make modules” command. Just as with
compiling the kernel, module compilation depends on the configuration parameters
specified in the .config file.
5. Use the command "make modules_install" to install the kernel modules into vmlinuz.
6. Install the new kernel on the system by entering the “make install” command.
When the system reboots, it will begin running this new operating system.
An alternative is to install Linux as a virtual machine. This will allow the host operating system (such as Windows or macOS) to run a guest Linux system:
1. Download a Linux distribution ISO image (Ubuntu, for example).
2. Instruct the virtual machine software VirtualBox to use the ISO as the bootable medium and boot the virtual machine.
3. Answer the installation questions and then install and boot the operating system as a virtual machine.
1.13.2 System Boot
The process of starting a computer by loading the kernel is known as booting the system. Most systems follow this procedure:
1. A small piece of code known as the bootstrap program or boot loader locates the kernel.
2. The kernel is loaded into memory and started.
3. The kernel initializes hardware.
4. The root file system is mounted.
The program stored in the boot block may be sophisticated enough to load the entire operating system into memory and begin its execution. More typically, it is simple code (as it must fit in a single disk block) and knows only the address on disk and the length of the remainder of the bootstrap program.
Whether booting from BIOS or UEFI, the bootstrap program can perform a variety of
tasks. In addition to loading the file containing the kernel program into memory, it
also runs diagnostics to determine the state of the machine — for example,
inspecting memory and the CPU and discovering devices.
If the diagnostics pass, the program can continue with the booting steps.
The bootstrap can also initialize all aspects of the system, from CPU registers to
device controllers and the contents of main memory. Sooner or later, it starts the operating system and mounts the root file system. Only at this point is the system said to be running.
As an example, the following are kernel parameters from the special Linux
file /proc/cmdline, which is used at boot time:
BOOT_IMAGE is the name of the kernel image to be loaded into memory, and root specifies a unique identifier of the root file system.
To save space as well as decrease boot time, the Linux kernel image is a
compressed file that is extracted after it is loaded into memory. During the boot
process, the boot loader typically creates a temporary RAM file system, known as
initramfs.
The status of the current activity of a process is represented by the value of the
program counter and the contents of the processor’s registers. The memory layout
of a process is typically divided into multiple sections, and is shown in figure 1.21.
Reference Video
Structure of Process
https://youtu.be/grriYn6v76g
2. Process State
As a process executes, it changes state. A process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
The state diagram corresponding to these states is shown in Figure 1.22 below.
Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process, including these:
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this process.
• CPU registers. The registers vary in number and type, depending on the computer architecture. Along with the program counter, this state information must be saved when an interrupt occurs, so the process can be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include the values of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
The PCB simply serves as the repository for all the data needed to start,
or restart, a process, along with some accounting data.
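A PCB can be pictured as a C structure collecting these fields. The sketch below is purely illustrative; field names and types are placeholders, not any real kernel's definition:

/* Illustrative PCB layout (all fields are placeholders). */
struct pcb {
    int   state;                /* new, ready, running, waiting, ... */
    long  program_counter;      /* address of the next instruction */
    long  registers[16];        /* saved CPU registers */
    int   priority;             /* CPU-scheduling information */
    void *page_table;           /* memory-management information */
    long  cpu_time_used;        /* accounting information */
    int   open_files[20];       /* I/O status information */
    struct pcb *next;           /* link field for scheduling queues */
};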
The process scheduler selects an available process (possibly from a set of several available processes) for program execution on a core. Each CPU core can run one process at a time. For a system with a single CPU core, there will never be more than one process running at a time, whereas a multicore system can run multiple processes at one time. If there are more processes than cores, excess processes will have to wait until a core is free and can be rescheduled. The number of processes currently in memory is known as the degree of multiprogramming.
Most processes can be described as either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations; a CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations.
As processes enter the system, they are put into a ready queue, where they are ready and waiting to execute on a CPU's core. This queue is generally stored as a linked list; a ready-queue header contains pointers to the first PCB in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue.
When a process is allocated a CPU core, it executes for a while and eventually
terminates, is interrupted, or waits for the occurrence of a particular event, such
as the completion of an I/O request. Suppose the process makes an I/O request to
a device such as a disk. Since devices run significantly slower than processors, the
process will have to wait for the I/O to become available. Processes that are waiting for
a certain event to occur — such as completion of I/O — are placed in a wait queue.
(figure 1.24).
The circles represent the resources that serve the queues, and the arrows indicate the
flow of processes in the system.
A new process is initially put in the ready queue. It waits there until it is selected for
execution, or dispatched. Once the process is allocated a CPU core and is executing,
one of several events could occur:
The process could issue an I/O request and then be placed in an I/O wait
queue.
The process could create a new child process and then be placed in a wait
queue while it awaits the child’s termination.
The process could be removed forcibly from the core, as a result of an
interrupt or having its time slice expire, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to
the ready state and is then put back in the ready queue. A process
continues this cycle until it terminates, at which time it is removed from all queues
and has its PCB and resources deallocated.
A process migrates among the ready queue and various wait queues throughout its lifetime.
The role of the CPU scheduler is to select from among the processes that are in the ready queue and allocate a CPU core to one of them. The CPU scheduler must select a new process for the CPU frequently: an I/O-bound process may execute for only a few milliseconds before waiting for an I/O request. Although a CPU-bound process will require a CPU core for longer durations, the scheduler is unlikely to grant the core to a process for an extended period. Instead, it is likely designed to forcibly remove the CPU from a process and schedule another process to run. Therefore, the CPU scheduler executes at least once every 100 milliseconds, although typically much more frequently.
Some operating systems have an intermediate form of scheduling, known as swapping, whose key idea is that it can sometimes be advantageous to remove a process from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is known as swapping because a process can be "swapped out" from memory to disk, where its current status is saved, and later "swapped in" from disk back to memory, where its status is restored. Swapping is typically only necessary when memory has been overcommitted and must be freed up.
When an interrupt occurs, the system needs to save the current context of the process running on the CPU core so that it can restore that context when its processing is done, essentially suspending the process and then resuming it. The context is represented in the PCB of the process. It includes the value of the CPU registers, the process state, and memory-management information.
Switching the CPU core to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Switching speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).
Reference Video
Context Switching
https://youtu.be/w_YCKF323ns
1.16 OPERATIONS ON PROCESSES
The processes in most systems can execute concurrently, and they may be created and deleted dynamically. Thus, these systems must provide a mechanism for process creation and termination.
1. Process Creation
A process may create several new processes. The creating process is called a parent
process, and the new processes are called the children of that process.
Each of these new processes may in turn create other processes, forming a tree of
processes.
Most operating systems (including UNIX, Linux, and Windows) identify processes according to a unique process identifier (or pid), which is typically an integer number. The pid provides a unique value for each process in the system, and it can be used as an index to access various attributes of a process within the kernel.
Once the system has booted, the init process can also create various other
user processes. The children of init are kthreadd and sshd.
In general, when a process creates a child process, that child
process will need certain resources (CPU time, memory, files, I/O
devices) to accomplish its task.
A child process may be able to obtain its resources directly from
the operating system, or it may be constrained to a subset of the resources
of the parent process. The parent may have to partition its resources among its
children, or it may be able to share some resources (such as memory or files)
among several of its children. When a process creates a new process, two
possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as the parent).
2. The child process has a new program loaded into it.
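On UNIX systems these two cases correspond to the fork() and exec() system calls. A minimal sketch using fork(), execlp(), and wait(); the choice of /bin/ls is arbitrary:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* create a duplicate child process */

    if (pid < 0) {                     /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {             /* child: load a new program */
        execlp("/bin/ls", "ls", NULL);
        perror("execlp");              /* reached only if exec fails */
        return 1;
    } else {                           /* parent: wait for the child */
        wait(NULL);
        printf("child complete\n");
    }
    return 0;
}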
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit() system call.
At that point, the process may return a status value (typically an integer) to its
parent process (via the wait() system call).
All the resources of the process—including physical and virtual memory, open files,
and I/O buffers are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety
of reasons, such as these:
The child has exceeded its usage of some of the resources that it has been
allocated.
The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates.
Some systems do not allow a child to exist if its parent has terminated.
In such systems, if a process terminates (either normally or abnormally), then
all its children must also be terminated. This phenomenon, referred to as
cascading termination, is normally initiated by the operating system.
To illustrate process execution and termination, consider that, in Linux and UNIX
systems, we can terminate a process by using the exit() system call, providing an exit
status as a parameter:
exit(1);
When a process terminates, its resources are deallocated by the operating system.
However, its entry in the process table must remain there until the parent calls wait(),
because the process table contains the process’s exit status.
A process that has terminated, but whose parent has not yet called wait(), is known as
a zombie process.
If a parent does not invoke wait() and instead terminates, its child processes are left as orphans.
Traditional UNIX systems addressed this scenario by assigning the init process as the
new parent to orphan processes.
The init process periodically invokes wait(), thereby allowing the exit status of any
orphaned process to be collected and releasing the orphan’s process identifier and
process-table entry.
1.16.3 Android Process Hierarchy
Because of resource constraints such as limited memory, mobile operating systems such as Android have identified an importance hierarchy of processes, and when the system must terminate a process to make resources available for a new, or more important, process, it terminates processes in order of increasing importance. Two of these classifications are:
• Visible process—A process that is not directly visible on the foreground but that is performing an activity that the foreground process is referring to (that is, a process performing an activity whose status is displayed on the foreground process).
• Empty process—A process that holds no active components associated with any application.
There are several reasons for providing an environment that allows process cooperation:
• Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others.
• Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
• Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.
The two communication models, shared memory and message passing, are shown in Figure 1.28 below.
Shared memory can be faster than message passing, since message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention. In shared-memory systems, system calls are required only to establish shared memory regions. Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.
To illustrate cooperating processes, consider the producer-consumer problem: a producer process produces information that is consumed by a consumer process. One solution uses shared memory. To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. This buffer will reside in a region of memory that is shared by the producer and consumer processes.
A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out.
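The shared declarations implied by this description can be written as follows; the item type is a placeholder:

#define BUFFER_SIZE 10

typedef struct {
    /* contents of one item */
} item;

item buffer[BUFFER_SIZE];   /* circular array shared by both processes */
int in = 0;                 /* next free position */
int out = 0;                /* first full position */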
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
The code for the producer process is shown above, and the code for the consumer process is shown below. The producer process has a local variable next_produced in which the new item to be produced is stored. The consumer process has a local variable next_consumed in which the item to be consumed is stored.
item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
A message-passing facility provides at least two operations:
send(message)    receive(message)
Naming
Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication. Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:
send(P, message)—Send a message to process P.
receive(Q, message)—Receive a message from process Q.
This scheme exhibits symmetry in addressing; that is, both the sender process and
the receiver process must name the other to communicate. A variant of this scheme
employs asymmetry in addressing. Here, only the sender names the recipient; the
recipient is not required to name the sender. In this scheme, the send() and receive()
primitives are defined as follows:
send(P, message)—Send a message to process P.
receive(id, message)—Receive a message from any process. The variable id is set to the name of the process with which communication has taken place.
With indirect communication, the messages are sent to and received from
mailboxes, or ports. A mailbox can be viewed abstractly as an object into which
messages can be placed by processes and from which messages can be removed.
Each mailbox has a unique identification. A process can communicate with
another process via a number of different mailboxes, but two processes can
communicate only if they have a shared mailbox.
Now suppose that processes P1, P2, and P3 all share mailbox A. Process P1 sends
a message to A, while both P2 and P3 execute a receive() from A. Which process
will receive the message sent by P1? The answer depends on which of the
following methods we choose:
• Allow a link to be established with at most two processes.
• Allow at most one process at a time to execute a receive() operation.
• Allow the system to select arbitrarily which process will receive the message (that is, either P2 or P3, but not both, will receive the message). The system may define an algorithm for selecting which process will receive the message.
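As a concrete example of indirect communication through a mailbox, here is a hedged System V message-queue sketch using msgget(), msgsnd(), and msgrcv(); the key 1234 and the message layout are illustrative:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf {
    long mtype;          /* message type, must be > 0 */
    char mtext[64];      /* message body */
};

int main(void) {
    /* The mailbox: a kernel message queue named by an illustrative key. */
    int qid = msgget((key_t)1234, IPC_CREAT | 0666);
    struct msgbuf msg = { .mtype = 1 };

    strcpy(msg.mtext, "hello");
    msgsnd(qid, &msg, sizeof msg.mtext, 0);        /* send(A, message) */
    msgrcv(qid, &msg, sizeof msg.mtext, 1, 0);     /* receive(A, message) */
    printf("received: %s\n", msg.mtext);
    return 0;
}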
Synchronization
Message passing may be either blocking (synchronous) or nonblocking (asynchronous):
• Blocking send. The sending process is blocked until the message is received by the receiving process or by the mailbox.
• Nonblocking send. The sending process sends the message and resumes operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a null.
Buffering
Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:
• Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue, and the sender can continue execution without waiting. The link's capacity is finite, however. If the link is full, the sender must block until space is available in the queue.
• Unbounded capacity. The queue's length is potentially infinite; thus, any number of messages can wait in it, and the sender never blocks.
The zero-capacity case is sometimes referred to as a message system with no buffering. The other cases are referred to as systems with automatic buffering.
Assignment
Give a short description of the version, year of publishing, commercial/open-source status, features, processor configuration, and memory settings (with a screenshot) for the operating system installed in your:
I) Laptop/PC
PART -A
When a block of data is fetched into the cache to satisfy a single memory reference, it is likely that many of the near-future memory references will be to other bytes in the block. This phenomenon is called locality of reference.
A system call is a routine which acts as an interface between the user mode and
the kernel mode.
In Multiprogramming when one job needs to wait for I/O, the processor
can switch to the other job.
Multiprogramming allows using the CPU effectively by allowing various users to use
the CPU and I/O devices effectively. Multiprogramming makes sure that the CPU
always has something to execute, thus increases the CPU utilization. On the other
hand, Time sharing is the sharing of computing resources among several users at
the same time. The CPU executes multiple jobs by switching among them, but the
switches occur so frequently that the users can interact with each program while it
is running.
16) Why API’s need to be used rather than system calls? (CO1,K2)
System calls differ from platform to platform. By using an API, it is easier to migrate your software to different platforms.
The API usually provides more useful functionality than the system call directly. For example, the 'fork' API includes a great deal of code beyond just making the 'fork' system call; so does 'select'.
The API can support multiple versions of the operating system and detect which version it
needs to use at run time.
System programs can be thought of as bundles of useful system calls. They provide basic functionality to users so that users do not need to write their own programs to solve common problems.
As a process executes, it changes state. A process may be in one of the following states: new, running, waiting, ready, or terminated.
Each process is represented in the operating system by a process control block (PCB)—
also called a task control block. A PCB includes the following:
Process state
Program counter
CPU registers
CPU-scheduling information
Memory-management information
Accounting information.
A thread is a lightweight unit of execution. It is the basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. A single thread of control allows the process to perform only one task at a time.
Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context switch.
Some systems do not allow a child to exist if its parent has terminated. In such systems, if a process terminates, then all its children must also be terminated. This phenomenon is referred to as cascading termination.
When a process terminates, its resources are deallocated by the operating system.
However, its entry in the process table must remain there until the parent calls wait(). A
process that has terminated, but whose parent has not yet called wait(), is known as a
zombie process.
7. Explain about the shared memory model and Message passing system? (13)
(CO1,K2)
14. Write short notes about Protection and Security of the operating system.
(CO1,K2)
S No  Course title                                       Course provider  Link
1     Operating Systems from scratch - Part 1            Udemy            https://www.udemy.com/course/operating-systems-from-scratch-part1/
2                                                        Udacity          https://www.udacity.co
3     Operating Systems and You: Becoming a Power User   Coursera         https://www.coursera.org/learn/os-power-user
4     Computer Hardware and Operating Systems            edX              https://www.edx.org/course/computer-hardware-and-operating-systems
REAL LIFE APPLICATIONS IN DAY TO DAY LIFE AND TO INDUSTRY
2. Explain the role of an operating system in providing real time information on stock markets. (K4, CO1)
https://www.youtube.com/watch?v=NYBKXzl5bWU
ASSESSMENT SCHEDULE

S.NO  Name of the Assessment       Start Date   End Date     Portion
1     FIRST INTERNAL ASSESSMENT    27.02.2023   04.03.2023   UNIT 1 & 2
2     SECOND INTERNAL ASSESSMENT   18.04.2023   25.04.2023   UNIT 3 & 4
3     MODEL EXAMINATION            11.05.2023   20.05.2023   ALL 5 UNITS
4     END SEMESTER EXAMINATION     01.06.2023   15.06.2023   ALL 5 UNITS
PRESCRIBED TEXT BOOKS AND REFERENCE BOOKS
TEXT BOOKS
1. Abraham Silberschatz, Peter B. Galvin, Greg Gagne, "Operating System Concepts", Tenth Edition, Wiley, 2018. [EBOOK]
REFERENCE BOOKS
1. William Stallings, "Operating Systems: Internals and Design Principles", Pearson Education, New Delhi, 2018.
2. Achyut S. Godbole, Atul Kahate, "Operating Systems", McGraw Hill Education, 2016.
3. Andrew S. Tanenbaum, "Modern Operating Systems", 4th Edition, PHI Learning, New Delhi, 2018.
MINI PROJECT SUGGESTIONS
Disclaimer:
This document is confidential and intended solely for the educational purpose of RMK Group of
Educational Institutions. If you have received this document through email in error, please notify the
system manager. This document contains proprietary information and is intended only to the
respective group / learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender immediately by e-mail if you
have received this document by mistake and delete this document from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking any action in
reliance on the contents of this information is strictly prohibited.