UNIT 1
BASIC ELEMENTS
At the top level, a computer consists of processor, memory,
and I/O components, with one or more modules of each type.
These components are interconnected in some fashion to
achieve the main function of the computer, which is to
execute programs.
COMPUTER SYSTEM OVERVIEW
I/O modules:
Move data between the computer and its external
environment. The external environment consists of a variety of
devices, including secondary memory devices (e.g., disks),
communications equipment, and terminals.
System bus:
Provides for communication among processors, main
memory, and I/O modules.
OPERATING SYSTEM OBJECTIVES AND FUNCTIONS
1. Process Management
• A program is a set of logical instructions given to the
computer.
• A program that is in an execution state is called a process.
• A process needs certain resources, such as CPU time,
memory, files, and I/O devices, to accomplish its tasks. These
resources are allocated to the process either when it is
created or while it is executing.
• The operating system helps in the allocation of resources to
each process.
• Each process is allowed to use the CPU for a limited time. It
must then give up control and thus becomes suspended until
its next turn.
Functions of the Operating System
• To maximize CPU utilization and allow multiple processes to
run, process scheduling is performed by the OS.
• The operating system is responsible for creation, deletion,
and scheduling of various processes that are being executed
at any point of time.
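A rough sketch of these ideas from a POSIX shell (assuming only the standard sleep, kill, and wait utilities): the OS creates each process, tracks it, and suspends the parent in wait until the children finish.

```shell
# Start two processes; the OS allocates resources to each and schedules the CPU.
sleep 1 &
pid1=$!
sleep 1 &
pid2=$!

# kill -0 sends no signal; it only asks the kernel whether the process exists.
kill -0 "$pid1" && echo "process $pid1 is running"
kill -0 "$pid2" && echo "process $pid2 is running"

# wait suspends this shell until both child processes have finished.
wait "$pid1" "$pid2"
echo "both processes finished"
```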
2. Memory Management
• A computer program remains in main memory (RAM) during its
execution.
• To improve CPU utilization, several processes are kept in
memory and executed concurrently.
• The OS keeps track of every memory location, whether it is
assigned to some process or free.
• It also decides how much memory should be assigned to each
process.
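On Linux, the kernel's bookkeeping for each process's memory is visible under /proc (a sketch assuming a Linux system with procfs mounted; the field names come from the /proc/&lt;pid&gt;/status format):

```shell
# Start a process whose memory the kernel is now tracking.
sleep 1 &
pid=$!

# VmSize is the virtual memory assigned to the process;
# VmRSS is the portion currently resident in RAM.
grep -E '^Vm(Size|RSS)' "/proc/$pid/status"

wait "$pid"
```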
Functions of the Operating System
5. User Interface
6. Network Management
• An operating system is responsible for computer system
networking in a distributed environment.
• A distributed system is a collection of processors that do
not share memory, a clock, or peripheral devices.
• Each processor has its own clock and RAM, and the
processors communicate through a network.
• Access to shared resources permits increased speed,
increased functionality, and enhanced reliability.
• Various networking protocols are TCP/IP (Transmission
Control Protocol/Internet Protocol), UDP (User Datagram
Protocol), FTP (File Transfer Protocol), HTTP (Hypertext
Transfer Protocol), NFS (Network File System), etc.
Operating-System Services
Program development:
• File-System management
– Files are usually organized into directories
– Most systems provide access control to determine who can
access what
– OS activities include
• Creating and deleting files and directories
• Primitives to manipulate files and directories
• Mapping files onto secondary storage
• Backing up files onto stable (non-volatile) storage media
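Driven from the shell, each of these activities maps onto an OS file-system service (a sketch using standard POSIX utilities; the directory and file names are made up):

```shell
# Create a directory and a file inside it.
dir=$(mktemp -d)
echo "hello" > "$dir/notes.txt"

# Access control: only the owner may read or write the file.
chmod 600 "$dir/notes.txt"

# Copy the file, standing in for a backup onto stable storage.
cp "$dir/notes.txt" "$dir/notes.bak"

# Delete the files and then the directory itself.
rm "$dir/notes.txt" "$dir/notes.bak"
rmdir "$dir"
```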
Communications
The Evolution of OS
1. Serial Processing
• From the late 1940s to the mid-1950s, the programmer
interacted directly with the computer hardware; there was
no OS.
• These computers were run from a console consisting of
display lights, toggle switches, some form of input device,
and a printer.
• Programs in machine code were loaded via the input device
(e.g. a card reader).
• If an error halted the program, the error condition was
indicated by the lights.
• If the program proceeded to a normal completion, the out-
put appeared on the printer.
Scheduling:
• Most installations used a hardcopy sign-up sheet to reserve
computer time.
• A user could sign up for a block of time in multiples of a half
hour or so.
• A user might sign up for an hour and finish in 45 minutes;
this would result in wasted computer processing time.
• Or the user might run into problems, not finish in the allotted
time, and be forced to stop before resolving the problem.
Setup time:
• A single program, called a job, could involve loading the
compiler plus the high-level language program (source
program) into memory, saving the compiled program (object
program), and then loading and linking together the object
program and common functions.
• Each of these steps could involve mounting or dismounting
tapes or setting up card decks.
• If an error occurred, the hapless user typically had to go back
to the beginning of the setup sequence. Thus, a considerable
amount of time was spent just in setting up the program to
run.
2. Simple Batch Systems
• Memory protection:
While the user program is executing, it must not alter the
memory area containing the monitor.
If such an attempt is made, the processor hardware should
detect an error and transfer control to the monitor.
The monitor would then abort the job, print out an error
message, and load in the next job.
• Timer:
• A timer is used to prevent a single job from monopolizing
the system. The timer is set at the beginning of each job.
• If the timer expires, the user program is stopped, and
control returns to the monitor.
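The timer's effect can be mimicked in user space with the GNU coreutils timeout(1) utility (a sketch; timeout is not the batch monitor itself, but it stops a job in the same way when its time allotment expires):

```shell
# Give the "job" (sleep 10) a one-second time slice.
timeout 1 sleep 10
status=$?

# Exit status 124 means the timer expired and the job was stopped.
echo "job ended with status $status"
```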
Privileged instructions:
• Privileged instructions are machine-level instructions that
can be executed only by the monitor.
• If the processor encounters such an instruction while
executing a user program, an error occurs causing control to
be transferred to the monitor.
Interrupts:
• This feature gives the OS more flexibility in relinquishing
control to, and regaining control from, user programs.
Modes of operation
A user program executes in user mode, in which certain areas of
memory are protected from user access, and in which certain
instructions may not be executed.
The monitor executes in system mode (kernel mode), in which
privileged instructions may be executed, and in which protected areas
of memory may be accessed.
With a batch OS, processor time alternates between execution of
user programs and execution of the monitor. This involves two
sacrifices: some main memory is now given over to the monitor, and
some processor time is consumed by the monitor.
Both are forms of overhead; even so, the simple batch system
improves utilization of the computer.
3. Multiprogrammed Systems
Multiprogrammed Batch Systems
Even with the automatic job sequencing provided by a simple batch OS,
the processor is often idle. The problem is that I/O devices are slow
compared to the processor.
System Utilization Example
The computer spends over 96% of its time waiting for I/O devices to
finish transferring data to and from the file.
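The imbalance is easy to quantify with made-up but representative numbers: suppose a job computes for 1 time unit, then waits 24 units for each I/O transfer.

```shell
# Hypothetical timings: 1 unit of computation per 24 units of I/O wait.
compute=1
io_wait=24
total=$((compute + io_wait))

# Fraction of the time the processor sits idle waiting for I/O.
idle_pct=$((100 * io_wait / total))
echo "processor idle ${idle_pct}% of the time"
```

With these assumed numbers the processor is idle 96% of the time, matching the order of magnitude quoted above.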
Time-Sharing Systems
With the use of multiprogramming, batch processing can be quite
efficient. However, for many jobs it is desirable to provide a mode in
which the user interacts directly with the computer; hence an
interactive mode is essential.
Just as multiprogramming allows the processor to handle multiple
batch jobs at a time, it can also be used to handle multiple
interactive jobs. This technique is referred to as time sharing, because
processor time is shared among multiple users.
• In a time-sharing system, multiple users simultaneously access the
system through terminals, with the OS interleaving the execution of
each user program in a short burst or quantum of computation.
• If there are n users actively requesting service at one time, each
user will only see on the average 1/n of the effective computer
capacity, not counting OS overhead.
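The 1/n share is just arithmetic; with an illustrative number of users:

```shell
# With n users actively requesting service, each sees about 1/n of capacity.
n=5
share=$((100 / n))
echo "each of $n users sees about ${share}% of the CPU"
```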
1. Simple Structure
Important:
What is a kernel ?
• When our computer is running in kernel mode, all the
permissions are available.
• We can think of it as an administrator. In macOS, this is known
as giving ‘root’ access.
• In Windows, you invoke this by running applications as
‘administrator.’
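From a shell on a Unix-like system, id(1) shows which side of this privilege line the current user sits on (uid 0 is the superuser, i.e. root):

```shell
# The effective user id: 0 means root, with all permissions available.
uid=$(id -u)
if [ "$uid" -eq 0 ]; then
    echo "running with root (administrative) privileges"
else
    echo "running as an ordinary user (uid $uid)"
fi
```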
Working
Layer  Function
  5    The operator
  4    User programs
  3    Input/output management
  2    Operator-process communication
  1    Memory and drum management
  0    Processor allocation and multiprogramming
3. Layered System Structure in Operating Systems
Modular Structure
• Most UNIX kernels are monolithic, i.e., they include virtually
all of the OS functionality in one large block of code that runs
as a single process with a single address space.
• All the functional components of the kernel have access to all
of its internal data structures and routines.
• If changes are made to any portion of a typical monolithic OS,
all the modules and routines must be relinked and reinstalled,
and the system rebooted, before the changes can take effect.
• Hence, any modification, such as adding a new device driver
or file system function, is difficult.
• Linux is structured as a collection of modules, a number of
which can be automatically loaded and unloaded on demand;
these are referred to as loadable modules.
LINUX
1. Dynamic linking:
• A kernel module can be loaded and linked into the kernel
while the kernel is already in memory and executing.
• A module can also be unlinked and removed from memory at
any time.
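On a real Linux machine this loading and unloading is done with modprobe (root is required, so this sketch only prints the commands; the module name is hypothetical):

```shell
# Hypothetical module name; on a real system this would be e.g. a device driver.
module=dummy_driver

# modprobe links the module into the running kernel; -r unlinks and removes it.
echo "load:   sudo modprobe $module"
echo "unload: sudo modprobe -r $module"
```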
2. Stackable modules:
• The modules are arranged in a hierarchy.
• Individual modules serve as libraries when they are referenced
by client modules higher up in the hierarchy, and as clients
when they reference modules further down.
With stackable modules, dependencies between modules can
be defined. This has two benefits:
1. Code common to a set of similar modules (e.g., drivers for
similar hardware) can be moved into a single module,
reducing replication.
2. The kernel can make sure that needed modules are present,
refraining from unloading a module on which other running
modules depend, and loading any additional required modules
when a new module is loaded.
LAYERS IN LINUX SYSTEM
2. The Linux kernel: The core of the OS. It’s software residing in
memory that tells the CPU what to do. The kernel is responsible
for maintaining all the important abstractions of the operating
system, including such things as virtual memory and processes.
Graphical Shells
There are several shells available for Linux systems, such as –
Shell Scripting
• Shells are interactive: they accept commands as input from
the user and execute them.
• However, sometimes we want to execute a bunch of
commands routinely, which means typing all the commands
in the terminal each time.
• Since the shell can also take commands as input from a file,
we can write these commands in a file and execute them in
the shell to avoid this repetitive work.
• These files are called shell scripts or shell programs. Shell
scripts are similar to batch files in MS-DOS.
• Each shell script is saved with the .sh file extension, e.g.
myscript.sh
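A minimal example of such a script (the file name and the commands in it are arbitrary); saved as myscript.sh, it can be run with sh myscript.sh:

```shell
#!/bin/sh
# myscript.sh: a routine batch of commands collected in one file.
echo "Today is $(date +%A)."
count=$(ls /etc | wc -l)
echo "/etc contains $count entries."
```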
SHELL