Unit One
OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair resource use
OS is a control program
Controls execution of programs to prevent errors and improper use of the computer
Process Management Activities
The operating system is responsible for the following activities in connection with process management:
Creating and deleting both user and system processes
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Providing mechanisms for deadlock handling
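The first two activities above can be sketched from user code on a Unix-like OS (an assumption; the text does not name a platform): the parent creates a child process, the child runs and terminates, and the parent waits to reap it.

```python
import os

# Sketch (Unix-like OS assumed): process creation and deletion as seen
# from a user program.
pid = os.fork()                      # OS creates a new (child) process
if pid == 0:
    os._exit(7)                      # child: terminate with status 7
_, status = os.waitpid(pid, 0)       # parent: wait until the child is deleted
child_exit = os.WEXITSTATUS(status)  # recover the child's exit status
```

`fork()` duplicates the calling process; the parent's `waitpid()` is what lets the OS finally remove the finished child's entry.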
Memory Management
To execute a program, all (or part) of the instructions and data it needs must be in
memory.
Memory management determines what is in memory and when, thus optimizing CPU
utilization and the computer's response to users.
Memory management activities
Keeping track of which parts of memory are currently being used and by whom
Deciding which processes (or parts thereof) and data to move into and out of memory
Allocating and deallocating memory space as needed
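The bookkeeping described above can be illustrated with a toy first-fit allocator (an illustrative sketch, not how a real OS implements it): it tracks which byte ranges of a fixed memory are in use and by which process, and reuses holes left by freed regions.

```python
# Toy first-fit allocator sketch: track which parts of "memory" are
# used and by whom, allocate and deallocate space as needed.
MEMORY_SIZE = 100
allocations = {}   # name -> (start, size) of regions currently in use

def allocate(name, size):
    """First fit: scan addresses for the first gap big enough for `size`."""
    start = 0
    for s, sz in sorted(allocations.values()):
        if start + size <= s:        # gap before this region fits
            break
        start = s + sz               # skip past the occupied region
    if start + size > MEMORY_SIZE:
        return None                  # no room: caller must wait
    allocations[name] = (start, size)
    return start

def free(name):
    del allocations[name]            # deallocate the region

a = allocate("procA", 40)   # placed at address 0
b = allocate("procB", 40)   # placed at address 40
free("procA")
c = allocate("procC", 30)   # reuses the freed hole at address 0
```

The `name` keys and sizes are made up for illustration; the point is the use/free bookkeeping, not the policy.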
Storage Management
OS provides uniform, logical view of information storage
Abstracts physical properties to logical storage unit - file
Each medium is controlled by a device (e.g., disk drive, tape drive). Varying properties include
access speed, capacity, data-transfer rate, and access method (sequential or random)
File-System management
Files usually organized into directories
Access control on most systems to determine who can access what
OS activities include
Creating and deleting files and directories
Primitives to manipulate files and directories
Mapping files onto secondary storage
Backup files onto stable (non-volatile) storage media
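The create/delete and mapping activities above correspond to a handful of OS primitives that user programs invoke; a minimal sketch (assuming a Unix-like OS; the temporary path is chosen by the library):

```python
import os
import tempfile

# Sketch of file-system primitives: create a directory and a file,
# write and read back, then delete both.
base = tempfile.mkdtemp()                      # create a directory
path = os.path.join(base, "notes.txt")         # hypothetical file name
fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # create the file
os.write(fd, b"hello")
os.close(fd)
exists_before = os.path.exists(path)

fd = os.open(path, os.O_RDONLY)                # read the data back
data = os.read(fd, 16)
os.close(fd)

os.unlink(path)                                # delete the file
os.rmdir(base)                                 # delete the directory
exists_after = os.path.exists(path)
```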
Device Management
Usually disks are used to store data that does not fit in main memory or data that must be
kept for a “long” period of time
Proper management is of central importance
Entire speed of computer operation hinges on disk subsystem and its algorithms
OS activities:
Free-space management
Storage allocation
Disk scheduling
Some storage need not be fast. Tertiary storage includes optical storage and magnetic tape
o Still must be managed – by OS or applications
o Varies between WORM (write-once, read-many-times) and RW (read-write)
Operating systems provide an environment for the execution of programs, and services to programs and
users
One set of operating-system services provides functions that are helpful to the user:
User interface - Almost all operating systems have a user interface (UI).
Varies between Command-Line Interface (CLI), Graphical User Interface (GUI), and Batch
Program execution - The system must be able to load a program into memory and to run that
program, and to end execution, either normally or abnormally (indicating an error)
I/O operations - A running program may require I/O, which may involve a file or an I/O
device
File-system manipulation - Programs need to read and write files and directories, create and
delete them, search them, list file information, and manage permissions.
Communications – Processes may exchange information, on the same computer or between
computers over a network. Communication may be via shared memory or through message
passing (packets moved by the OS)
Error detection – The OS needs to be constantly aware of possible errors. They may occur in the
CPU and memory hardware, in I/O devices, and in user programs
o For each type of error, OS should take the appropriate action to ensure correct and
consistent computing
o Debugging facilities can greatly enhance the user’s and programmer’s abilities to
efficiently use the system
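The communications service above (message passing, with the OS moving the packets) can be sketched with a Unix pipe between two processes (assuming a Unix-like OS; the message content is arbitrary):

```python
import os

# Message-passing sketch: two processes exchange data with no shared
# memory -- the kernel carries the bytes between them through a pipe.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                 # child: the sending process
    os.close(r)
    os.write(w, b"ping")     # the OS moves these bytes to the reader
    os.close(w)
    os._exit(0)
os.close(w)                  # parent: the receiving process
msg = os.read(r, 16)
os.close(r)
os.waitpid(pid, 0)           # reap the finished child
```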
Another set of OS functions exists for ensuring the efficient operation of the system itself via resource
sharing
Resource allocation - When multiple users or multiple jobs are running concurrently, resources
must be allocated to each of them. There are many types of resources - CPU cycles, main memory, file
storage, I/O devices.
Accounting - To keep track of which users use how much and what kinds of computer
resources
Protection and security - The owners of information stored in a multiuser or networked
computer system may want to control use of that information, and concurrent processes should
not interfere with each other
o Protection involves ensuring that all access to system resources is controlled
o Security of the system from outsiders requires user authentication, extends to
defending external I/O devices from invalid access attempts
System Components
• Process Management
A process is a single instance of a program in execution. Many processes can be
running the same program. The five major activities of an operating system with regard to
process management are:
• Creation and deletion of user and system processes.
• Suspension and resumption of processes.
• A mechanism for process synchronization.
• A mechanism for process communication.
• A mechanism for deadlock handling.
• Main-Memory Management
Main memory is a large array of words or bytes, each with its own address. Main
memory is a repository of quickly accessible data shared by the CPU and I/O devices. The major
activities of an operating system with regard to memory management are:
Keep track of which parts of memory are currently being used and by whom.
Decide which processes are loaded into memory when memory space becomes
available.
Allocate and deallocate memory space as needed.
• File Management
A file is a collection of related information defined by its creator. Computers can store files on the disk
(secondary storage), which provides long-term storage.
• The creation and deletion of files.
• The creation and deletion of directories.
• The support of primitives for manipulating files and directories.
• The mapping of files onto secondary storage.
• The backup of files on stable storage media.
• Networking
A distributed system is a collection of processors that do not share memory, peripheral
devices, or a clock. The processors communicate with one another through communication
lines called a network.
• Protection System
Protection refers to mechanism for controlling the access of programs, processes, or users to
the resources defined by a computer system.
A computer is a set of resources for the movement, storage, and processing of data and for the control
of these functions. The OS is responsible for managing these resources.
• The OS functions in the same way as ordinary computer software; that is, it is a
program or suite of programs executed by the processor.
• The OS frequently relinquishes control and must depend on the processor to
allow it to regain control.
Like other computer programs, the OS provides instructions for the processor. The key difference is in
the intent of the program. The OS directs the processor in the use of the other system resources and in
the timing of its execution of other programs. But in order for the processor to do any of these things,
it must cease executing the OS program and execute other programs. Thus, the OS relinquishes
control for the processor to do some “useful” work and then resumes control long enough to prepare
the processor to do the next piece of work.
A portion of the OS is in main memory. This includes the kernel, or nucleus, which contains the most
frequently used functions in the OS and, at a given time, other portions of the OS currently in use. The
remainder of main memory contains user programs and data. The allocation of this resource (main
memory) is controlled jointly by the OS and memory management hardware in the processor. The OS
decides when an I/O device can be used by a program in execution and controls access to and use of
files. The processor itself is a resource, and the OS must determine how much processor time is to be
devoted to the execution of a particular user program. In the case of a multiple-processor system, this
decision must span all of the processors.
Serial Processing
With the earliest computers, from the late 1940s to the mid-1950s, the programmer interacted directly
with the computer hardware; there was no OS. These computers were run from a console consisting of
display lights, toggle switches, some form of input device, and a printer. Programs in machine code
were loaded via the input device (e.g., a card reader). If an error halted the program, the error
condition was indicated by the lights. If the program proceeded to a normal completion, the output
appeared on the printer.
These early systems presented two main problems:
• Scheduling: Most installations used a hardcopy sign-up sheet to reserve computer time. Typically, a
user could sign up for a block of time in multiples of a half hour or so. A user might sign up for an
hour and finish in 45 minutes; this would result in wasted computer processing time. On the other
hand, the user might run into problems, not finish in the allotted time, and be forced to stop before
resolving the problem.
• Setup time: A single program, called a job, could involve loading the compiler plus the high-level
language program (source program) into memory, saving the compiled program (object program) and
then loading and linking together the object program and common functions. Each of these steps
could involve mounting or dismounting tapes or setting up card decks. If an error occurred, the
hapless user typically had to go back to the beginning of the setup sequence. Thus, a considerable
amount of time was spent just in setting up the program to run.
• Processor point of view: At a certain point, the processor is executing instructions from the portion
of main memory containing the monitor. These instructions cause the next job to be read into another
portion of main memory. Once a job has been read in, the processor will encounter a branch
instruction in the monitor that instructs the processor to continue execution at the start of the
user program. The processor will then execute the instructions in the user program until it encounters
an ending or error condition. Either event causes the processor to fetch its next instruction from the
monitor program.
Memory layout for a resident monitor:
Monitor: interrupt processing, device drivers, job sequencing, control language interpreter
Boundary
User program area
The monitor performs a scheduling function. A batch of jobs is queued up, and jobs are executed as
rapidly as possible, with no intervening idle time. The monitor improves job setup time as well. With
each job, instructions are included in a primitive form of job control language (JCL). This is a special
type of programming language used to provide instructions to the monitor.
The monitor, or batch operating system, is simply a computer program. It relies on the ability of the
processor to fetch instructions from various portions of main memory to alternately seize and
relinquish control. Certain other hardware features are also desirable:
Memory protection : Does not allow the memory area containing the monitor to be altered
Timer : Prevents a job from monopolizing the system
Privileged instructions : Certain machine-level instructions can only be executed by the
monitor
The processor spends a certain amount of time executing, until it reaches an I/O instruction. It must
then wait until that I/O instruction concludes before proceeding. This inefficiency is not necessary.
We know that there must be enough memory to hold the OS (resident monitor) and one user program.
Suppose that there is room for the OS and two user programs. When one job needs to wait for I/O, the
processor can switch to the other job, which is likely not waiting for I/O. Furthermore, we might
expand memory to hold three, four, or more programs and switch among all of them. This approach is
known as multiprogramming, or multitasking. It is the central theme of modern operating systems.
Multiprocessor Systems
Multiprocessor systems (also known as parallel systems or multicore systems) have two or more
processors in close communication, sharing the computer bus and sometimes the clock, memory, and
peripheral devices. Multiprocessor systems first appeared prominently in servers and have
since migrated to desktop and laptop systems. Recently, multiple processors have appeared on mobile
devices such as smart phones and tablet computers. Multiprocessor systems have three main
advantages:
1. Increased throughput. By increasing the number of processors, more work can be done in less
time. The speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple
processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts
working correctly. This overhead, plus contention for shared resources, lowers the expected gain from
additional processors. Similarly, N programmers working closely together do not produce N times the
amount of work a single programmer would produce.
2. Economy of scale. Multiprocessor systems can cost less than equivalent multiple single-processor
systems, because they can share peripherals, mass storage, and power supplies. If several programs
operate on the same set of data, it is cheaper to store those data on one disk and to have all the
processors share them than to have many computers with local disks and many copies of the data.
3. Increased reliability. If functions can be distributed properly among several processors, then the
failure of one processor will not halt the system, only slow it down. If we have ten processors and one
fails, then each of the remaining nine processors can pick up a share of the work of the failed
processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.
Increased reliability of a computer system is crucial in many applications. The ability to continue
providing service proportional to the level of surviving hardware is called graceful degradation. Some
systems go beyond graceful degradation and are called fault tolerant, because they can suffer a failure
of any single component and still continue operation.
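The claim in advantage 1 (speed-up with N processors is less than N) is commonly quantified by Amdahl's law, which the text does not name, so this is a supplementary sketch: if a fraction s of the work is inherently serial, the best speed-up with N processors is 1 / (s + (1 - s)/N).

```python
# Amdahl's law sketch (a standard model, not stated in the notes):
# serial_fraction s limits the speed-up achievable with n processors.
def speedup(serial_fraction, n):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# Hypothetical workload: 5% serial work on 10 processors.
ten_cpus = speedup(0.05, 10)   # well below the ideal factor of 10
```

Even a small serial fraction keeps the gain well under N, matching the overhead-and-contention argument above.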
The multiple-processor systems in use today are of two types:
Asymmetric multiprocessing : in which each processor is assigned a specific task. A boss processor
controls the system; the other processors either look to the boss for instruction or have predefined
tasks. This scheme defines a boss–worker relationship. The boss processor schedules and allocates
work to the worker processors.
Symmetric multiprocessing (SMP), in which each processor performs all tasks within the operating
system. SMP means that all processors are peers; no boss–worker relationship exists between
processors.
Simple Structure:
A system as large and complex as a modern operating system must be engineered carefully if it is to
function properly and be modified easily. A common approach is to partition the task into small
components, or modules, rather than have one monolithic system. Each of these modules should be
a well-defined portion of the system, with carefully defined inputs, outputs, and functions .
MS-DOS
– written to provide the most functionality in the least space
– not divided into modules
– Although MS-DOS has some structure, its interfaces and levels of functionality are
not well separated
UNIX
– Limited by hardware functionality, the original UNIX operating system had limited
structuring. The UNIX OS consists of two separable parts:
o Systems programs
o The kernel
– Consists of everything below the system-call interface and above the physical
hardware
– Provides the file system, CPU scheduling, memory management, and other operating-
system functions; a large number of functions for one level
Layered Approach
The operating system is divided into a number of layers (levels), each built on top of lower layers.
The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of only
lower-level layers
System Calls
• System calls provide the interface between a running program and the operating system.
– Generally available as assembly-language instructions.
– Languages defined to replace assembly language for systems programming allow
system calls to be made directly (e.g., C, Bliss, PL/360)
• Three general methods are used to pass parameters between a running program and the
operating system.
– Pass parameters in registers.
– Store the parameters in a table in memory, and the table address is passed as a
parameter in a register.
– The program pushes (stores) the parameters onto the stack, and the operating system
pops them off the stack
• System programs provide a convenient environment for program development and execution.
These can be divided into:
– File manipulation
– Status information
– File modification
– Programming language support
– Program loading and execution
– Communications
– Application programs
Most users’ view of the operating system is defined by system programs, not the actual system calls
Dual-Mode Operation
• Sharing system resources requires operating system to ensure that an incorrect program
cannot cause other programs to execute incorrectly.
• Provide hardware support to differentiate between at least two modes of operations.
1. User mode – execution done on behalf of a user.
2. Monitor mode (also supervisor mode or system mode) – execution done on behalf of
operating system
A mode bit is added to the computer hardware to indicate the current mode: monitor (0) or user (1).
When an interrupt or fault occurs, the hardware switches to monitor mode. Privileged instructions can be
issued only in monitor mode. All I/O instructions are privileged instructions. The OS must ensure that a
user program can never gain control of the computer in monitor mode (e.g., a user program that, as
part of its execution, stores a new address in the interrupt vector).
Considerations of memory protection and privileged instructions lead to the concept of modes of
operation. A user program executes in a user mode, in which certain areas of memory are protected
from the user’s use and in which certain instructions may not be executed. The monitor executes in a
system mode, or what has come to be called kernel mode, in which privileged instructions may be
executed and in which protected areas of memory may be accessed.
REENTRANT KERNEL :
A re-entrant program is one that does not modify itself or any global data. Multiple processes and
threads can execute re-entrant programs concurrently without interfering with one another. They can
share the re-entrant program but have their own private data.
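The distinction can be sketched with two hypothetical routines: a non-re-entrant one that mutates shared global state (so its result depends on hidden history), and a re-entrant one that works only on private data supplied by each caller.

```python
# Non-re-entrant: keeps state in shared global data.
_shared_last = 0                 # global state shared by all callers

def next_id_nonreentrant():
    global _shared_last
    _shared_last += 1            # concurrent callers would race here
    return _shared_last

# Re-entrant: touches only the caller's own (private) data.
def next_id_reentrant(last):
    return last + 1

first = next_id_nonreentrant()   # result depends on hidden history
second = next_id_nonreentrant()  # same call, different result
safe = next_id_reentrant(10)     # same input always gives same result
```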
A re-entrant kernel enables processes to give away the CPU while in kernel mode, not hindering
other processes from entering kernel mode.
If the kernel is not re-entrant, a process can only be suspended while it is in user mode (to be more
precise, it could also be suspended in kernel mode, but that would block kernel-mode execution for
all other processes).
The reason for this is that all kernel threads share the same memory, and corruption would
occur if execution jumped between them arbitrarily. If the kernel is re-entrant, several
processes may execute in kernel mode at the same time, even while one of them is suspended.
A typical use case is I/O wait. The process wants to read a file.
When the kernel is not re-entrant: the process calls a kernel function to read the file and blocks inside
that function. The disk controller is asked for the data. Getting the data takes some time, and the
function blocks during that time.
With a re-entrant kernel, the scheduler will assign the CPU to another process until the interrupt
from the disk controller indicates that the data is available and our thread can be resumed.
With this concept, the throughput of the system increases significantly.
A kernel that is not re-entrant needs to use a lock to make sure that no two processes are executing in
kernel mode at the same time.