
Chapter 1: Introduction to Operating System and Process Management

INTRODUCTION:
Operating System:
➢ A program that acts as an intermediary between a user of a computer and the computer hardware.
➢ Operating system goals
• Execute user programs and make solving user problems easier.
• Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.

Computer System Structure or Components:


Computer system can be roughly divided into four components
➢ Hardware – provides basic computing resources
• CPU, memory, I/O devices
➢ Operating system
• Controls and coordinates use of hardware among various applications and users
➢ Application programs – define the ways in which the system resources are used to solve the
computing problems of the users
• Word processors, compilers, web browsers, database systems, video games
➢ Users
• People, machines, other computers

Fig: Computer System Structure


Operating System Need
➢ OS is a resource allocator
• Manages all resources – The operating system acts as the manager of the resources (hardware
and software) and allocates them to specific programs and users as necessary for tasks.
• Decides between conflicting requests for efficient and fair resource use.
➢ OS is a control program
• Controls execution of programs to prevent errors and improper use of the computer.
➢ No universally accepted definition
• The purpose of an operating system is to provide an environment in which a user can execute
programs.
• The primary goal of an operating system is thus to make the computer system convenient to
use.
• A secondary goal is to use the computer hardware in an efficient manner.
➢ “Everything a vendor ships when you order an operating system” is a good approximation, but it
varies wildly.
➢ “The one program running at all times on the computer” is the kernel. Everything else is either a
system program (ships with the operating system) or an application program.
Simple Batch System
➢ The users of early systems did not interact directly with the computer systems.
➢ The user prepared a job—which consisted of the program, the data, and some-control information
about the nature of the job (control cards)—and submitted it to the computer operator.
➢ The job would usually be in the form of punch cards.
➢ At some later time (perhaps minutes, hours, or days), the output appeared.
➢ To speed up processing, jobs with similar needs were batched together and were run through the
computer as a group. Thus, the programmers would leave their programs with the operator. The
operator would sort programs into batches with similar requirements and, as the computer became
available, would run each batch. The output from each job would be sent back to the appropriate
programmer.
➢ A batch operating system, thus, normally reads a stream of separate jobs (from a card reader, for
example), each with its own control cards that predefine what the job does. When the job is complete,
its output is usually printed.

Fig: Memory layout of Simple Batch Operating System


Disadvantages
➢ It lacks the interaction between the user and the job while that job is executing. The job is prepared
and submitted, and at some later time, the output appears.
➢ In this execution environment, the CPU is often idle. This idleness occurs because the speeds of the
mechanical I/O devices are intrinsically slower than those of electronic devices.
➢ The difference in speed between the CPU and its I/O devices may be three orders of magnitude or
more. The introduction of disk technology has helped in this regard.
Spooling [simultaneous peripheral operation on-line]
Rather than being read directly into memory, cards are first copied from the card reader onto the
disk. The location of card images is recorded in a table kept by the operating system. When a job is
executed, the operating system satisfies its requests for card reader input by reading from the disk.
Similarly, when the job requests the printer to output a line, that line is copied into a system buffer and is
written to the disk. When the job is completed, the output is actually printed. This form of processing is
called spooling.
Spooling uses the disk as a huge buffer, for reading as far ahead as possible on input devices and
for storing output files until the output devices are able to accept them.

Fig: Spooling
Spooling overlaps the I/O of one job with the computation of other jobs. Even in a simple system,
the spooler may be reading the input of one job while printing the output of a different job. During this
time, still another job (or jobs) may be executed, reading their "cards" from disk and "printing" their
output lines onto the disk. Spooling has a direct beneficial effect on the performance of the system.
Multi Programmed Systems
➢ Spooling provides an important data structure: a job pool. Spooling will generally result in several
jobs that have already been read waiting on disk, ready to run. A pool of jobs on disk allows the
operating system to select which job to run next, to increase CPU utilization.
➢ A single user cannot, in general, keep either the CPU or the I/O devices busy at all times.
Multiprogramming increases CPU utilization by organizing jobs such that the CPU always has one to
execute.
➢ The operating system keeps several jobs in memory at a time as shown in the following figure.

Operating
System

Job1

Job2

Job3

Job4

Fig: Memory layout of Multiprogrammed Systems


➢ The operating system picks and begins to execute one of the jobs in the memory.
➢ Eventually, the job may have to wait for some task, such as a tape to be mounted, or an I/O operation
to complete.
➢ In a non multiprogrammed system, the CPU would sit idle.
➢ In a multiprogramming system, the operating system simply switches to and executes another job.
➢ When that job needs to wait, the CPU is switched to another job, and so on.
➢ Eventually, the first job finishes waiting and gets the CPU back.
➢ As long as there is always some job to execute, the CPU will never be idle.
Time Sharing Systems
➢ Time sharing, or multitasking, is a logical extension of multiprogramming.
➢ Multiple jobs are executed by the CPU switching between them, but the switches occur so frequently
that the users may interact with each program while it is running.
➢ The user gives instructions to the operating system or to a program directly, and receives an
immediate response.
➢ Time-sharing systems were developed to provide interactive use of a computer system at a reasonable
cost.
➢ A time-shared operating system uses CPU scheduling and multiprogramming to provide each user
with a small portion of a time-shared computer.
➢ A time-shared operating system allows the many users to share the computer simultaneously. Since
each action or command in a time-shared system tends to be short, only a little CPU time is needed
for each user.
➢ Performance can degrade when multiple programs run at the same time, because main memory
becomes heavily loaded and the CPU cannot give each program enough time, so response times
increase. The main cause of this problem is insufficient RAM capacity, so increasing the RAM is
one solution.
➢ As in multiprogramming, several jobs must be kept simultaneously in memory, which requires some
form of memory management and protection. So that a reasonable response time can be obtained,
jobs may have to be swapped in and out of main memory to the disk that now serves as a backing
store for main memory
➢ A common method for achieving this goal is virtual memory, which is a technique that allows the
execution of a job that may not be completely in memory.
Distributed Systems
➢ The processors do not share memory or a clock. Instead, each processor has its own local memory.
The processors communicate with one another through various communication lines, such as high-
speed buses or telephone lines. These systems are usually referred to as loosely coupled systems, or
distributed systems.
➢ The processors in a distributed system may vary in size and function. They may include small
microprocessors, workstations, minicomputers, and large general-purpose computer systems.
There is a variety of reasons for building distributed systems, the major ones being these,
• Resource sharing: If a number of different sites (with different capabilities) are connected to one
another, then a user at one site may be able to use the resources available at another. For example, a
user at site A may be using a laser printer available only at site B. Meanwhile, a user at B may access
a file that resides at A.
• Computation speedup: If a particular computation can be partitioned into a number of sub
computations that can run concurrently, then a distributed system may allow us to distribute the
computation among the various sites —to run that computation concurrently. In addition, if a
particular site is currently overloaded with jobs, some of them may be moved to other, lightly loaded,
sites. This movement of jobs is called load sharing.
• Reliability: If one site fails in a distributed system, the remaining sites can potentially continue
operating. If sufficient redundancy exists in the system (both hardware and data), the system can
continue with its operation, even if some of its sites failed.
• Communication: There are many instances in which programs need to exchange data with one
another on one system. When many sites are connected to one another by a communication network,
the processes at different sites have the opportunity to exchange information. Users may initiate file
transfers or communicate with one another via electronic mail. A user can send mail to another user at
the same site or at a different site.
Special Purpose Operating Systems
Real Time Systems
➢ A real-time system is used when there are rigid time requirements on the operation of a processor or
the flow of data, and thus is often used as a control device in a dedicated application.
➢ Sensors bring data to the computer. The computer must analyse the data and possibly adjust controls
to modify the sensor inputs.
➢ Systems that control scientific experiments, medical -imaging systems, industrial control systems, and
some display systems are real-time systems. Also included are some automobile-engine fuel-injection
systems, home-appliance controllers, and weapon systems. A real-time system has well defined, fixed
time constraints.
➢ Processing must be done within the defined constraints, or the system will fail.
There are two flavours of real-time systems.
➢ Hard Real Time Systems: A hard real-time system guarantees that critical tasks complete on time.
This goal requires that all delays in the system be bounded, from the retrieval of stored data to the
time that it takes the operating system to finish any request made of it.
• Secondary storage of any sort is usually limited or missing, with data instead being stored in
short term memory, or in read-only memory (ROM).
• Most advanced operating-system features are absent too, since they tend to separate the user
further from the hardware, and that separation results in uncertainty about the amount of time
an operation will take.
• The hard real-time systems conflict with the operation of time-sharing systems, and the two
cannot be mixed.
➢ Soft Real Time Systems: A less restrictive type of real-time system is a soft real-time system, where
a critical real-time task gets priority over other tasks, and retains that priority until it completes.
• Soft real time is an achievable goal that is amenable to mixing with other types of systems.
• Given their lack of deadline support, they are risky to use for industrial control and robotics.
• There are several areas in which they are useful, however, including multimedia, virtual
reality, and advanced scientific projects such as undersea exploration and planetary rovers.
• These systems need advanced operating-system features that cannot be supported by hard
real-time systems.
Handheld Systems
➢ A hand-held system refers to small portable devices that can be carried along and are capable of
performing normal operations. They are usually battery powered.
➢ Examples include Personal Digital Assistants (PDAs), mobile phones, palm-top computers, pocket-
PCs etc. As they are handheld devices, their weights and sizes have certain limitations as a result they
are equipped with small memories, slow processors and small display screens, etc.
➢ The physical memory capacity is very limited; hence the operating systems of these devices must
manage memory efficiently. Because the processors are slower (to conserve battery power), the
operating system should not burden them with heavy workloads.
Open-Source Operating System
➢ The term "open source" refers to computer software or applications where the owners or copyright
holders enable the users or third parties to use, see, and edit the product's source code. The source
code of an open-source OS is publicly visible and editable.
➢ Open-Source Software is licensed in such a way that it is permissible to produce as many copies as
you want and to use them wherever you like. It generally uses fewer resources than its commercial
counterpart because it lacks any code for licensing, promoting other products, authentication,
attaching advertisements, etc.
➢ The open-source operating system allows the use of code that is freely distributed and available to
anyone and for commercial purposes.
➢ Some basic examples of open-source operating systems are Linux, OpenSolaris, FreeRTOS,
OpenBSD, FreeBSD, Minix, etc.
Advantages
➢ Cost-efficient – Most open-source operating systems are free, and the rest are available at a much
lower cost than commercial closed products.
➢ Reliable and efficient – Because the source code is public, it is monitored by thousands of eyes, so
any vulnerability or bug is fixed by the best developers around the world.
➢ Flexibility – The great advantage is that you can customize it to your needs, and there is creative
freedom.
PROCESS MANAGEMENT

Process Concepts
➢ An operating system executes a variety of programs:
• Batch system – jobs
• Time-shared systems – user programs or tasks
➢ The terms job and process can be used almost interchangeably
➢ Process: a program in execution; process execution must progress in sequential fashion.
➢ A process is more than the program code, which is sometimes known as the text section.
➢ It also includes the current activity, as represented by the value of the program counter and the
contents of the processor's registers.
➢ A process generally also includes the process stack, which contains temporary data (such as function
parameters, return addresses, and local variables), and a data section, which contains global variables.
➢ A process may also include a heap, which is memory that is dynamically allocated during process run
time. The structure of a process in memory is shown in figure.

Fig: Process in Memory


➢ A program by itself is not a process; a program is a passive entity, such as a file containing a list of
instructions stored on disk (often called an executable file), whereas a process is an active entity, with
a program counter specifying the next instruction to execute and a set of associated resources.
➢ A program becomes a process when an executable file is loaded into memory.

Concurrent processing is a computing model in which multiple processors execute instructions
simultaneously for better performance. Tasks are broken into subtasks, which are then assigned to
different processors to be performed simultaneously, instead of sequentially as they would be on a
single processor. Concurrent processing is sometimes used synonymously with parallel processing.
Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process. Each process may be in one of the following states,
➢ new: The process is being created
➢ running: Instructions are being executed
➢ waiting: The process is waiting for some event to occur
➢ ready: The process is waiting to be assigned to a processor
➢ terminated: The process has finished execution

Fig: Diagram of Process State


Only one process can be running on any processor at any instant. Many processes may be ready
and waiting, however. The state diagram corresponding to these states is presented in above figure.
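The five states and the legal transitions of the diagram can be sketched as a small table; this is an illustrative model only (the `State` enum, `TRANSITIONS` map, and `can_move` helper are hypothetical names, not part of any real operating system API):

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions from the state diagram: admitted, scheduler dispatch,
# interrupt (preemption), I/O or event wait, event completion, and exit.
TRANSITIONS = {
    State.NEW: {State.READY},                   # admitted into the ready queue
    State.READY: {State.RUNNING},               # scheduler dispatch
    State.RUNNING: {State.READY,                # interrupt / time slice expires
                    State.WAITING,              # waits for I/O or an event
                    State.TERMINATED},          # exit
    State.WAITING: {State.READY},               # I/O or event completes
    State.TERMINATED: set(),                    # no further transitions
}

def can_move(src: State, dst: State) -> bool:
    """Return True if the state diagram permits the transition."""
    return dst in TRANSITIONS[src]
```

Note that a waiting process never moves directly back to running; it must first return to the ready queue and be dispatched again.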

Process Control Block/ Task Control Block


➢ Each process is represented in the operating system by a process control block (PCB) also called as
task control block.
➢ It contains many pieces of information associated with a specific process, including -Process state,
Program counter, CPU registers, CPU scheduling information, Memory-management information,
accounting information, I/O status information and so on.

Fig: Process Control Block


➢ Process state: The state may be new, ready, running, waiting, halted, and so on.
➢ Program counter: The counter indicates the address of the next instruction to be executed for this
process.
➢ CPU registers: The registers vary in number and type, depending on the computer architecture. They
include accumulators, index registers, stack pointers, and general-purpose registers, plus any
condition-code information. Along with the program counter, this state information must be saved
when an interrupt occurs, to allow the process to be continued correctly afterward.
➢ CPU-scheduling information: This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
➢ Memory-management information: This information may include such information as the value of
the base and limit registers, the page tables, or the segment tables, depending on the memory system
used by the operating system.
➢ Accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
➢ I/O status information: This information includes the list of I/O devices allocated to the process, a
list of open files, and so on.
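The fields listed above can be pictured as a single record per process. The sketch below is a simplified, hypothetical PCB (field names and types are illustrative; a real kernel's PCB, such as Linux's `task_struct`, is far larger):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified sketch of a process control block's contents."""
    pid: int                                  # unique process identifier
    state: str = "new"                        # new/ready/running/waiting/terminated
    program_counter: int = 0                  # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                         # CPU-scheduling information
    base: int = 0                             # memory management: base register
    limit: int = 0                            # memory management: limit register
    cpu_time_used: float = 0.0                # accounting information
    open_files: list = field(default_factory=list)   # I/O status information
```

On a context switch, the kernel would save the running process's program counter and registers into its PCB and reload those of the next process.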

Fig: Diagram showing CPU switch from process to process


Process Scheduling
Scheduling Queue
➢ As processes enter the system, they are put into a job queue, which consists of all processes in the
system.
➢ The processes that are residing in main memory and are ready and waiting to execute are kept on a
list called the ready queue.
➢ Set of processes waiting for an I/O device is called a device queue. Each device has its own device
queue.
➢ Processes migrate among the various queues

Fig: The ready queue and various device queues


A common representation of process scheduling is a queuing diagram, such as that in below
figure. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a
set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the
flow of processes in the system.

Fig: Queuing Diagram representation of Process Scheduling


A new process is initially put in the ready queue. It waits there until it is selected for execution, or
is dispatched. Once the process is allocated the CPU and is executing, one of several events could occur,
➢ The process could issue an I/O request and then be placed in an I/O queue.
➢ The process could create a new sub process and wait for the sub process’s termination.
➢ The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in
the ready queue.
Schedulers
The operating system must select, for scheduling purposes, processes from different queues in
some fashion. The selection process is carried out by the appropriate scheduler.
• Long-term scheduler
• Short-term scheduler
• Medium-term scheduler
Short-term Scheduler
➢ The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to
execute and allocates the CPU to one of them.
➢ The short-term scheduler must select a new process for the CPU frequently. A process may execute
for only a few milliseconds before waiting for an I/O request.
➢ Often, the short-term scheduler executes at least once every 100 milliseconds. Because of the short
time between executions, the short-term scheduler must be fast. If it takes 10 milliseconds to decide
to execute a process for 100 milliseconds, then 10/(100 + 10) ≈ 9 percent of the CPU is being used
(wasted) simply for scheduling the work.
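The overhead figure above follows from a simple ratio, checked here with a small helper (the function name is illustrative):

```python
def scheduling_overhead(decide_ms: float, run_ms: float) -> float:
    """Fraction of CPU time spent deciding which job to run,
    rather than running jobs: decide / (run + decide)."""
    return decide_ms / (run_ms + decide_ms)

# 10 ms to choose a process that then runs for 100 ms:
# 10 / (100 + 10) = 0.0909..., i.e. about 9% pure scheduling overhead.
```

This is why the decision time of the short-term scheduler must be small relative to the time slice it hands out.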
Long-term Scheduler
➢ The long-term scheduler, or job scheduler, selects processes from this pool and loads them into
memory for execution.
➢ The long-term scheduler executes much less frequently; minutes may separate the creation of one
new process and the next. The long-term scheduler controls the degree of multiprogramming.
➢ Because of the longer interval between executions, the long-term scheduler can afford to take more
time to decide which process should be selected for execution.
➢ In general, most processes can be described as either I/ O bound or CPU bound. An I/O bound
process is one that spends more of its time doing I/O than it spends doing computations. A CPU-
bound process, in contrast, generates I/O requests infrequently, using more of its time doing
computations. It is important that the long-term scheduler select a good process mix of I/O-bound and
CPU-bound processes.
➢ If all processes are I/O bound, the ready queue will almost always be empty, and the short-term
scheduler will have little to do. If all processes are CPU bound, the I/O waiting queue will almost
always be empty, devices will go unused, and again the system will be unbalanced.
Medium-term Scheduler
➢ On some systems, the long-term scheduler may be absent or minimal. Some operating systems, such
as time-sharing systems, may introduce an additional, intermediate level of scheduling called
medium-term schedulers.
➢ The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove
processes from memory (and from active contention for the CPU) and thus reduce the degree of multi
programming. Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in,
by the medium-term scheduler. Swapping may be necessary to improve the process mix or because a
change in memory requirements has overcommitted available memory, requiring memory to be freed
up.

Fig: Addition of medium-term scheduling to the queuing diagram


Context Switch
➢ Switching the CPU to another process requires performing a state save of the current process and a
state restore of a different process. This task is known as a context switch.
➢ When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the
saved context of the new process scheduled to run.
➢ Context-switch time is pure overhead, because the system does no useful work while switching. Its
speed varies from machine to machine, depending on the memory speed, the number of registers that
must be copied, and the existence of special instructions.
➢ Typical speeds are a few milliseconds.
➢ Context-switch times are highly dependent on hardware support. For instance, some processors (such
as the Sun UltraSPARC) provide multiple sets of registers. A context switch here simply requires
changing the pointer to the current register set.
Operations of Processes
The processes in most systems can execute concurrently, and they may be created and deleted
dynamically. Thus, these systems must provide a mechanism for process creation and termination.
Process Creation
➢ A process may create several new processes, via a create-process system call, during the course of
execution.
➢ The creating process is called a parent process, and the new processes are called the children of that
process. Each of these new processes may in turn create other processes, forming a tree of processes.
➢ Most operating systems identify processes according to a unique process identifier (or pid), which is
typically an integer number.
➢ In general, a process will need certain resources (CPU time, memory, files, I/O devices) to
accomplish its task.
• Parent and children share all resources.
• Children share subset of parent’s resources.
• Parent and child share no resources.
➢ When a process creates a new process, two possibilities exist in terms of execution.
• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated.
➢ There are also two possibilities in terms of the address space of the new process.
• The child process is a duplicate of the parent process (it has the same program and data as the
parent).
• The child process has a new program loaded into it.
➢ To illustrate these differences, let's first consider the UNIX operating system. Three system calls are
used in the mechanism of process creation. A system call is nothing but a set of routines that
supports the activities of the kernel.
• fork system call creates new process
• exec system call used after a fork to replace the process’ memory space with a new program
• The parent can then create more children; or, if it has nothing else to do while the child runs, it
can issue a wait( ) system call to move itself off the ready queue until the termination of the child.

Fig: Process creation using fork( ) System Call
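The fork/wait mechanism described above can be demonstrated directly from Python on a POSIX system (`os.fork` is a thin wrapper over the UNIX system call; the function name `spawn_child` and the exit code 7 are just illustrative choices):

```python
import os

def spawn_child() -> int:
    """Create a child with fork(); the parent waits for it and
    returns the child's exit status."""
    pid = os.fork()          # returns 0 in the child, the child's pid in the parent
    if pid == 0:
        # Child: at this point it could call os.execvp() to replace its
        # memory space with a new program; here it simply terminates
        # with a known exit status.
        os._exit(7)
    # Parent: wait() moves it off the ready queue until the child terminates.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

After `fork()`, the child is a duplicate of the parent (same program and data); a subsequent `exec` call would load a new program into it, covering the second address-space possibility listed above.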


Process Termination
➢ A process terminates when it finishes executing its final statement and asks the operating system to
delete it by using the exit( ) system call.
➢ All the resources of the process - including physical and virtual memory, open files, and I/O buffers -
are de-allocated by the operating system.
➢ Termination can occur in other circumstances as well: a process can cause the termination of another
process via an appropriate system call.
➢ Usually, such a system call can be invoked only by the parent of the process that is to be terminated.
Otherwise, users could arbitrarily kill each other's jobs.
A parent may terminate the execution of one of its children for a variety of reasons, such as these
• The child has exceeded its usage of some of the resources that it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
If a process terminates (either normally or abnormally), then all its children must also be terminated.
This phenomenon, referred to as cascading termination, is normally initiated by the operating system.

Inter Process Communication:


➢ Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.
➢ A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.
➢ A process is cooperating if it can affect or be affected by the other processes executing in the system.
Clearly, any process that shares data with other processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation.
• Information sharing: Since several users may be interested in the same piece of information (for
instance, a shared file), we must provide an environment to allow concurrent access to such
information.
• Computation speedup: If we want a particular task to run faster, we must break it into subtasks,
each of which will be executing in parallel with the others. Notice that such a speedup can be
achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
• Modularity: We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
• Convenience: Even an individual user may work on many tasks at the same time. For instance, a
user may be editing, printing, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to
exchange data and information. There are two fundamental models of interprocess communication.
• Shared Memory: In the shared-memory model, a region of memory that is shared by
cooperating processes is established. Processes can then exchange information by reading and
writing data to the shared region.
• Message Passing: In the message passing model, communication takes place by means of
messages exchanged between the cooperating processes.

Fig: Communication Models, 1) Message Passing 2) Shared Memory


Shared Memory Systems
➢ Inter-process communication using shared memory requires communicating processes to establish a
region of shared memory. Typically, a shared-memory region resides in the address space of the
process creating the shared memory segment.
➢ Other processes that wish to communicate using this shared memory segment must attach it to their
address space.
➢ Normally, the operating system prevents one process from accessing another process's memory;
shared memory requires that two or more processes agree to remove this restriction. They can then
exchange information by reading and writing data in the shared areas.
➢ The form of the data and the location are determined by these processes and are not under the
operating system's control. The processes are also responsible for ensuring that they are not writing to
the same location simultaneously.
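A minimal sketch of this model, using Python's `multiprocessing.shared_memory` on a POSIX system: one process creates the region, a second attaches to it by name and writes, and the creator reads the result. The function names are illustrative, and synchronization is sidestepped here by joining the writer before reading (real cooperating processes would need locks or similar to avoid writing the same location simultaneously):

```python
from multiprocessing import Process, shared_memory

def writer(name: str) -> None:
    """Attach to an existing shared segment by name and write into it."""
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def demo_shared_memory() -> bytes:
    # The creating process establishes the shared-memory region.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()                 # writer has finished; safe to read
        return bytes(shm.buf[:5])
    finally:
        shm.close()
        shm.unlink()             # remove the segment from the system
```

Note that, as the text says, the format of the data in the region is entirely up to the processes; the operating system only provides the raw bytes.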
Message Passing System:
➢ The scheme requires cooperating processes to communicate with each other via a message-
passing facility.
➢ Message passing provides a mechanism to allow processes to communicate and to synchronize their
actions without sharing the same address space and is particularly useful in a distributed environment,
where the communicating processes may reside on different computers connected by a network.
➢ A message-passing facility provides at least two operations: send(message) and receive(message).
➢ Messages sent by a process can be of either fixed or variable size.
➢ If only fixed-sized messages can be sent, the system-level implementation is straightforward but
programming is more difficult.
➢ Variable-sized messages require a more complex system-level implementation, but the programming
task becomes simpler.
➢ If processes P and Q want to communicate, they must send messages to and receive messages from
each other; a communication link must exist between them. This link can be implemented in a variety
of ways.
There are several methods for logically implementing a link and the send() and receive() operations,
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering
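The send(message)/receive(message) pair can be sketched with Python's `multiprocessing.Pipe`, which provides a two-ended communication link between processes (assuming a POSIX system; the names `child_task` and `demo_message_passing` are illustrative):

```python
from multiprocessing import Process, Pipe

def child_task(conn) -> None:
    """Process Q: receive a message, send back a reply, close the link."""
    msg = conn.recv()                 # receive(message)
    conn.send("echo: " + msg)         # send(message)
    conn.close()

def demo_message_passing() -> str:
    parent_end, child_end = Pipe()    # the communication link between P and Q
    q = Process(target=child_task, args=(child_end,))
    q.start()
    parent_end.send("ping")           # P sends a message to Q
    reply = parent_end.recv()         # P blocks until Q's reply arrives
    q.join()
    return reply
```

Here `recv()` blocks until a message is available, which corresponds to synchronous (blocking) receive in the classification above; no address space is shared between the two processes.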
