OS Lecture 3 (Process Management)
Process State
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. A process may be in one of the following states:
New. The process is being created.
Ready. The process is waiting to be assigned to a processor.
Running. Instructions are being executed.
Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Terminated. The process has finished execution.
It is important to realize that only one process can be running on any processor core at any instant.
Many processes may be ready and waiting, however. The state diagram corresponding to these
states is presented in the figure below.
The transition from WAITING to READY is handled by the Process Scheduler and is
initiated by a signal from the I/O device manager that the I/O request has been
satisfied and the job can continue. In the case of a page fetch, the page interrupt
handler will signal that the page is now in memory and the process can be placed on
the READY queue.
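A rough sketch of these states and of the WAITING-to-READY transition in C is given below. The state names mirror the list above, but the handle_io_completion function and the process structure are made up for this example only:

    #include <stdio.h>

    /* The five process states described above. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    typedef struct {
        int pid;
        proc_state state;
    } process;

    /* Called (conceptually) when the I/O device manager signals that the
       I/O request has been satisfied and the job can continue. */
    void handle_io_completion(process *p) {
        if (p->state == WAITING)
            p->state = READY;   /* place the process back on the READY queue */
    }

    int main(void) {
        process p = { 1, WAITING };   /* blocked on an I/O request */
        handle_io_completion(&p);
        printf("process %d is now READY (state %d)\n", p.pid, p.state);
        return 0;
    }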
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
Accounting. Contains information used mainly for billing purposes and performance
measurement. It indicates what kind of resources the job used and for how long.
Typical charges include:
1. Amount of CPU time used from beginning to end of its execution
2. Total time the job was in the system until it exited.
3. Main storage occupancy: how long the job stayed in memory until it finished
execution. This is usually a combination of time and space used; for example, in
a paging system it may be recorded in units of page-seconds.
4. Secondary storage used during execution. This again is recorded as a
combination of time and space used.
5. System programs used such as compilers, editors, or utilities.
6. Number and type of I/O operations, including I/O transmission time, which
includes utilization of channels, control units, and devices.
7. Time spent waiting for I/O completion.
8. Number of input records read (specifically those entered on-line or coming from
optical scanners, card readers, or other input devices), and number of
output records written (specifically those sent to the line printer). This last
one distinguishes between secondary storage devices and typical I/O devices.
Process Control Block (summary)
Contains information associated with each process:
Process State - e.g. new, ready, running, etc.
Process Number - Process ID
Program Counter - address of next instruction to be executed
CPU registers - general purpose registers, stack pointer, etc.
CPU scheduling information - process priority, scheduling queue pointers
Memory Management information - base/limit information
Accounting information - time limits, process number
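The PCB summarized above might be sketched in C roughly as follows; the field names and sizes are illustrative assumptions, and a real kernel structure (such as Linux's task_struct) holds far more:

    /* Illustrative process control block; field names and widths are assumptions
       chosen to match the summary above, not any real kernel's layout. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    typedef struct pcb {
        proc_state    state;            /* new, ready, running, ... */
        int           pid;              /* process number / ID */
        unsigned long program_counter;  /* address of next instruction */
        unsigned long registers[16];    /* general-purpose registers, stack pointer */
        int           priority;         /* CPU scheduling information */
        unsigned long base, limit;      /* memory management information */
        unsigned long cpu_time_used;    /* accounting: CPU time, time limits, ... */
        struct pcb   *next;             /* link for the queue this PCB sits on */
    } pcb;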
1. there is a finite number of resources (such as disk drives, printers, and tape drives);
2. some resources, once they're allocated, can't be shared with another job (such as
printers); and
3. some resources require operator intervention; that is, they can't be reassigned
automatically from job to job (such as tape drives).
What's a “good” process scheduling policy? There are several criteria that come to mind, but
notice in the list below that some of them contradict each other:
6. Ensure fairness for all jobs by giving everyone an equal amount of CPU and I/O time.
This could be done by not giving special treatment to any job, regardless of its
processing characteristics or priority.
As we can see from this list, if the system favors one type of user then it hurts another or
doesn't efficiently use its resources. The final decision rests with the system designer, who must
determine which criteria are most important for that specific system. For example, you might
decide to maximize CPU utilization while minimizing response time and balancing the use of
all system components through a mix of I/O-bound and CPU-bound jobs. So you would select
the scheduling policy that most closely satisfies your criteria.
Although the Job Scheduler selects jobs to ensure that the READY and I/O queues remain
balanced, there are instances when a job claims the CPU for a very long time before issuing an
I/O request. If I/O requests are being satisfied (this is done by an I/O controller and will be
discussed later), this extensive use of the CPU will build up the READY queue while emptying
out the I/O queues, which creates an unacceptable imbalance in the system.
To solve this problem the Process Scheduler often uses a timing mechanism and periodically
interrupts running processes when a predetermined slice of time has expired. When that
happens, the scheduler suspends all activity on the currently running job and reschedules
it into the READY queue; it will be continued later. The CPU is now allocated to another job that
runs until one of three things happens: the timer goes off, the job issues an I/O command, or
the job is finished. Then the job moves to the READY queue, the WAIT queue, or the FINISHED
queue, respectively. An I/O request is called a "natural wait" in multiprogramming
environments (it allows the processor to be allocated to another job).
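A toy sketch of this time-slice mechanism in C is given below. The quantum value and the job mix are invented for illustration, and a real Process Scheduler is driven by hardware timer interrupts rather than a simple loop:

    #include <stdio.h>

    #define QUANTUM 4   /* predetermined time slice (illustrative units) */

    int main(void) {
        int remaining[3] = { 10, 3, 6 };   /* CPU time each job still needs */
        int done = 0, job = 0;

        while (done < 3) {
            if (remaining[job] > 0) {
                int run = remaining[job] < QUANTUM ? remaining[job] : QUANTUM;
                remaining[job] -= run;      /* the job holds the CPU for one slice */
                if (remaining[job] == 0) {
                    printf("job %d finished\n", job);
                    done++;
                } else {
                    printf("job %d preempted, back to the READY queue\n", job);
                }
            }
            job = (job + 1) % 3;            /* round robin: the next job gets the CPU */
        }
        return 0;
    }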
A scheduling strategy that interrupts the processing of a job and transfers the CPU to another
job is called a preemptive scheduling policy, and it is widely used in time-sharing environments.
The alternative, of course, is a nonpreemptive scheduling policy, which functions without
external interrupts (interrupts external to the job). Therefore once a job captures the processor
and begins execution, it remains in the RUNNING state uninterrupted until it issues an I/O
request (natural wait) or it is finished (with exceptions made for infinite loops, which are
interrupted by both preemptive and nonpreemptive policies).
The OS maintains a separate queue for each of the process states and PCBs of all processes
in the same execution state are placed in the same queue. When the state of a process is
changed, its PCB is unlinked from its current queue and moved to its new state queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready queue and the run queue, which can hold
only one entry per processor core on the system; in the diagram above, the run queue has been merged
with the CPU.
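A simplified sketch of these per-state queues in C follows, using singly linked lists of PCBs; the names and structure are illustrative, not how any particular kernel organizes its queues:

    #include <stddef.h>

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED, NUM_STATES } proc_state;

    typedef struct pcb {
        int pid;
        proc_state state;
        struct pcb *next;
    } pcb;

    /* One FIFO queue head per process state. */
    static pcb *queue[NUM_STATES];

    /* Unlink a PCB from its current state queue and append it to its new one. */
    void change_state(pcb *p, proc_state new_state) {
        pcb **q = &queue[p->state];
        while (*q && *q != p)           /* find p in its current queue */
            q = &(*q)->next;
        if (*q) *q = p->next;           /* unlink it */

        p->state = new_state;
        p->next = NULL;
        q = &queue[new_state];
        while (*q)                      /* append to the tail of the new queue */
            q = &(*q)->next;
        *q = p;
    }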
By saving the state of one process and restoring the state of another, a context switch enables
multiple processes to share a single CPU. Context switching is an essential feature of a
multitasking operating system.
Context switch time is pure overhead, because the system does no useful work while switching.
Switching speed varies from machine to machine, depending on the memory speed, the number of
registers that must be copied, and the existence of special instructions (such as a single instruction
to load or store all registers). A typical speed is a few microseconds.
Context switches are computationally expensive, since register and memory state must be saved and
restored; how long a switch takes depends on hardware support.
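The POSIX <ucontext.h> calls give a user-level feel for the save/restore idea. This is only an analogy: a kernel switches between processes on its own stacks, usually in assembly, so the sketch below is not an OS implementation:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    static void task(void) {
        printf("task: running after the context switch\n");
        /* returning resumes main_ctx because of uc_link below */
    }

    int main(void) {
        char stack[16384];                  /* illustrative stack for the task */

        getcontext(&task_ctx);              /* start from the current context */
        task_ctx.uc_stack.ss_sp = stack;
        task_ctx.uc_stack.ss_size = sizeof stack;
        task_ctx.uc_link = &main_ctx;       /* where to resume when task returns */
        makecontext(&task_ctx, task, 0);

        printf("main: switching to task\n");
        swapcontext(&main_ctx, &task_ctx);  /* save main's registers, load task's */
        printf("main: resumed after task finished\n");
        return 0;
    }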
Operations on Processes
Process creation
There are four principal reasons that cause processes to be created:
i) System initialization – processes created at boot time, “background processes”, also called
daemon processes. (daemon processes: processes that stay in the background to handle
some activity such as e-mail, Web pages, news, printing, and so on)
ii) Execution of a process creation system call by a running process - a running process will issue
system calls to create one or more new processes to help it do its job.
iii) A user request to create a new process – a user can start a program (to be executed) by
(double) clicking an icon or typing a command.
iv) Initiation of a batch job – process creation also applies to batch systems (found on large
mainframes). Here users can submit batch jobs to the system (possibly remotely). When the
operating system decides that it has the resources to run another job, it creates a new
process and runs the next job from the input queue.
Q: What are the actions performed by the operating system when a new process is created?
Answer: OS performs the following actions when a new process is created:
1. Create a process control block (PCB) for the process
2. Assign process ID and priority
3. Allocate memory and other resources to the process
4. Set up the process environment (load the process code into memory, place the process on the ready queue, …)
5. Initialize resource accounting information for the process.
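As a concrete illustration of reason (ii) above and of the actions just listed, here is a minimal POSIX sketch in C: the parent issues fork() to create a child (new PCB, new PID), and the child replaces its program image with exec. The choice of the ls command is arbitrary:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();             /* create a new process (new PCB, new PID) */

        if (pid < 0) {                  /* fork failed */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {          /* child: load a new program image */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");           /* only reached if exec fails */
            _exit(EXIT_FAILURE);
        } else {                        /* parent: wait for the child to terminate */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }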
Cooperating Processes
- The processes executing in the operating system may be either independent processes or
cooperating processes.
- A process is cooperating if it can affect or be affected by the other processes executing in the
system.
- There are several reasons for providing an environment that allows process cooperation:
i) Information sharing – E.g.: a shared file (we must provide an environment to allow
concurrent access to such information)
ii) Computation speed-up – We must break a particular task into subtasks, each of which will
execute in parallel with the others.